
Posts 35 · Comments 563 · Joined 2 yr. ago

  • I see what you mean. Basically sync on your own...

  • I want to try this...

  • Good point. I've been using it for years, but I'm trying to get rid of NextCloud, which, while an amazing tool, is overkill for my use case.

    Migrated to a WebDAV server, a Card/CalDAV server and Joplin. So far so good, but I miss the ease of accessing notes from the web too, for a quick glance when on a PC (for example, to copy & paste some stuff) without installing the desktop app (a work PC or a friend's PC).

  • I use restic. Nice, simple, and it works. Syncthing keeps files synced between phone and server; restic handles the server backups.

    SFTPGo is a very interesting backend + web interface if you're looking for something to access/copy/back up your files remotely.
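    The restic part boils down to a handful of commands (repository path and folders here are just examples, not my actual layout):

    ```shell
    # One-time repository setup (restic will ask for a password):
    restic -r /srv/backup/repo init

    # Incremental, deduplicated snapshot of a folder:
    restic -r /srv/backup/repo backup /home/me/data

    # See what you have, then thin out old snapshots:
    restic -r /srv/backup/repo snapshots
    restic -r /srv/backup/repo forget --keep-daily 7 --keep-weekly 4 --prune
    ```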

  • In those cases I would have used the sources which are available...

    Not having a binary release (Docker doesn't count, for practical reasons, for my goal) hinders bare-metal installation, as the biggest limiting factor is building the sources, or rather having proper instructions to build them.

  • I am playing with SFTPGo; while not a backup solution, it's a great backend supporting SFTP, WebDAV and much more that you can pair with something on the client side.

    Currently using Syncthing, but planning to switch, since that is not a backup tool.

  • Yes, I can pull them out, but upgrading would become a mess over time.

    If they also released binaries, I could write my install/upgrade scripts and voilà, bare metal made easy.

    I already do this for a few tools not available through the Gentoo package manager.
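    A minimal sketch of the version check such scripts share (tool name, paths and versions are placeholders, not any real project's layout):

    ```shell
    #!/bin/sh
    set -eu

    # Succeeds when LATEST ($2) is strictly newer than CURRENT ($1);
    # sort -V compares version strings component by component.
    needs_update() {
        [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$2" ]
    }

    # Each release unpacks into its own directory and a 'current' symlink
    # is flipped over it, so an upgrade is atomic and easy to roll back.
    install_version() { # install_version PREFIX VERSION TARBALL
        mkdir -p "$1/$2"
        tar -xzf "$3" -C "$1/$2"
        ln -sfn "$1/$2" "$1/current"
    }

    needs_update "1.2.3" "1.10.0" && echo "upgrade available"
    ```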

  • Us novices?

    No, it's not that. The point is not whether using a subdomain is easy; you might not have access to one, maybe your setup just looks "ugly" with one, or you simply don't want to use one.

    It's standard practice in all web-based software to allow setting a base URL. Maybe the choice of framework wasn't the best one from this point of view.

    As for Docker: deploying Immich on bare metal should be fairly easy, if they provided binaries to download. The complex part is building it, not deploying it.

    But you gave me an idea: get the binaries from the Docker image... Maybe I will try.

    Once you have the binaries, deploying will be simple. Updating, instead, will be more complex, since you'd have to download a new image and extract it again each time.
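    Pulling them out of the image should be a couple of commands (image name/tag and the in-container path are my guesses from the published image, so double-check them):

    ```shell
    # Create a stopped container from the image, copy the app out, clean up:
    docker create --name immich-tmp ghcr.io/immich-app/immich-server:release
    docker cp immich-tmp:/usr/src/app ./immich-server
    docker rm immich-tmp
    ```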

  • emerge -v qbittorrent

  • Immich is very good for photo backup; I would say that's its best use. Be aware of the limitations above, though.

    I find it very annoying that there is no bare-metal installation path. I will try to make it work in the future, maybe.

  • Self-signed certs need to be allowed explicitly. If the app didn't take those into consideration, there is not much you can do.

    Another point against Immich, I guess, if you need self-signed certs.

    I will try to support Immich actively in the future, even if my free time is really limited nowadays.

  • Why self-sign? Use Let's Encrypt; it's free and works just great. That's what I do.
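    With certbot it's basically a one-liner (the domain is a placeholder, and port 80 must be reachable from the internet for the challenge):

    ```shell
    # Standalone mode spins up a temporary web server for the ACME challenge:
    certbot certonly --standalone -d photos.example.com

    # Certificates end up under /etc/letsencrypt/live/photos.example.com/
    # Renewal can then be done periodically with: certbot renew
    ```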

  • I fully agree...

  • Yes, I mean https://mydomain/immich, like all other self-hosted tools do.

    Auth with just a name, please; why email??? And maybe get that name from already-performed proxy auth. Not many tools do it, but some do; it's a neat feature.

    But Immich requires an EMAIL and will also enforce it, so you cannot fool it with a plain username. I just want consistency with the rest of the world.

    Is Docker enough? Maybe, but bare metal should always be an option. Always. Just write down some basic steps, or publish the build script already in place for releases.

    And for albums, I mean that it should recognize that external libraries already contain albums in the form of subfolders.

    I will open tickets and feature requests, of course; this post is here to share my findings, and your aggressive reply is out of place.

    By the way, I am talking with strangers on the internet, and the objective is to help solve these issues.

  • No, it doesn't. It seems the devs don't care. Anyway, testing Immich now; let's see.

  • I probably need to study Podman; stuff running as root is my main dislike.

    If I only used Docker images created by myself, I would probably be less concerned about losing track of what I am really deploying, but wouldn't that defeat the main advantage of easy deployment?

    Portability is a point I hadn't considered either... But rebuilding a bare-metal server, properly compartmentalized, took me only a few hours, so is that really so important?
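    For reference, rootless Podman in practice looks like this (the image is just an example):

    ```shell
    # No daemon, no root: the container runs in a user namespace owned by
    # your regular user, so nothing inside it maps to root on the host.
    podman run --rm -d --name web -p 8080:80 docker.io/library/nginx:alpine

    # Compare the user inside the container with the mapped host user:
    podman top web user huser

    podman stop web
    ```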

  • Is all this true? It's a perspective I hadn't considered; it feels true, but I don't know if it really is.

  • An 8088-compatible system. It had a NEC V30 CPU, which was a full replacement for a real Intel 8088 but clocked at 8 MHz instead of 4.77. It had 640 KB of RAM and a CGA video card & monitor. I remember playing Eye of the Beholder 2 (I had a 20 MB hard drive) toward the end of its life (after my father bought a mouse, which was a novelty), and it was so slow (like 30 seconds between movements) that in the more difficult combats I had to copy the savegame to a friend's 286...

    I remember the upgrade to MS-DOS 3.2...

    I had both 3.5" and 5.25" floppy drives, but the latter I never really used.