Jellyfin has a spot for each library folder to specify a shared network folder, except everything just ignores the shared network folder and has Jellyfin stream it over HTTP(S). Direct streaming should play from the specified network source, or at least be easily configurable to do so, for situations where the files are on a NAS separate from the Docker instance, so you avoid streaming the data from the NAS to the Jellyfin Docker image on a different computer and then back out to the third computer/phone/whatever that is the client. This matters for situations where the NAS has a beefy network connection but the virtualization server has much less, or is sharing among many VMs/Docker containers (e.g. I have 10 gig networking on my NAS and 2.5 gig on my virtualization servers, currently hamstrung to 1 gig while I wait for a 2.5 gig switch to show up). They have the correct setting to do this built right into Jellyfin and yet they snatched defeat from the jaws of victory (a common theme for Jellyfin, unfortunately).
Have you looked at Dockge? I like it way more than Portainer, at least for single-instance use. It works with normal compose files, so it keeps your stuff a lot more portable to change later, and it's by the guy who makes Uptime Kuma.
Just FYI, direct streaming isn't really direct streaming as you may think of it if you have specified Samba shares on your NAS instead of something local to the VM running Jellyfin. It will still pull from the NAS into Jellyfin and then HTTP stream from Jellyfin, which is super annoying.
That's pretty much exactly my story, except I went with fastmail.com and Mullvad for VPN (you really need to test with some script to find your best exit nodes; I forget which one I used ages ago, but it found me a couple of nodes about 1000 km away from my location, in a different country, that I can do nearly a gig through routinely. Maybe it was this script? https://github.com/bastiandoetsch/mullvad-best-server). I went with pCloud for a bit, but Tailscale, and now NetBird, make it kind of irrelevant since it's so easy to get all my devices communicating back to my house file server. I want to like Hetzner so badly, but every time I try it the latency to North America just kills me, and the North American offering was really far away and undeveloped last time I tried it.
Virtualize the machine with Proxmox, use Proxmox Backup Server, and load the VM onto a new system if you get a catastrophic failure on the machine currently running the VM.
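A minimal sketch of the backup half, assuming the PBS datastore has already been added to the Proxmox node as a storage called `pbs` and the VM is ID 100 (both are just placeholder names):

```
# snapshot-mode backup of VM 100 to the PBS-backed storage
vzdump 100 --storage pbs --mode snapshot
```

In practice you'd schedule this from Datacenter -> Backup in the GUI, and after a host dies you point a fresh Proxmox install at the same PBS datastore and restore the backup onto the new node from there.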
Will it let you do rootless NFS mounts into the container? That's the showstopper for me, as that is by far the best way to make this all work within the context of my file storage.
That's what I'm using right now. I am kind of curious whether you are aware of any tiny, apk-based operating systems like Alpine that also have systemd? I want to experiment with quadlets/Podman but don't really want to lose how simple Alpine is to administer and how fast it boots.
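For anyone who hasn't seen quadlets yet, this is roughly what the experiment looks like: a minimal sketch of a rootless unit dropped into ~/.config/containers/systemd/ (the image, port, and NFS path are placeholders I made up), and it's exactly the part that needs systemd, hence the Alpine problem:

```
# ~/.config/containers/systemd/media.container
[Unit]
Description=Example rootless quadlet container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80
# bind in an NFS path that is already mounted on the host
Volume=/mnt/nas/media:/usr/share/nginx/html:ro

[Install]
WantedBy=default.target
```

Then `systemctl --user daemon-reload` followed by `systemctl --user start media.service` brings it up; Podman generates the service unit from the .container file.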
This is what I did too, after self-hosting (including self-hosting AnonAddy) for a while. I really like how it integrates into Bitwarden to give me most of what I liked about AnonAddy as an included feature. I also did it for the same reason: too many A-holes out there that just want to bang on the mail server all day.
I ended up on purelymail.com for my machine-sent email (it's dirt cheap; I think I will be under their minimum and it will cost something like 10 dollars a year for unlimited unique email addresses for my services).
I used to host AnonAddy. I don't have the Docker compose or configs anymore, but I don't remember it being that bad. I stopped a couple of years ago because SimpleLogin became included with my VPN subscription (and then I found Fastmail, which has a similar feature built in, so I ended up canceling SimpleLogin and that VPN and going to Fastmail and Mullvad). I basically just edited their example compose/env files (something roughly like the sketch below) and ran it behind my existing Nginx Proxy Manager setup (that is gone now too; I ended up moving to Traefik, but that's a story for another time).
compose example here:
https://github.com/anonaddy/docker/tree/master/examples/compose
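From memory, the trimmed-down compose looked something like the sketch below. Treat it as a rough outline rather than the project's actual example: the service layout and some variable names are me reconstructing things, so check the repo above for the current env vars and which port the web UI listens on.

```yaml
services:
  anonaddy:
    image: anonaddy/anonaddy:latest
    restart: unless-stopped
    ports:
      - "25:25"                     # inbound SMTP for the aliases
    environment:
      ANONADDY_DOMAIN: example.com  # placeholder domain
      ANONADDY_SECRET: change-me    # long random string
      DB_HOST: db
      DB_DATABASE: anonaddy
      DB_USERNAME: anonaddy
      DB_PASSWORD: change-me-too
      REDIS_HOST: redis
    depends_on:
      - db
      - redis
    # the web UI port got proxied by Nginx Proxy Manager rather than published here

  db:
    image: mariadb:10
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: anonaddy
      MYSQL_USER: anonaddy
      MYSQL_PASSWORD: change-me-too
      MYSQL_RANDOM_ROOT_PASSWORD: "yes"
    volumes:
      - db-data:/var/lib/mysql

  redis:
    image: redis:7-alpine
    restart: unless-stopped

volumes:
  db-data:
```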
It's really easy with Headscale, so I assume it must be really easy with Tailscale too. How I did it: I created a tiny Tailscale VM to advertise the route to the IPs I wanted access to on my internal LAN. Then I shared the NFS export with the IP of that subnet router. Now everything on my Headscale network looks like it's coming from the subnet router and it works no problem. (Just remember you have it set up this way in case you ever expand your userbase, as this is inherently insecure if there is anything connected to your tailnet that you don't want to have full access to your NFS shares.)
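A rough sketch of the two pieces, with made-up addresses (192.168.1.0/24 for the LAN, 192.168.1.50 for the subnet-router VM, and a placeholder Headscale URL). The Headscale-side command for approving the advertised route has changed between versions, so check `headscale --help` on yours:

```
# on the subnet-router VM: join the headscale server and advertise the LAN
tailscale up --login-server=https://headscale.example.com \
  --advertise-routes=192.168.1.0/24

# on the NAS, /etc/exports: export the share only to the subnet router's LAN IP
/mnt/tank/media  192.168.1.50(rw,sync,no_subtree_check)
```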
I really like TrueNAS for a NAS, but I agree with you on running VMs/Docker somewhere else. I ended up keeping TrueNAS for the mass storage (the only thing I run on it is one virtual machine holding Proxmox Backup Server on a zvol). I think the much better home platform for VMs is Proxmox: you get a really nice GUI that makes everything pretty easy, it's Debian under the hood, and with Proxmox Backup Server you can very easily back up your virtual machines. It's also very easy to mount NFS or CIFS shares into Docker containers, so you can keep the bulk data of your Docker environment directly on the NAS, which makes managing backups dead simple.
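For the NFS-into-a-container part, a minimal sketch using Docker's built-in NFS support for named volumes (the NAS IP and dataset path are placeholders):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    volumes:
      - media:/media:ro

volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,nfsvers=4,ro"   # NAS address, assuming NFSv4
      device: ":/mnt/tank/media"            # dataset path on the NAS
```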
I would argue it's the correct idea up to a fairly decently sized business: basically anything where you don't have the budget or the need for super fault-tolerant systems (i.e. where it's OK to very rarely have a 20-minute to one-hour outage in order to save 50k+ in IT hardware costs). You can take the above and go to the next step, a high-availability Proxmox cluster, to further reduce potential downtime before you step into the realm of needing VMware plus very expensive, highly available, fast storage as well. It gets even more true when you start messing around with TrueNAS and different-speed vdevs (e.g. build a super fast NVMe pool with 10-25 gig networking for some applications and a cheaper spinning-rust pool with maybe 10 gig networking for bulk storage). It's also nice that, by putting Proxmox Backup Server on a zvol, you can take advantage of all the benefits of both ZFS replication/snapshotting and offsite copies (a Storj/Wasabi S3 bucket, another TrueNAS server at a different location) for that zvol as well as for the other data you share as datasets.
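TrueNAS drives this from its replication-task GUI, but underneath it boils down to roughly the following (pool, dataset, snapshot, and host names are placeholders):

```
# snapshot the zvol holding the PBS datastore
zfs snapshot tank/pbs-vm@nightly-2

# incremental replication to another box, assuming @nightly-1 already exists there
zfs send -i tank/pbs-vm@nightly-1 tank/pbs-vm@nightly-2 | ssh backup-host zfs recv backuppool/pbs-vm
```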
Get rid of iSCSI. Instead, use TrueNAS SCALE for the NAS and use a zvol on TrueNAS to run a VM of Proxmox Backup Server. Run Proxmox on the other box with local VMs and just back up the VMs to Proxmox Backup Server at a rate you are comfortable with (e.g. once a night). Map NFS shares from TrueNAS directly into any Docker containers you are running on your VMs, map CIFS shares to any Windows VMs, and map NFS shares directly to any Linux things. This is way more resilient, gets local NVMe speeds for the VMs, and still keeps the bulk of your files on the NAS, while also not abusing your 1 gbit Ethernet for VM traffic, just for file transfer (the VM stuff happens at multi-gigabyte speeds on the local NVMe on the Proxmox server).
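For the "NFS directly to any Linux things" bit, it's one fstab line inside the VM (NAS IP and paths are placeholders):

```
# /etc/fstab on the Linux VM: mount the TrueNAS dataset over NFS at boot
192.168.1.10:/mnt/tank/data   /mnt/data   nfs4   defaults,_netdev   0   0
```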