In general, I prefer unprivileged LXC to a full VM unless there’s some specific requirement that countermands that preference (like running an appliance or a non-Linux OS).
What I tend to do is create a new container for each service (unless there’s a related stack). If the service runs on Docker, I’ll install that right inside the container and manage it with docker compose. If you install Docker directly from get.docker.com instead of the built-in packages, it pretty much just works.
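For what it’s worth, the install inside a fresh container is just the upstream convenience script. A minimal sketch, assuming a Debian/Ubuntu template with nesting enabled on the LXC:

    # inside the container, as root
    apt update && apt install -y curl
    curl -fsSL https://get.docker.com -o get-docker.sh
    sh get-docker.sh

    # quick sanity check that the daemon runs fine under unprivileged LXC
    docker run --rm hello-world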
Since each service is in its own container, restoring backups is pretty service-specific. If you wanted some kind of central control plane for docker, you could check out swarm mode.
For me it’s the Mac Finder. It’s always running so (unless it crashes) there’s no delay in opening a file manager window and, more importantly, it has built-in Quick Look and Miller columns. Haven’t managed to find a good-enough implementation of either of those in Linux, so I just work around it.
In my state (Vermont), the Secretary of State has an RSS feed that basically presents the results as an XML file. I’m using that to make some local results spreadsheets. Could be other states have similar things.
Awesome. I’m glad it helps. I’d be a little wary of using the same directory in multiple containers. File systems may or may not behave well with multiple machines writing to them. Not saying anything bad will happen, but do keep an eye out for issues.
I’m making some assumptions, namely that you’re using an unprivileged LXC container and the mount point is a bind mount.
Unprivileged LXC containers shift user ID numbers so that an escape won’t result in root access to the host. The root user (uid 0) in the container is actually uid 100000 from the perspective of the Proxmox host.
What I usually do is set ownership of my bind mounts to that high-numbered ID (so something like chown -R 100000:100000 /path/to/bind/mount) from Proxmox. Then the root user in the container will be able to set whatever permissions you need directly.
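As a concrete sketch (the container ID and paths here are just placeholders):

    # on the Proxmox host: bind-mount a host directory into container 101
    pct set 101 -mp0 /tank/media,mp=/mnt/media

    # make the container's root the owner (uid/gid 0 inside maps to 100000 on the host)
    chown -R 100000:100000 /tank/media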
Since you're interested in this kind of DIY approach, I'd seriously consider thinking the whole process through and writing a simple script for this that runs from your desktop. That will make it trivial to do an automatic backup whenever you're active on the network.
Instead of cron, look into systemd timers and you can fire off your script after, say, one minute of being on your desktop, using a monotonic timer like OnActiveSec=60.
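A rough sketch of what the pair of user units could look like (the unit names and script path are made up):

    # ~/.config/systemd/user/pull-backup.timer
    [Unit]
    Description=Pull backup shortly after login

    [Timer]
    OnActiveSec=60
    Unit=pull-backup.service

    [Install]
    WantedBy=timers.target

    # ~/.config/systemd/user/pull-backup.service
    [Unit]
    Description=Pull backup from the server

    [Service]
    Type=oneshot
    ExecStart=%h/bin/pull-backup.sh

Enable it with systemctl --user enable --now pull-backup.timer and it fires a minute after you log in.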
Thinking through the script in pseudocode, it could look something like:
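(Rough bash sketch; the hostname, paths, and ntfy topic are placeholders.)

    #!/usr/bin/env bash
    set -euo pipefail

    SRC="backupuser@server.lan:/srv/backups/"
    DEST="$HOME/backups/server/"

    mkdir -p "$DEST"

    # pull the latest backup from the server down to the desktop
    if rsync -a --delete "$SRC" "$DEST"; then
        echo "backup ok: $(date)" >> "$HOME/backups/backup.log"
    else
        # backup failed: push a notification via ntfy.sh
        curl -s -d "Backup pull from server.lan failed" https://ntfy.sh/my-backup-topic
    fi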
This would pull the backup from your server to your desktop and, if the backup failed, use a service such as ntfy.sh to notify you of the problem.
I think that would pretty much take care of all of your requirements and if you ever decided to switch systems (like using zfs send/recv instead of rsync), it would be a matter of just altering that one script.
Dokuwiki (dokuwiki.org) is my usual go-to. It’s really simple and stores entries as plain text files, so you can get at them directly in a pinch. Here’s a life lesson: don’t host your documentation on the machine you’re going to be breaking! Learned that the hard way once or twice.
For reverse proxies, I’m a fan of HAProxy. It uses pretty straightforward config files and is incredibly robust.
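Just to give a flavor of the config, a minimal sketch of a name-based reverse proxy (the hostnames and backend addresses are made up):

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend http_in
        bind *:80
        # route requests by Host header
        acl is_wiki hdr(host) -i wiki.example.lan
        use_backend wiki if is_wiki
        default_backend web

    backend wiki
        server wiki1 192.168.1.20:8080 check

    backend web
        server web1 192.168.1.21:80 check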
I can’t speak from direct experience here, but this is exactly the use case I’ve been meaning to spin up mailpiler for: https://www.mailpiler.org/. One of these days that will rise to the top of the priority list.
If you want an image, it doesn’t matter what the underlying file system is. You should be able to use a tool like Clonezilla and get a 1:1 copy. Depending on how you’ve set up partitioning, you could also use sgdisk to set up the proper partitions and zfs send/recv for the data portion of the new drive and then install a boot loader. That’s probably the way I’d go in this instance.
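A rough sketch of that second route (the device names, pool, and dataset are placeholders; your layout will differ):

    # copy the partition table from the old disk to the new one
    sgdisk --backup=table.bin /dev/sda
    sgdisk --load-backup=table.bin /dev/sdb
    sgdisk -G /dev/sdb          # randomize GUIDs so the two disks don't collide

    # replicate the data with a recursive snapshot
    zfs snapshot -r oldpool/data@migrate
    zfs send -R oldpool/data@migrate | zfs recv -F newpool/data

    # then put a boot loader on the new disk (e.g. grub-install, depending on your setup)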
My go-to for this is a plain Debian or Ubuntu container with Cockpit and the 45Drives file sharing plugin. It’s pretty straightforward and works pretty well.
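Setup is about as simple as it gets. A sketch, assuming a Debian/Ubuntu container and that you’ve already added the 45Drives repository per their docs (the plugin package name here is from memory):

    apt update
    apt install -y cockpit cockpit-file-sharing
    systemctl enable --now cockpit.socket

Cockpit listens on port 9090 by default, so you manage it from https://container-ip:9090.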
To amplify RedWeasel’s very good answer, fstab runs as root and, unless you specify otherwise, the share will mount with root as the owner on the local machine. From the perspective of the Samba server it’s the Jellyfin user accessing the files, but on the local machine the local permissions come into play as well. That’s why you can get at the files when you connect to the share from Dolphin in your KDE system: it’s your own user that’s mounting the share locally.
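For example, an fstab line along these lines would make a specific local user the owner of the mount (the share path, credentials file, and uid/gid are placeholders):

    //nas.lan/media  /mnt/media  cifs  credentials=/etc/samba/creds-media,uid=1000,gid=1000,iocharset=utf8  0  0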
You can set maintenance schedules in Uptime Kuma and alerts won’t be sent out during those times. I use that for when my backup routines run each night. That seems like a decent cross-platform workaround.
I administer a handful of FreePBX systems that run pretty smoothly and are relatively friendly to use. Crosstalk Solutions on YouTube has a bunch of videos on the software if you want to get up to speed about how everything works.
Not sure how your stack works together, but sudo will let you run particular commands as a different user and you can be pretty specific with the privileges. For example, you can have a script that’s only allowed to run docker compose -f /path/to/compose.yml restart containername as a user in the docker group. Maybe there’s some docker-specific approach, but this should work with traditional Unix tools and a little scripting.
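A sketch of the sudoers side of that (the user names and paths are placeholders; drop it in a file under /etc/sudoers.d/ and edit it with visudo -f):

    # let 'webapp' restart one specific compose service as 'deploy' (a user in the docker group), no password prompt
    webapp ALL=(deploy) NOPASSWD: /usr/bin/docker compose -f /path/to/compose.yml restart containername

The calling script then runs sudo -u deploy docker compose -f /path/to/compose.yml restart containername and nothing else.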
Cool. That looks right. Have you checked that the bridge is set up properly and that the router doesn’t have anything silly going on for that subnet?
PVE’s network settings are in /etc/network/interfaces and that’s where you can see how the bridge is set up.
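A stock PVE bridge stanza usually looks something like this (the NIC name and addresses are just examples):

    auto lo
    iface lo inet loopback

    iface enp1s0 inet manual

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0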
It might be beneficial to know more about your network. Is this the only subnet or do you have a bunch of VLANs? Can other devices on the subnet ping outbound? Have you looked at the firewall on PVE?