I keep my dotfiles in a git repo and just do a `git pull` to update them. That could definitely be a cron job if you needed.
SSH keys are a little trickier. I’d like to tell you I have a unique key for each of my desktop machines since that would be best practice, but that’s not the case. Instead I have a Syncthing shared folder. When I get around to cleaning that up, I’ll probably do just that and keep an `authorized_keys` and `known_hosts` file in git so I can pull them to the hosts that need them, with a cron job to keep them updated.
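If I ever get around to it, the cron side would look something like this — the repo path, schedule, and file layout are just placeholder assumptions:

```
# Crontab entry: pull the dotfiles repo nightly and refresh the SSH files.
# Assumes $HOME/dotfiles is a clone with authorized_keys and known_hosts at its root.
0 2 * * * cd $HOME/dotfiles && git pull --quiet && cp authorized_keys known_hosts $HOME/.ssh/ && chmod 600 $HOME/.ssh/authorized_keys
```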
I have a couple of APC and Tripp Lite rack-mount UPSes as well as a couple of CyberPower desktop models. Of the bunch, I’m most happy with the Tripp Lites. I’ll check the model and add that in.
On the crazy low-scale end, I have a no-name dash cam that I found for $5 on a tchotchkes table at my local Chinese takeout place. It works perfectly with my Linux desktop—both reading from the SD card and streaming directly via USB. Not at all what you’re looking for, but it makes me think that if this random junk works, more mainstream devices probably do too.
The thing that comes to mind is setting up a block device as an iSCSI target on your NAS. That would present the storage to you as though it’s another hard drive that you can format and map in Windows. Then you can save VMs there as though it were directly connected.
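If your NAS gives you a Linux shell, the rough shape of it with targetcli would be something like this — the backing file, size, and IQNs are all made-up placeholders, and most NAS distros expose the same thing through their web UI instead:

```
# Inside targetcli on the NAS: create a file-backed LUN and export it over iSCSI.
sudo targetcli
/> backstores/fileio create vmstore /tank/iscsi/vmstore.img 500G
/> iscsi/ create iqn.2024-01.lan.nas:vmstore
/> iscsi/iqn.2024-01.lan.nas:vmstore/tpg1/luns create /backstores/fileio/vmstore
/> iscsi/iqn.2024-01.lan.nas:vmstore/tpg1/acls create iqn.2024-01.lan.desktop:init
/> saveconfig
```

On the Windows side you’d then point the built-in iSCSI Initiator at the NAS and format the new disk in Disk Management.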
That’s a bummer. Might be worth running an iperf test between your machine and your dad’s just to get a baseline speed. Could be that something between your two networks is routing slowly.
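Assuming iperf3 is installed on both ends (and its default port 5201 can get through), the test is just:

```
# On your dad's machine (server side):
iperf3 -s

# On your machine (client side); the hostname is a placeholder:
iperf3 -c dads-pc.example.com -t 30
```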
Do you want an entire directory tree uploaded? If that’s the case, you could use Syncthing to share the source directory directly. Then your dad wouldn’t have to move anything. Add in something like Tailscale or ZeroTier and you could control all the Syncthing settings from the web UI.
Not a symlink, but you can add `source /path/to/aliases` to your .bashrc file to load them from another file. I do that and keep all of my dotfiles in a git repo.
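A minimal version of that, with the aliases path as a placeholder:

```
# In ~/.bashrc: pull aliases in from a separate (git-tracked) file if it exists.
if [ -f "$HOME/dotfiles/aliases" ]; then
    source "$HOME/dotfiles/aliases"
fi
```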
How about an alternate route? If transferring information between computers is the goal, you could skip the external drive altogether and put syncthing on both machines. Then you could just share the appropriate directories between the two without the go-between.
I think it’s just a matter of getting used to it. I had the same issue at first and the more I used the command line, the more I started to prefer it to GUI apps for certain tasks.
A couple things that I use all the time (quick demo after the list):

- Tab completion is incredible.
- `cd -` goes back to the last directory you were in (useful for bouncing back and forth between two locations).
- `!$` expands to the last argument of the previous command. So if you `ls ~/Downloads` and then decide you want to go there, you can `cd !$`.
- The `:h` modifier strips the last component off a path. So I can do `vim /etc/network/interfaces` and then `cd !$:h` will take me to `/etc/network`.
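Putting the last two together in a quick session (paths are just examples):

```
$ ls ~/Downloads
$ cd !$        # expands to: cd ~/Downloads
$ vim /etc/network/interfaces
$ cd !$:h      # expands to: cd /etc/network
```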
This is excellent timing for me. I was just taking a break from working on setting up whisper.cpp with a web front end to transcribe interviews. This is a much nicer package than I ever had a chance of pulling together. Nice work!
Awesome. I haven’t had the chance to play with one yet, but I’ve been eying them and have heard nothing but good reports. Please do let us all know how it goes.
Don’t have much experience with Synology, let alone replacing their boards, but I wonder if something like the ZimaBoard (https://www.zimaboard.com) might do the trick. It’s x86-based and has a PCIe slot.
Another option that’s pretty much perfect as long as you don’t need to provide remote support for Macs is Remotely (https://github.com/immense/Remotely). You can self-host it, and it works kind of like TeamViewer, so it’s pretty simple from the client standpoint.
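The self-hosted deployment is a Docker container; this sketch is from memory, so treat the image name, port, and data path as assumptions to verify against the repo’s README:

```
# Hypothetical Remotely deployment — check the README for the current
# image tag, port, and volume mount before using.
docker run -d --name remotely \
  --restart unless-stopped \
  -p 5000:5000 \
  -v /var/www/remotely:/app/AppData \
  immybot/remotely:latest
```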
I’m a big fan of TrueNAS and Proxmox and I think OMV will be great for you.
In the order you asked:
I think OMV is a decent choice, but there isn’t really a bad choice, just better fits for personal tastes.
The upside compared to vanilla Debian or Ubuntu is a solid web UI for management (though you could get that in the form of Cockpit) and a coherent overall system philosophy. The downside is less flexibility: any system someone else makes locks you into doing things their way.
If you don’t have a desire to run VMs, set up clusters, or have ZFS on root by default, you won’t be missing anything.