I totally disagree. Git is not hard. The way people learn git is hard. Most developers learn a couple of commands and believe they know git, but they don't. Most teachers teach those same commands plus a few more advanced ones, but that does not help you understand git. Learning commands by rote sucks. It is like a cargo cult: you do something similar to what others do and expect the same result, but you don't understand how it works or why it sometimes does not do what you expect.
To understand git, you don't need to learn commands. The commands are simple, and you can always consult a man page to find out how to do something once you understand how it should work. You only need to learn the core concepts first, but nobody does. The reference git book is "Pro Git", and it explains perfectly how git works, but you need to start reading from the last chapter, "Git Internals". The concepts described there are very simple, yet almost nobody starts learning git with them, and almost nobody teaches them at the beginning of a class. That's why git seems so hard.
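To see how simple those concepts are, you can poke at the object database of any repo with a couple of plumbing commands (output abbreviated; your hashes will differ):

    $ git cat-file -t HEAD            # every ref points to an object; print its type
    commit
    $ git cat-file -p HEAD            # a commit is a tiny text record: tree, parents, author, message
    tree <hash of the root tree>
    parent <hash of the previous commit>
    ...
    $ git cat-file -p 'HEAD^{tree}'   # a tree is just a list of names mapped to blob and tree hashes

Once you see that everything is commits pointing to trees pointing to blobs, commands like rebase and cherry-pick stop looking like magic.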
Sorry, I don't understand what you are talking about. Yes, you can run them in an SSH session. No, you still need to have them installed on the remote machine to do that. And installing diagnostic tools is not only time-consuming; sometimes it is outright impossible if you are already in trouble (and if you are not, why would you need them?).
You have a pre-installed tool and a tool that looks nicer but has to be installed first. When you need it only for a rare task and you administer many machines, it is easier to use what is already there on each of them.
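As a concrete sketch of what I mean (exact names vary by distro, but some equivalent of these is on practically every Linux box):

    # instead of installing htop/iotop/etc., the basics are already there:
    top                  # CPU and memory hogs
    free -m              # memory usage
    df -h                # disk usage
    ss -tlnp             # listening sockets (iproute2 is preinstalled almost everywhere)
    cat /proc/loadavg    # works even when nothing else does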
fpm is not a complete solution. It just creates a package from your files; you still need to build those files in the environment of the distribution where the package is supposed to work, against the same versions of its dependencies. OBS (the Open Build Service) is the best solution I know, but it requires writing packaging scripts compatible with each distro you target. That is quite time-consuming and requires good knowledge of the native packaging tools.
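To illustrate: fpm happily wraps a directory into a .deb in one command, but it knows nothing about how the contents were built (package name, version, and paths here are made up):

    # build a .deb from an already-prepared staging directory
    fpm -s dir -t deb \
        -n myapp -v 1.2.3 \
        --depends libssl3 \
        -C ./staging .

The binaries in ./staging still have to come from a build against that distro's library versions, which is exactly the part fpm does not solve.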
You can also use any CI system that can execute builds in containers with your target distros. This requires a bit more scripting (just a bit), but modern CIs are easier to set up than OBS if you need your own instance. It also lets you keep your favorite VCS and a workflow you are comfortable with.
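A minimal local sketch of the same idea; any CI that can run containers does essentially this, just with its own config syntax (image tags and build.sh are placeholders):

    # run the same build script inside each target distro's container
    for image in debian:12 ubuntu:24.04 fedora:40; do
        docker run --rm -v "$PWD":/src -w /src "$image" ./build.sh
    done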
You don't have to set up your own resolver. It is enough to configure a route to 1.1.1.1 via the WireGuard peer. If you already use the peer as your default gateway, your DNS requests don't leak (I mean, Cloudflare is unable to associate them with your local IP address). To be sure, run traceroute 1.1.1.1 (on a *nix system) or tracert 1.1.1.1 (on Windows); you should see your WG peer's address in the output.
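For example, with wg-quick it is one line in the peer section of wg0.conf (server details are placeholders):

    [Peer]
    PublicKey = <server public key>
    Endpoint = <server address>:51820
    AllowedIPs = 1.1.1.1/32    # wg-quick installs this route; use 0.0.0.0/0 to send everything

    # or, without wg-quick, add the route by hand:
    ip route add 1.1.1.1/32 dev wg0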
A random VPN service cannot determine whether your DNS server is trusted or not; it only checks whether the server is the one provided by that service. When you run your own WG server, such checks are meaningless.
27G is OK. And LVM gives you the ability to resize the volume at any time if you need to, so don't worry about it. Check df -h: if you have less than 10G used and you are not going to install a lot of very heavy packages (e.g. games with large resources; I mean only deb-packaged ones, not Steam etc., which go into /home), it is highly unlikely that you will ever get in trouble because of the size of /.
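And if you ever do run out, growing / online takes a couple of commands, assuming there is free space in the volume group and the filesystem is ext4 (device names here are examples):

    lvextend -L +10G /dev/vgname/root    # grow the logical volume by 10G
    resize2fs /dev/vgname/root           # grow the ext4 filesystem to match
    # or do both in one step:
    lvextend -r -L +10G /dev/vgname/root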
Switch your stack. Try mobile or embedded development. Or dive into systems programming. Pick something that interests you but that you have not tried before.
man ssh_config
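For example, a minimal ~/.ssh/config entry (host name, address, and key file are placeholders):

    Host myserver                        # after this, plain "ssh myserver" works
        HostName 203.0.113.10
        User deploy
        Port 2222
        IdentityFile ~/.ssh/id_ed25519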