I saw it put really well the other day. Any software tends to have a roughly fixed number of bugs per line of code. With something like Debian, the number of bugs goes down after release because only bugfixes land, while anything constantly moving, like a rolling release, is bound to grow in bugs as less tested, newer software (which generally means more lines of code) is pushed. There are tradeoffs to both approaches, and edge cases of course.
I'd suggest one of the Fedora Atomic installs. Maybe even get a couple of renewed ThinkPads all set up, one with KDE and one with GNOME, and let them play with them for a few days. I was the only engineer in my company who ran Linux, and the boss's only concession was that I carry a Windows PC too when he was on-site with me, so he'd understand what I was doing. He provided a nice one for me, so I never complained.
I second the people who said LineageOS. I am using it right now. I got this Nord phone because I knew they were easy to tinker with; I used it a bit and ended up with a newer Galaxy. Well, after I put Lineage on the Nord, every problem it had went away. Excellent battery life, runs smoothly, weekly security patches if I want, etc. One thing that helped a lot was the "Aurora" app store. It lets you install apps anonymously from the Play Store without requiring Google services. Many of them won't work due to the missing Google services, but a surprising amount of stuff does just fine even if it complains about it.
I had no idea it was standard. I had heard they had issues with it not being able to handle certain constructs, so they were working on getting it to a place where it would perform better. Has this changed? I'm not a Rust person, but I intend to be. I've barely made it a quarter of the way into the book (I just started in the past month and I've been busy), but I have a good background in programming and so far it's been super easy. I'm really enjoying how specific the compiler is, and the binary sizes compared to Go.
No offense taken; we all have different knowledge and backgrounds. I have a general understanding of podman, but now I'm going to go play with it a bit at some point and get more familiar with it.
Docker is Apache 2.0 licensed. It is open source, or at least all of the important parts are; I'm not sure about Docker Desktop. It's partly that I just have a lot of experience with Docker, and partly that it's what is supported in most projects' documentation. The fact that a lot of the Linux Foundation training uses Docker is another reason I've got more experience with it.
As far as what you're talking about, people have been trying for years. The Pirate Bay wanted to develop a new method of being entirely decentralized. Odysee is working on something like blockchain and torrents combined that is very interesting. We have I2P and Tor, which have some of the features you mention. I'd love to see it happen, where the big companies didn't control things.
There is progress, though. https://letsencrypt.org/ is a non-profit, and there are a variety of open source projects using it to automate TLS certificate signing.
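For example, here's a minimal sketch of automating a cert with certbot on a Debian-style box. The domain and webroot path are placeholders; adjust them for your own site:

```sh
# Install certbot (Debian/Ubuntu packaging shown; other distros package it too)
sudo apt install certbot

# Request a cert using the webroot method; example.com and /var/www/html
# are placeholders for your own domain and document root
sudo certbot certonly --webroot -w /var/www/html -d example.com

# certbot ships a systemd timer (or cron job) that renews automatically;
# dry-run it to confirm renewal will work
sudo certbot renew --dry-run
```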
Check out https://www.sigstore.dev/how-it-works and pay special attention to Fulcio and Rekor. It's not for web certs, but it's still a very interesting take on a certificate authority.
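If you want to poke at it, the cosign CLI is the easiest entry point: in keyless mode it fetches a short-lived certificate from Fulcio tied to your OIDC identity and records the signature in Rekor. A rough sketch, assuming cosign 2.x flags (check the docs for your version; the identity and issuer must match however you actually authenticated):

```sh
# Sign a file "keylessly": cosign opens a browser for OIDC login, gets a
# short-lived cert from Fulcio, and logs the signature in Rekor
cosign sign-blob --yes artifact.tar.gz \
  --output-signature artifact.sig \
  --output-certificate artifact.pem

# Verify against the signature, the cert, and the expected identity/issuer
# (you@example.com and the issuer URL below are placeholders)
cosign verify-blob artifact.tar.gz \
  --signature artifact.sig \
  --certificate artifact.pem \
  --certificate-identity you@example.com \
  --certificate-oidc-issuer https://accounts.google.com
```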
There's no technical reason what you are saying couldn't work. It just comes down to how you trust it, and if you can't trust it at all, it doesn't do much good anyway. That's the problem to be solved. You could compromise somewhere in the middle, but then you have to work out what is acceptable. I suppose the level of trust could be configurable, with different nodes earning different levels of trust, and you could configure your accepted levels for DNS or a CA. It's an interesting idea.
gofumpt and gofmt are the best. They're part of the reason that, if I have a choice, I'll code in Go. I heard rumblings that Rust was working towards having rustfmt be a standard crate.
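For anyone who hasn't used them, the day-to-day usage is just a couple of commands:

```sh
# List files whose formatting differs from the standard, then rewrite in place
gofmt -l .
gofmt -w .

# gofumpt is a stricter superset of gofmt with the same flags
go install mvdan.cc/gofumpt@latest
gofumpt -l -w .
```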
Thanks :) Exactly. I do a lot of development and testing in an Alpine Linux container, simply because it has much newer versions of libraries and uses musl libc. If I can get it to compile there and on Debian, I'm in good shape as far as compatibility goes. I used to really enjoy Arch and the rolling updates when I was younger, but I've gotten to where I don't want to mess with things constantly changing.
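A rough sketch of that check with throwaway containers; the `make` step is just a stand-in for whatever your project's actual build command is:

```sh
# Build against musl libc on Alpine
docker run --rm -v "$PWD":/src -w /src alpine:latest \
  sh -c 'apk add --no-cache build-base && make'

# Then against glibc on Debian
docker run --rm -v "$PWD":/src -w /src debian:12 \
  sh -c 'apt-get update && apt-get install -y build-essential && make'
```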
I use a Python venv for nearly everything I do in Python, and the way Go is set up makes this extremely easy since it uses a per-user environment anyway.
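For anyone who hasn't tried venv, the whole workflow is only a few commands:

```sh
# Create an isolated environment inside the project
# (on Debian you may need the python3-venv package first)
python3 -m venv .venv

# Activate it and install into it; nothing touches the system Python
source .venv/bin/activate
pip install requests        # requests is just an example package
deactivate                  # drop back to the normal shell when done
```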
I knew that worked for a lot of stuff. That used to be what I'd try first, but I honestly just use a venv for pretty much anything that uses pip nowadays. Still helpful to know there is a package though, thanks! I intend to test it out.
Nice. I might have to clone that setup for fun. What do you use for CI? I've got Jenkins running, but I've been wanting to play with GitLab CI/CD too.
I do a lot of my dev work in Docker containers, simply so I'm in a clean environment. It doesn't hurt for ease of backup either. There's no particular reason not to use Docker; I also wanted to keep the guide kind of brief and simple. The guide I originally read, the one that inspired me, had a lot of things that were very outdated, and as I worked through getting it working on Debian 12 I generally stuck with the source providers' instructions when things weren't already packaged for dpkg or the alternatives were more complex.
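The container side of it is basically one command; something like this, with the image and mount path being whatever fits the project:

```sh
# Disposable Debian 12 shell with the current project mounted at /src;
# --rm throws the container away when you exit, so the environment stays clean
docker run -it --rm -v "$PWD":/src -w /src debian:12 bash
```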
I am currently mulling over writing extensions to this guide and adding links at the bottom, or just extending this one a bit. I'm also thinking about writing guides for other stuff too. I've been helping people on Discord and IRC a bit recently, and some of what I know might be useful to someone.
I don't know everything by any means, far from it, but I've been around since my first BeOS and Slackware installs a long time ago and I've picked up a lot. I worked on developing and deploying pfSense images for a company years ago and have just had a lot of random experience with Linux and the BSDs over the years.
Awesome, it's good to see Bear Blog getting some love. Mostly just to keep it short. I was debating adding another article continuing this one, using nginx for that part. I could add a section to this one, though. Or would you use something other than nginx? I'm open to suggestions. I checked yours out; it's a bit snappier than mine :). What are you running?
Oh, gotcha. It was late when I replied :p. You absolutely get security from the layer of separation you have when hosting remotely. I monitor my home network and have a similar setup, but I don't host anything from here, and I never get attacked or probed at all compared to my remote server. Just having those open ports makes you a target; once a few scanners pick up on you hosting content, you will absolutely start getting attacked. Another benefit is that you don't have to have any passwords on your remote host, just an SSH key. They can brute-force all they want; good luck without a zero-day. You also keep your personal IP address out of people's scope by not hosting from your local network.
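The key-only setup is just your public key plus a couple of sshd_config lines; roughly this (keep a second session open while testing so you don't lock yourself out):

```sh
# Copy your public key to the remote host first (user/host are placeholders)
ssh-copy-id user@your-server

# Then, in /etc/ssh/sshd_config on the server:
#   PasswordAuthentication no
#   PubkeyAuthentication yes
#   PermitRootLogin no

# Reload sshd to apply (the service is "ssh" on Debian, "sshd" on many others)
sudo systemctl reload ssh
```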
I used to run much heavier protection on my home network, but after keeping an eye on all the logs and alerts for a while, I realized I was mostly just wasting RAM and storage space. Sane firewall settings are enough for a typical home, and something like CrowdSec is probably overkill.
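As an example of what I'd call sane settings, a default-deny setup is about all a typical home machine needs. Here's a sketch with ufw; nftables or firewalld work just as well:

```sh
# Deny all inbound, allow all outbound, and only open what you actually use
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh        # only if you actually SSH into this machine
sudo ufw enable
sudo ufw status verbose
```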
Now, if you are hosting stuff, it's a different story. I would actually harden my local network MORE than I did the remote one, since much more of my personal stuff is on my local network. My remote host being compromised would be a mild hassle at most: it backs itself up once a week, and I have my entire site in a private git repo I sync to, so it would take a few minutes to throw up another server. If my home stuff got compromised, a lot more damage could be done.
My site is on a rented server at DigitalOcean. Some providers do more or less to protect you themselves, though. I don't think DigitalOcean does much monitoring or protecting; I've had servers there compromised in the past in ways that would have been caught by my current setup. It can't hurt in any case.
I also run CrowdSec on my home setup, but I don't have any open ports at home and never get alerts. I had Suricata running and plugged into CrowdSec as well, so it would handle blocking for both, but Suricata never saw any action with CrowdSec already blocking malicious activity, so I disabled it to save resources.
They aren't exactly CLI, but I really like Obsidian for taking notes. It's not open source, though. Logseq is good too and is OSS. Both use Markdown for formatting, so if you are familiar with writing pages on GitHub you'll have no trouble; even if not, Markdown is super easy to learn. That, and all of your data stays local and in open formats. I edit my stuff in a terminal anyway.
Just look up Obsidian OSINT on YouTube; you'll find some good stuff on how to use it.
Another thought is to just use Markdown files and a directory structure in a private git repo. You'd be able to interact with it locally, entirely in the terminal, with vim etc., and still have the option of going online to search or organize. You could probably even use a CLI browser for that part if you wanted.
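A minimal sketch of that kind of layout (the directory and file names are just examples):

```sh
# A plain directory tree of Markdown notes, versioned in a private repo
mkdir -p notes/{people,orgs,infra}
cd notes && git init
echo "# Example Org" > orgs/example-org.md   # placeholder note
git add -A && git commit -m "initial notes"

# Everything stays usable from the terminal
vim orgs/example-org.md
grep -ri "example" .        # quick full-text search across all notes
```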