Before your change to Linux
It's a Dell Latitude 5420, with a Broadcom Corp. 58200. Per https://wiki.archlinux.org/title/Laptop/Dell#Latitude, the 5420 is supported with libfprint-2-tod1-broadcom. And of course, I use Arch btw.
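For reference, getting that sensor working might look roughly like this on Arch (assuming an AUR helper like paru; the package name is from the wiki link above):

```bash
# Driver (AUR, still experimental) plus the fprintd frontend:
paru -S libfprint-2-tod1-broadcom fprintd
fprintd-enroll   # enroll a fingerprint for the current user
fprintd-verify   # check that the enrolled print is recognized
```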
The local backups are done hourly, and incrementally. They hold 2+ weeks of backups, which means I can easily roll back package versions, as the normal package cache is cleaned regularly. They also protect against accidentally losing individual files through weird app behaviour, or my own mistakes.
The backups to my workstation are also done hourly, shifted by 15 minutes per device, and also incrementally. They protect against the device itself breaking, ransomware, or some rogue program rm -rf'ing /, which would affect the local backups too (as they're mounted in /backups); but those are mainly for providing a file history, as I said.
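For illustration, that kind of hourly, hard-link-based incremental rotation can be done with rsync; a minimal sketch with assumed paths, not necessarily the exact setup described above:

```bash
#!/bin/bash
# Hypothetical hourly snapshot: unchanged files are hard-linked against the
# previous snapshot, so each run only stores what actually changed.
SRC=/
DEST=/backups
NOW=$(date +%Y-%m-%d_%H00)
rsync -aAX --delete \
    --exclude={"/backups/*","/dev/*","/proc/*","/sys/*","/run/*","/tmp/*"} \
    --link-dest="$DEST/latest" \
    "$SRC" "$DEST/$NOW"
ln -sfn "$DEST/$NOW" "$DEST/latest"   # point "latest" at the new snapshot
```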
As most drives are slower than the 1 Gbps Ethernet, the local backups are just more convenient to access and use than the ones on my workstation, but otherwise exactly the same.
The .tar.xz'd backups are the actual backups, considering they are not easily accessible and are stored externally, needing to be unpacked first.
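Packing a snapshot for external storage is then a one-liner; the paths here are assumptions:

```bash
# Hypothetical: archive the newest snapshot as .tar.xz for offsite storage.
# -h follows the "latest" symlink instead of archiving the link itself.
tar -chJf "backup-$(date +%F).tar.xz" -C /backups latest/
```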
I didn't measure the speeds of a normal SSD vs. the RAID - but it feels faster. Not a valid argument, of course. Either way, I want to use it as RAID 0/unraided for more storage space, so I can keep 2 weeks of backups instead of 5 days (considering it always keeps space for 2 backups, I'd have under 200 GB of space instead of 700+).
The latest hourly backup is 1.3 GB in size, but if an application with a single, big DB is in use, that can quickly shoot up to dozens of GB - relatively big for a homeserver hosting primarily my own stuff plus a few things for my father. Synapse's DB alone is 20 GB, for example. On an uneventful day that adds up to about 31 GB (1.3 GB × 24). With several updates done, meaning dozens of new packages in the cache, that could grow to 70+ GB.
Like nearly all drivers lol
Drivers I needed to pay special attention to:
- NVidia (we all know the official stance on that topic)
- e1000e needs patching because my laptop's NIC somehow reports the wrong NVM checksum
- Some obscure Chinese "USB to DVI-D" adapter
- The fingerprint sensor in my laptop, as it's still experimental
- I have had a lot of bad experiences with paid iOS apps, very few with free apps on Android - and even then, there are dozens of FOSS alternatives
- Even worse
- Depends on which phone you choose; my 200€ Moto does have a pretty bad camera, but pretty good specs overall
If it fails, I will just throw in a new SSD and redo the backup. I sometimes delete everything and redo it anyway, for various reasons. In any case, I usually have copies of all files on the original drive, in the local backup on the device, and in the backup on the workstation. And even if those three should fail - which I will know immediately, since I monitor the systemd job (see the sketch below) - I still have daily backups on two different global hosters as well as the separate NAS.

The only case in which all full backups would be affected is a global destruction of all electronics by solar storms, or a general destruction of Earth, in which case that's the least of my problems. And if the house burns down and I only have the daily backups, potentially losing 24 hours of data, that's also the least of my problems.

Yes, generally RAID 5 is better for backups, but in my case I have multiple copies of the same data at all times, surpassing the 3-2-1 rule (by far: 6-2-2, and soon 6-2-3). As all of my devices are connected via Gigabit, getting backups from e.g. the workstation after the PC (with backups) died is just as fast as getting them from the local backup RAID itself. And RAID 0 is better (in speed) than just slapping the drives together in series.
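Said monitoring doesn't need to be fancy; a trivial sketch, assuming the backups run from a hypothetical backup.service:

```bash
# Quick overview of anything that failed:
systemctl --failed
# Or check the (assumed) unit explicitly and alert on failure:
systemctl is-failed --quiet backup.service && notify-send "Backup failed!"
```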
Because that's what RAID 0 is for: basically adding together storage space, with faster reads and writes. The local backups are basically just there to have earlier versions of (system) files, incrementally every hour, for reference or restoring. In case something goes wrong with the main root NVMe and a backup SSD at the same time (e.g. a trojan wiping everything), I still have exactly the same backups on my "workstation" (a beefier server), also on a RAID 0, of three 1 TB HDDs. And in case the house burns down or something, there are still daily full backups on Google Cloud and Hetzner.
256 GB root NVMe, 1 TB games HDD, 3× 256 GB SSDs as RAID 0 for local backups, 256 GB HDD for data, 256 GB SSD for VM images.
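For reference, assembling such a backup array is a few mdadm commands; device names and filesystem here are hypothetical:

```bash
# Stripe three disks into one RAID 0 array and mount it as the backup volume.
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0
mount /dev/md0 /backups
```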
Microsoft has managed to implement dependency hell lmao
As someone who routinely sets up new laptops for various reasons:
Installing
- Preinstalled Windows is unusable, due to preinstalled spyware
- No torrents
- No multiple versions
- No real support for actually changing the locale; what you download is what you get. Even if that means redownloading 5 GB for every language, even though the interesting parts are just a few language files, which every OS can also replace while running (note: OSes, not spyware with a program loader strapped to it)
- No live version
- Unnecessarily complex/long installation (locale settings being required twice, circumventing the M$ account with cmd, denying all the spying stuff)
- The installer does not have drivers for many things, e.g. some touchpads, special storage setups etc.
- Installing takes a long time overall
- Removing bloat, with varying success (sometimes uninstalling Edge is one click, sometimes it requires PowerShell hacks), takes ages (my hand always hurts afterwards because removing one thing takes three clicks at different locations)
- Installing stuff is extremely annoying, inconsistent and insecure (VLCPlus ...)
- Everyone loves hunting down 10 different obscure drivers from various websites, each with unique installers, right?
- There's a non-negligible chance Windows fucks itself up within a few days ... e.g. by (halfway) entering S-Mode somehow
Usage
- It may partly be because I'm used to a tiling WM with dozens of workspaces, but even with KDE I have a much better workflow - somehow, Windows' way to multitask is really strange to me, and I can only use it like a 70-year-old with 10% sight in one eye and 0% in the other: very slowly and inefficiently
- You can't integrate anything with anything, unless you have dozens of accounts for services, some even paid, and use everything exactly like daddy manufacturer wants you to
- Literally no support. Windows fucks itself up in so many ways, and the only "reliable" fix is a reinstall
- Even with the dumbed down nature of Windows, users are morons. I'd rather teach my grandparents (including my very loud grandfather and said nearly-blind grandmother) Linux from scratch (yes, also LFS) than teach them the "correct way" to use Windows
- Even when knowing how to use Windows properly, with all tricks applied, it's less powerful than a pregnancy test running BASIC
- Paying 250+ $ to get served ads pushing you to pay even more, in money and data, is obviously stupid
I can't even manage that in my native language.
You misspelled KeePass
I wish everything would just default to a unix socket in /run, with only nginx managing http and stream reverse sockets.
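Something like this, for instance; the app names and socket paths are made up, but the unix: syntax is standard nginx:

```nginx
# Inside the http {} block: an HTTP app listening on a unix socket in /run.
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://unix:/run/app/app.sock;
    }
}

# At top level, next to http {}: the same idea for raw TCP via the stream module.
stream {
    server {
        listen 5432;
        proxy_pass unix:/run/postgres/pg.sock;
    }
}
```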
"Yes boss, I need 16-bit, 32-bit and 64-bit ARM and x86_64 ASM, as well as MySQL, SQLite, Postgres, Firebird, Mongo and all the other stuff too, so I need a lot of computers ... of course all with Threadripper PRO 7995WXs."
So I can just return the phone after 4 years and get a new one, to have updates for more than 2 years? Nice! Or they could open up the bootloader again, like my Moto Edge 20 has.
Layer 40320
Netcat is basically just a utility to listen on a socket, or connect to one, and send or receive arbitrary data. And since, in Linux, everything is a file, meaning you can handle every part of your system (e.g. block devices, i.e. physical or virtual disks) like a normal file, you can just transfer a block device (e.g. /dev/sda3) over a raw socket.
Nah, it's probably more efficient to .tar.xz it and use netcat.
On a more serious note, I use sftp for everything, and git for actual big (but still personal) projects, but then move files and execute scripts manually.
And also, I cloned my old laptop's /dev/sda3 to my new laptop's /dev/main/root (on /dev/mapper/cryptlvm) over a Gigabit connection with netcat. It worked flawlessly. I love Linux and its philosophy.
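Roughly what that clone looks like; the hostname and port here are assumptions, and the flag syntax differs between netcat flavours (traditional vs. OpenBSD):

```bash
# On the new laptop: listen and write the incoming stream to the LV.
nc -l -p 9000 > /dev/main/root

# On the old laptop: push the partition into the socket.
nc newlaptop 9000 < /dev/sda3

# Or compressed in transit, as mentioned above:
xz -c /dev/sda3 | nc newlaptop 9000          # sender
nc -l -p 9000 | xz -dc > /dev/main/root      # receiver
```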
In the modern world it's completely subjective.
The lowest-level language is probably ASM/machine code, as many people at least edit that regularly, and the highest-level would be LLMs. They are the shittiest way to program, yes, but technically you just enter instructions and the LLM outputs e.g. Python, which is then compiled to bytecode and run - similar to how e.g. Java works. And that's the subjective part: many people (me included) don't use LLMs as the only way to program, or only use them for convenience and some help, so the highest-level language is probably either some drag-and-drop UI stuff (like Scratch), or Python/JS. And the lowest level is either C/C++ (because "no one uses ASM anyway"), or straight-up machine code.
Then, he inserted a trojan in multiple steps until he gained RCE as root.
Windows 10, but before Windows 11 was even leaked I believe.