Yeah, although the neat part is that you can configure how much replication it uses on a per-file basis: for example, you can set your personal photos to be replicated three times, but have a tmp directory with no replication at all on the same filesystem.
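For what it's worth, here's roughly what that looks like in practice, assuming a bcachefs-style setup where per-file options are exposed as extended attributes (the attribute name and paths here are illustrative from memory, so check your filesystem's docs for the exact names):

```python
import os

# Keep three copies of everything under the photos directory;
# new files created inside it inherit the setting.
os.setxattr("/data/photos", "bcachefs.data_replicas", b"3")

# ...while /data/tmp on the same filesystem keeps a single copy (no redundancy).
os.setxattr("/data/tmp", "bcachefs.data_replicas", b"1")
```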
You don't want to unless you're just planning on running a web browser or word processor or something; they're significantly buggier and massively slower than the proprietary driver, especially on newer cards.
What exactly are you referring to? It seems to me to be pretty competitive with both ZFS and btrfs, in terms of supported features. It also has a lot of unique stuff, like being able to set drives/redundancy level/parity level/cache policy (among other things) per-directory or per-file, which I don't think any of the other mainstream CoW filesystems can do.
The recommendation for ECC memory is simply because the safety measures of a checksummed CoW filesystem alone can't guarantee your data stays intact: if data can silently corrupt in RAM, it can go bad before it ever gets written out to disk (so the filesystem ends up checksumming the already-corrupted data), or while sitting in the read cache after it's been verified. I wouldn't really call that a downside of those filesystems; it's simply a requirement if you really care about preventing data corruption. Even without ECC memory they're still far less susceptible to data loss than conventional filesystems.
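To illustrate the write-path half of that, here's a toy sketch (not specific to any real filesystem): if a bit flips in the buffer before the checksum is computed, the checksum is computed over the corrupted bytes and verification still passes.

```python
import hashlib

def write_block(data: bytes) -> tuple[bytes, bytes]:
    # A checksumming filesystem computes the checksum over whatever is in RAM
    # at write time, then stores data + checksum together on disk.
    return data, hashlib.sha256(data).digest()

def verify_block(data: bytes, checksum: bytes) -> bool:
    return hashlib.sha256(data).digest() == checksum

original = b"important photo bytes"

# Bit flip in RAM *before* the write happens - exactly what ECC would have caught.
corrupted = bytearray(original)
corrupted[0] ^= 0x01

stored_data, stored_sum = write_block(bytes(corrupted))
print(verify_block(stored_data, stored_sum))  # True - the checksum faithfully "protects" corrupted data
```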
I considered a KVM or something similar, but I still need access to the host machine in parallel (ideally side-by-side so I can step through the code running in the guest from a debugger in my dev environment on the host). I've already got a multi-monitor setup, so dedicating one of them to a VM while testing stuff isn't too big of a deal - I just have to keep track of whether my hands are on the separate keyboard+mouse for the guest :)
Functionally it's pretty solid (I use it everywhere, from portable drives to my NAS, and have yet to hit any breaking issues), but I've seen a number of complaints from devs over the years about how hopelessly convoluted and messy the code is.
I do this for testing graphics code on different OS/GPU combos - I have an AMD and Nvidia GPU (hoping to add an Intel one eventually) which can each be passed through to Windows or Linux VMs as needed. It works like a charm, with the only minor issue being that I have to use separate monitors for each because I can't seem to figure out how to get the GPU output to be forwarded to the virt-manager virtual console window.
I don't think training a model on hashes would be particularly useful - if the model were able to get any meaningful information out of them, that would mean the hash function itself is somehow leaking enough information about the original input to determine the image contents (which would essentially mean the hash function is broken beyond all repair).
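A quick illustration of why, assuming we're talking about a cryptographic hash like SHA-256: flipping a single bit of the input produces a completely unrelated digest, so there's no structure left for a model to learn from.

```python
import hashlib

a = b"\x89PNG..." + b"\x00" * 1024   # stand-in for an image file
b = bytearray(a)
b[100] ^= 0x01                        # flip one bit

ha = hashlib.sha256(a).hexdigest()
hb = hashlib.sha256(bytes(b)).hexdigest()

print(ha)
print(hb)
# Count how many hex digits happen to match by position - roughly 1 in 16,
# i.e. chance level; nearly identical inputs give totally unrelated hashes.
print(sum(x == y for x, y in zip(ha, hb)), "of", len(ha), "positions match")
```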
That is very slow; unless the drive is connected over USB, or failing, or something like that, a drive of that capacity should easily be able to handle sequential writes much faster than that. How is the drive connected, and is it SMR?
I don't see anywhere close to the same amount of political memes in the generic meme communities anywhere else I frequent on the internet. Lemmy feels like it consists almost exclusively of political memes.
I guess I wouldn't mind so much if they were actually funny, but they aren't. They're mostly just "haha those people dumb" with zero effort to be funny or clever or anything that could make them interesting after you've seen the first 3 or 4 of them.
What exactly happens when you issue a TRIM depends on the SSD and how much contiguous data was trimmed. Some drives guarantee TRIM-to-zero, but there's still no guarantee that the data is actually erased (it could just be marked as inaccessible, to be erased later). In general you should think of it more as a hint to the drive that those blocks are no longer needed, and that the drive firmware can do whatever it likes with that information to improve its wear-levelling ability.
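If you're curious what a given drive actually promises, one rough way to check (a sketch for SATA drives only; NVMe reports the equivalent in its identify data instead) is to look for the TRIM capability lines in `hdparm -I` output:

```python
import subprocess

def trim_capabilities(dev: str) -> None:
    # hdparm -I dumps the drive's IDENTIFY data; the TRIM-related lines (if any)
    # describe what the drive promises after a TRIM, e.g.
    #   "Data Set Management TRIM supported"
    #   "Deterministic read ZEROs after TRIM"
    # Note that "deterministic zeros" only describes what *reads* of trimmed
    # blocks return - it says nothing about when the flash is actually erased.
    out = subprocess.run(["hdparm", "-I", dev], capture_output=True, text=True)
    for line in out.stdout.splitlines():
        if "TRIM" in line:
            print(line.strip())

trim_capabilities("/dev/sda")  # needs root; adjust the device node for your setup
```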
Filling an SSD with random data isn't even guaranteed to securely erase everything, as most SSDs are overprovisioned (they have more flash cells than the drive's reported capacity, used for wear leveling and the like). Even if you overwrite the whole drive with random bytes, there's a pretty good chance that a number of sectors won't actually be overwritten, with the random bytes instead going to previously unused cells while the old data stays put in cells the host can no longer address.
Nowadays, if you want to wipe a drive (be it solid state or spinning rust), you should probably be using secure erase - it's likely to be much faster than simply overwriting everything, and it's actually guaranteed to make all the data irrecoverable.
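For reference, a rough sketch of what that looks like on Linux, assuming hdparm for SATA drives and nvme-cli for NVMe (double-check the drive isn't in a "frozen" security state first, and that you really mean it, since this nukes everything):

```python
import subprocess

def ata_secure_erase(dev: str, password: str = "p") -> None:
    # ATA security erase is a two-step dance: set a temporary user password,
    # then issue the erase (the drive clears the password again when it's done).
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-set-pass", password, dev], check=True)
    subprocess.run(["hdparm", "--user-master", "u",
                    "--security-erase", password, dev], check=True)

def nvme_secure_erase(dev: str) -> None:
    # NVMe exposes the same idea through the Format command; --ses=1 is a
    # user-data erase (--ses=2 does a crypto erase on self-encrypting drives).
    subprocess.run(["nvme", "format", dev, "--ses=1"], check=True)

# Device nodes below are placeholders - point them at the right drive!
# ata_secure_erase("/dev/sdX")
# nvme_secure_erase("/dev/nvme0n1")
```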
Having read all these other comments, I'm now feeling like I should come up with a more creative naming scheme... for what it's worth, my phone is named bob.