I personally like to keep it on. Most of my messaging is with family and friends, and it's good to know whether someone has read my message.
Especially if things are time-critical. Picking someone up? Asking if they need anything from the supermarket? If I see that they read the message I know they're going to reply in a moment. If they didn't even read it I won't have to wait around / can guess that they're currently in the car or wherever.
Sometimes you also have a spotty connection, so the received + read receipt can tell you if they actually got your message.
In general if someone sends me a message and I read it.. I'm going to fucking reply to it (if I'm not super busy, and even then I might send a quick message back). I seriously don't get people who just leave things on read and then forget about it.
And oh shit, when you say you built it yourself: Did you use the motherboard standoffs (if they weren't already in place in your case)? If you didn't use them you might get random short circuits.
Have you actually checked that all the cables are firmly in each socket? On both sides, at the back of the PSU and on each device (motherboard, GPU, ..)?
But the NAS is in your house.. which basically means if it gets flooded/burns down all your data is gone too.
I already have my data on my PC, a second backup inside the same house isn't worth that much. But instead of relying on a cloud service I just rent a virtual server (for various things) and use Seafile to keep my data in sync.
PC breaks? House burns down? My data is on my own server in a datacenter. My server gets cancelled? My data is on my PCs.
So even with the NAS you're still 100% reliant on a cloud backup. Why get the NAS at all when you already have a copy of your data on your devices?
Are you sure you properly deleted them? Reddit rate limits you to about one edit/delete every 3 seconds. If you go faster than that the deletes fail.
All the comments I have overwritten stayed that way, it was just difficult to reach them all (as different comments show up under "Top", "Controversial", "New" and so on).
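A hedged sketch of what pacing your own deletions might look like (Python here just for illustration; `delete_comment` is a stand-in for whatever API call you actually use, and the 3-second figure is this thread's estimate, not an official limit):

```python
import time

RATE_LIMIT_SECONDS = 3.0  # rough figure from the thread, not documented by Reddit


def delete_all(comment_ids, delete_comment, pause=RATE_LIMIT_SECONDS):
    """Delete comments one at a time, pausing between calls so that
    we stay under roughly one edit/delete per `pause` seconds."""
    for comment_id in comment_ids:
        delete_comment(comment_id)
        time.sleep(pause)
```

Going faster than the limit just makes some deletions fail silently, which is how you end up with comments you thought were gone.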
On the other hand you can lose your email address at any time if you don't own the domain. So if Google decides they don't like something you wrote, your @gmail.com address could be gone tomorrow, and with it all the accounts you set up (as you usually need email to log in or make changes).
The whole e-mail ecosystem sucks :-/
My self-hosted mail server works fine for now, but that could change at any moment.
I'd wager comments are preserved and don't get cleaned up over time, because if content gets deleted the instance has to federate the deletion to other instances to clean everything up (like an event system). If the instance just vanishes, no deletion request ever happens.
If deletion was automatic when an instance goes down we'd have already lost thousands of comments due to the outages lately :)
That's the case when an instance just straight up shuts down, right?
But if a user deletes their comment this also gets propagated to other instances. Do instance admins have a nuke button to initiate a delete for all content?
.ml is a bit better stability-wise (though it has also had worse days..) and on top of that they have a slur filter. So if someone calls you 'bitch' ('bi.tch') all you see is 'removed' on lemmy.ml, while the entire rest of the fediverse can read what I just wrote :-/
Where do you find that this CPU “only has 6 3D cores”?
It's common knowledge. 7800X3D = 8 3D cores, 7900X3D = 6 3D cores and 6 normal cores (= 12), 7950X3D = 8 3D cores and 8 normal cores (= 16).
So if you mostly game and don't need the CPU for productivity tasks you should 100% grab the 7800X3D. If you need a lot of cores then grab the 7950X3D. The 7900X3D is garbage in the middle.
Someone already pointed out that a 4090 is a massive upgrade over a 4080; no clue what benchmarks you looked at.
So another suggestion: Why pick a 7900X3D? It's the worst of both worlds. It only has 6 3D cores for gaming, which might not be enough in the near future (look up benchmarks of a 5600X vs. a 5800X, for example; there are already games that benefit from more than 6 cores).
If gaming is a focus: Pick a 7800X3D so you get a full 8 3D cores to work with. If productivity is a focus (with gaming on top) splurge for a 7950X3D. You're already spending an insane amount of money, you might as well get a decent CPU.
Besides that: You are wasting a ton of money on the SSDs. Grab a fast one like that for your Windows drive and for gaming, but for storage there are much cheaper options (that still deliver 3-6 GB/s, if you even need that). 128 GB of RAM is excessive too, unless you have a very clear use case for it.
Like Asian earwax is super dry and flaky, while European earwax looks like yellow green toxic goop.
So you have different genes and on top of that different diets (I had Indian neighbors once; you could smell the curry in the entire hallway of the building 24/7. But they obviously use a ton of spices when cooking).
As a central European Caucasian guy I personally start to smell really bad without deodorant just after a day or so. No matter how often I shower or what I eat. I also tried to switch to deodorant without aluminum and that didn't work out at all :-/
Ever left the house? You'll change your opinion right quick when you're behind a random guy in the grocery store and you get a strong whiff of sour milk that has been out in the sun for a week.
TDD is great when you have a very narrow use case, for example an algorithm. Where you already know beforehand: If I throw A in B should come out. If I throw B in C should come out. If I throw Z in an error should be thrown. And so on.
For that it's awesome, which is mostly algorithms.
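That "throw A in, B comes out" style can be sketched as test-first code. A minimal Python sketch; `roman` is a made-up example function, not something from this thread — the point is that the assertions below were the spec before any implementation existed:

```python
def roman(n: int) -> str:
    """Convert a positive integer to a Roman numeral."""
    if n <= 0:
        raise ValueError("n must be positive")
    # Greedy subtraction over the value table, largest symbol first.
    values = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
              (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
              (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in values:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)


# The TDD part: "A in, B out" cases written down first,
# then the implementation above is filled in until they pass.
assert roman(1) == "I"
assert roman(4) == "IV"
assert roman(1994) == "MCMXCIV"

# "Throw Z in, an error should be thrown":
try:
    roman(0)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for 0")
```

The inputs and outputs are fully known up front, so the tests practically write themselves — which is exactly why TDD shines here and struggles once mocks and fake data enter the picture.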
In real CRUD apps though? You have to write the actual implementation before the tests. Because in the tests you have to mock all the dependencies you used. Come up with fake test data. Mock functions from other classes you aren't currently testing, and so on. You could try TDD for this, but then you'll probably spend ten times longer writing and re-writing tests :-/
After a while it boils down to: Small unit tests where they make sense. Then system wide integration tests for complex use-cases.
Multi-threading is difficult, you can't just slap it on everything and call it a day.
There are languages where it's easier (Go, Rust, ..) but parallelism is an advanced feature. Do it wrong and you get race conditions or deadlocks. There is a reason you learn about this later in programming, but you do learn about it (and get to use it).
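The race-condition point in a minimal Python sketch (Python just as illustration; the same pattern exists in any language): `counter += 1` is a read-modify-write, so two threads can interleave it and silently lose increments unless you synchronize.

```python
import threading

counter = 0
lock = threading.Lock()


def bump(times: int) -> None:
    global counter
    for _ in range(times):
        # Without this lock the read-modify-write can interleave
        # across threads and the final count becomes unpredictable.
        with lock:
            counter += 1


threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 400_000  # deterministic only because of the lock
```

Drop the lock and the total will randomly come up short on some runs — which is exactly the kind of bug that only shows up in production at 3am.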
If we're being honest, most programmers work on CRUD applications, which are highly sequential, usually waiting on IO rather than CPU cycles, and so on. Saving 2ms on some operations doesn't matter if you wait 50ms on the database (and sometimes using more threads is actually slower due to orchestration overhead). If you're working with highly efficient algorithms or with GPUs then parallelism has a much higher priority. But it always depends on what you're working with.
Depending on your tech stack you might not even have the option to properly use parallelism, for example with JavaScript (if you don't jump through hoops).
I'm only upvoting this so I'm not the only one today who regrets having eyes.