I run a gluetun Docker container (actually two: one exiting locally, one through Singapore) client-side; gluetun is generally regarded as pretty damn bulletproof kill-switch-wise. The *arr stack etc. uses this network exclusively. This means I can use FoxyProxy to switch my browser up on the fly, bind things to tun0/tun1, etc., and still have direct connections as needed. It's pretty slick.
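A minimal sketch of the kind of compose file I mean (service names, the provider, ports, and the country are all placeholders; check the gluetun wiki for your VPN provider's exact variables):

```yaml
services:
  gluetun-sg:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN             # gluetun needs this to manage the tunnel + firewall
    environment:
      - VPN_SERVICE_PROVIDER=custom   # placeholder: set to your actual provider
      - SERVER_COUNTRIES=Singapore
    ports:
      - 8080:8080             # web UIs of attached containers are reached via gluetun

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun-sg"   # all traffic rides the VPN; if gluetun dies, nothing leaks
    depends_on:
      - gluetun-sg
```

The `network_mode: "service:..."` line is the kill-switch part: the app container has no network of its own, so it physically can't route around the tunnel.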
I have exactly this (AM4, 7800XT, 3440x1440 monitor) running Bazzite. Almost every game I have maxes out 165Hz, and it works great for LLM inference too. Really, the nutso-expensive stuff is only necessary for 4K+ (which I find diminishing returns at present), LLM training (rent a GPU instead), and probably modern VR. Just to let you know you're barking up the right tree. :)
Oh, and the 7800XT idles / does YouTube at ~14-20W, ~7W with the monitor off. I'm actually using the machine as a backup NAS / home server in its downtime; the system pulls ~40-45W at the wall, and I haven't even gone deep into power saving since it's a placeholder for a new homelab build that's underway.
Yeah, says as much in the article. This'll most likely, if it's not vaporware, have a 256-bit bus, which will be a damn shame for inference speed. Just saying: if they doubled the bus and sold for ≤ $1000 they'd eat the 5090 alive, generate a lot of goodwill in the influential local LLM community, and probably get a lot of free ROCm development. It'd be a damn smart move, but how often can you accuse AMD of that?
It's worth noting that RX 9070 cards will use 20 Gbps memory, much slower than the RTX 50 series, which features 28-30 Gbps GDDR7 variants.
Seeing as, per the article, there are no 4Gb modules, they'll need to use twice as many chips, which could mean doubling the bus width (one can dream) to 512-bit (à la the 5090), which would make it very tasty. It would be a bold move and would get them some of the market share they need so badly.
The old adage is never use vX.0 of anything, which I'd expect to go double for data integrity. Is there any particular reason ZFS gets a pass here (speaking as someone who really wants this feature)? TrueNAS isn't merging it for a couple of months yet, I believe.
Not to my mind; science requires a testable hypothesis and evidence. I would argue that merely refuting someone else's hypothesis without providing a new one doesn't meet the bar of doing science.
You'll be wanting `sudo ostree admin pin 1`, seeing as 0 was broken. Double-check with `rpm-ostree status`.
Proceed to `rpm-ostree update`; if that does nothing, it means 0 is up to date. Personally I'd just wait for a new update while using the working deployment, but you can blow away 0 and fetch it again if you're keen.
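The whole sequence, for reference (deployment indexes come from `rpm-ostree status`, newest first; this assumes index 0 is the broken one and 1 is your known-good boot):

```shell
# Pin the known-good deployment (index 1) so updates can't garbage-collect it
sudo ostree admin pin 1

# Confirm which deployment is booted and that the pin took
rpm-ostree status

# Try pulling a fresh update; "no upgrade available" means 0 is already current
rpm-ostree update
```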
Basically, you want to shut down the database before backing up. Otherwise, your backup might be mid-transaction, i.e. broken. If it's Docker you can just `docker-compose down` it, back up, and then `docker-compose up`, or equivalent.
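Something like this, assuming a compose stack with a service named `db` whose data is bind-mounted at `./data` (both names are placeholders for whatever your setup uses):

```shell
# Stop the database so no transaction is in flight
docker compose stop db

# Copy the now-quiescent data directory
tar czf "db-backup-$(date +%F).tar.gz" ./data

# Bring the database back up
docker compose start db
```

(`stop`/`start` is a bit gentler than `down`/`up` since it keeps the containers around, but either works for this.)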
Just use multiple database files (e.g. one for unimportant stuff, one for important) and automate the syncing with Syncthing or something, so the laziness doesn't matter...
Clear as mud. (I actually dimly get it, since I'm a dev, but mere mortals will be clueless and move on.) Farcaster is right: you need to define terms and give examples of actually getting this up and running; you've got way too much internal context that you're not making explicit. Not an attack, just trying to help; the project sounds cool.