I said this before on another thread, but the only time sfc /scannow actually did something was when I had a machine with a drive that had a few bad blocks.
And of course it didn't actually fix anything, because a system DLL was corrupt and DISM couldn't repair it either, meaning the only solution was to reinstall Windows.
You might want to check what the actual hardware is first. You'll probably be fine, but client 802.11 hardware can sometimes be underwhelming for hosting because it lacks nice stuff like proper MU-MIMO support.
That only matters if you'll have a lot of traffic going through it, though. You could always just test throughput and latency with iperf to see how well it holds up.
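If you haven't used it before, a quick sanity check looks something like this with iperf3 (the IP is just a placeholder for whatever box is doing the hosting):

    # on the machine doing the hosting
    iperf3 -s

    # on a client connected over the wireless link
    iperf3 -c 192.168.1.1              # TCP throughput
    iperf3 -c 192.168.1.1 -u -b 100M   # UDP at 100 Mbit/s, reports jitter and loss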
I am very slightly annoyed that people haven't moved on to Opus, which gives you better compression and quality than MP3. MP3s are still useful for older devices that have hardware decoding, like radio sets, handheld players, etc. Otherwise, every modern device should support Opus out of the box.
Hilariously, H.264 has the same problem: H.265 and AV1 are direct upgrades, but usage is still low due to the lack of hardware-accelerated encoding (especially for AV1). Yet everyone uses FLAC for the audio, which is lossless lol.
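If you ever want to jump ship, encoding Opus from a lossless source is a one-liner with ffmpeg (filenames are placeholders):

    # FLAC in, Opus out; 128 kbit/s is transparent for most content
    ffmpeg -i input.flac -c:a libopus -b:a 128k output.opus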
It really depends on what it is, plus convenience. There are lots of morons out here running basic info sites on full beefy datacenter VMs instead of a proper cloud webhost service.
The most you'd be getting out of the cloud is reliability. Self-hosting assumes you don't have any bottlenecks (easy enough to pass), but also 99% uptime, which is impossible unless you are running with site redundancy (also possible, but I doubt many people own multiple properties with their own distributed or private cloud solution).
If 95% uptime is acceptable, and you don't live in an area with weather-related outage issues, I'd say go for it. Otherwise, you can find some pretty cheap cloud solutions for basic websites. Even a cheapo VPS would probably work just fine.
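For a sense of scale, the raw math on those numbers:

    99% uptime -> ~7.3 hours of downtime per month (~3.7 days/year)
    95% uptime -> ~36.5 hours of downtime per month (~18 days/year)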
Ruin it by making it into the Elite Dangerous universe.
Who cares about exploration and protecting emerging civilizations when you can make funny images with the galaxy route map and participate in huge system control wars for void opals lol.
Or give random items to Thargoids for a totally scientific process of investigation.
AT&T still hasn't installed fiber in my old neighborhood where one of their lines cuts straight through a row of houses that conveniently do get fiber, while everyone else is stuck on cable.
Did I mention they received billions in federal funding to upgrade everyone?
I have run PhotoPrism straight from mdadm RAID 5 on some ye olde SAS drives with only a reduction in indexing speed (about 30K photos, which took ~2 hours to index with GPU TensorFlow).
That being said, I'm in a similar boat doing an upgrade, and I have some warnings that I've found helpful:
Consumer grade NVMe drives are not designed for tons of write ops, so they should optimally only be used in RAID 0/1/10. RAID 5/6 will literally start life with a massive parity sync across the drives, and on Linux the default timer for periodic RAID checks is typically a week. The same goes for ZFS and mdadm caching; just proceed with caution (i.e. 3-2-1 backups) if you go that route. Even if you end up doing RAID 5/6, make sure you get quality hardware with a decent TBW rating, as server grade NVMe drives are often rated for triple the TBW.
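You can keep an eye on how much you've actually written with smartctl (device path is just an example):

    # total host writes so far; one "data unit" is 1000 x 512 bytes
    smartctl -a /dev/nvme0 | grep -i 'data units written'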
ZFS is a load of pain if you're running anything Fedora or Red Hat related, and even after lots and lots of testing, the performance implications are still arguably inconclusive for a NAS/homelab setup. Unless you rely on its specific feature set or are building an actually hefty storage node, stock mdadm and LVM will probably fulfill your needs.
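A minimal sketch of that stack, assuming two NVMe drives in a mirror (device and volume names are placeholders):

    # mirror the two drives with mdadm
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

    # LVM on top so volumes can be carved out and resized later
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 500G -n data vg0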
Btrfs has all the features you need but its performance is a load of trash. I highly recommend XFS for its integrity features (metadata checksumming) plus reflink-based data dedup, and mdadm/LVM for the rest.
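For reference, reflinks are on by default in recent xfsprogs, and the dedup itself is done out-of-band with something like duperemove (paths are placeholders):

    # create the filesystem (reflink=1 is the default on recent xfsprogs)
    mkfs.xfs /dev/vg0/data

    # later, find and deduplicate identical extents in place
    duperemove -dr /mnt/data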
I'm personally going with scheduled backups from the NVMe to the RAID array, because the caching just doesn't seem worth it when I'm gonna be slamming huge media files around all day along with running VMs and other crap. For context, the 2TB NVMe brand I have is only rated for 1200 TBW. That's probably more than enough for a file server, but on my homelab server it would just be caching constantly with whatever workload I'm throwing at it. It would still probably last a few years no issues, but SSD pricing has just been awful these past few years.
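The backup side is nothing fancy, just rsync on a cron schedule (paths are made up):

    # /etc/cron.d/nvme-backup -- nightly sync from the NVMe to the RAID array at 3am
    0 3 * * * root rsync -a --delete /mnt/nvme/ /mnt/raid/backup/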
On a related note, PhotoPrism needs to upgrade to TensorFlow 2 so I don't have to compile an antiquated binary for CUDA support.
Remember when Hamas was democratically voted into power in Gaza because everyone knew the PA was a full of shit organization run under the complete control of Israel?
And then the PA refused to recognize the vote outcome, which was shortly followed by a military incursion by Israel because God forbid the Palestinians actually form their own government.
There's an inside joke about Iran that after the Shah was deposed, the only thing that changed was that all the illicit activity happened indoors instead of out in the open.
It's pretty believable considering you can find plenty of Muslim Iranians who openly drink and party, hence why they're actually the least likely to be seen amongst a group of practicing Muslims.
PSA to always run a full-length SMART test on any drives you buy, even from an OEM. The short test and the logs are not enough; I have bought faulty drives where someone had reset the logs and power-on hours.
All of them passed the short SMART test, but failed the long SMART test after only a few minutes. I found just one drive that the skrub forgot to wipe, and its log showed 6 continuous years of power-on usage.
Even from an OEM, you will at least know if the hardware is DOA, which you can then RMA.
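With smartctl that whole routine is (swap /dev/sdX for the actual drive):

    # kick off the long self-test; it runs in the background and can take hours
    smartctl -t long /dev/sdX

    # afterwards, check the test results plus attributes like power-on hours
    smartctl -l selftest /dev/sdX
    smartctl -a /dev/sdX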