Posts: 0 · Comments: 435 · Joined: 2 yr. ago

  • I personally have dedicated machines per task.

    8x SSD machine: runs the services for the Arr stack; temporary download and work destination.

    4-5x misc 16-bay boxes: raw storage boxes. NFS-shared, ZFS as the underlying drive config. What's on them changes on a whim, but usually it's 1x for movies, 2x for TV, etc. Categories can be spread across multiple places.

    2-3x 8-bay boxes: critical storage. Different drive geometry, higher resilience. These are also the hypervisors; I run a mix of Xen and Proxmox depending on need.

    All get a 10 Gb interconnect, with critical stuff (nothing from the Arr stack, for sure) like personal videos and photos pushed to small encrypted storage like Backblaze.

    The NFS-shared stores, once you get everything mapped, allow some automation to migrate things around pretty smoothly for maintenance and such.

    Mostly it's all gear 10 years old or older. 10 Gb fiber cards can be had off eBay for a few bucks; just watch out for compatibility and the cost of the transceivers.

    8-port SAS controllers can be had the same way, new, off eBay from a few vendors; just explicitly look for "IT mode" so you don't get a RAID controller by accident.

    SuperMicro makes quality gear for this... Used units can be affordable and I've had excellent luck. Most have a great IPMI controller for simple diagnostic needs too. Some of the best SAS drive backplanes are made by them.

    Check the Backblaze drive stats on their blog for drive suggestions!

    Heat becomes a huge factor, and the drives are particularly sensitive to it... Running hot shortens lifespan. Plan accordingly.

    It's going to be noisy.

    Filter your air in the room.

    The rsync command is a good friend in a pinch for data evacuation.
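
    In that spirit, a template I keep around as a sketch — the host name and paths below are placeholders, not a real layout:

```shell
# Evacuate a dataset to another box: -a preserves permissions/owners/times,
# -H keeps hard links, --partial lets an interrupted transfer resume, and
# --progress shows what it's doing. "sparebox" and both paths are
# placeholder names.
rsync -aH --partial --progress /tank/movies/ sparebox:/tank/movies/
```

    Trailing slashes matter: `src/` copies the contents of the directory rather than the directory itself.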

    Your servers are cattle, not pets... If one is ill, sometimes it's best to put it down (wipe and reload). If you suspect hardware, get it out of the mix quickly; test and/or replace before risking your data again.

    You are always closer to data loss than you realize. Be paranoid.

    Don't trust SMART's overall health verdict. Learn how to read the full report. A Current_Pending_Sector count above 0 is always failure... Remove that disk!
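
    One way to pull that number out of the full report — the two-line report excerpt below is a fabricated example in smartctl's standard attribute-table format, standing in for real `smartctl -A /dev/sdX` output:

```shell
# Extract the Current_Pending_Sector raw value (column 10 of the attribute
# table) from canned smartctl -A style output.
report='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8'

pending=$(printf '%s\n' "$report" | awk '$2 == "Current_Pending_Sector" { print $10 }')
[ "$pending" -gt 0 ] && echo "pull this disk: $pending pending sectors"
```

    In practice you'd pipe the real thing: `smartctl -A /dev/sdX | awk ...`.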

    Keep 2 thumb drives with your installer handy.
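
    Refreshing those sticks is a one-liner — the device path below is deliberately a placeholder; check lsblk first, since dd will clobber whatever you point it at:

```shell
# Write an installer image to a USB stick. /dev/sdX is a placeholder -
# verify the device with lsblk before running; dd does not ask questions.
dd if=installer.iso of=/dev/sdX bs=4M conv=fsync status=progress
```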

    Keep a repo somewhere with your basics of network configs... Ideally sorted by machine.

    Leave yourself a back-door network... Most machines will have a 1 Gb port, which might be handy when you least expect it. Setting up a LAGG with those 1 Gb ports as fallback for the higher-speed fiber can save headaches later too...
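
    A sketch of that fallback idea with Linux bonding — interface names here are examples, and this assumes an active-backup bond rather than LACP, since the two links are different speeds:

```shell
# active-backup bond: traffic rides the 10 Gb fiber NIC (ens1f0, a
# placeholder name) and fails over to the onboard 1 Gb port (eno1) if the
# fiber link drops. Links must be down before they can be enslaved.
ip link add bond0 type bond mode active-backup miimon 100
ip link set ens1f0 down && ip link set ens1f0 master bond0   # 10 Gb fiber
ip link set eno1 down   && ip link set eno1 master bond0     # 1 Gb fallback
echo ens1f0 > /sys/class/net/bond0/bonding/primary           # prefer fiber
ip link set bond0 up
```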

  • Permanently Deleted

  • Ahhh, my old nemesis... Analog Gap! I knew we'd meet again some day!

  • the pirate and the thief

  • Doing terrible crap for so long without much, if any, punishment leads to brazen and absurd tactics...

    Soon I expect something akin to them running their own marketplace scams or similar fraud, just because it's so profitable versus the expense/penalty.

    As you say, it's like a bad caricature of the stereotype.

  • Meta also allegedly modified settings "so that the smallest amount of seeding possible could occur," a Meta executive in charge of project management, Michael Clark, said in a deposition.

    Douchebags.

  • I agree, it's inane. But just like back doors to E2E encryption, some idiot politicians will push for it regardless, for sure.

  • Gonna give them a spin... Egress seems at a glance more expensive for my use, but I like having options.

    Thanks!

  • I could see it turning into potentially more via the EU and the excuse that VPNs should be nuked from orbit... Too many politicians already want to backdoor everything, like E2E; this is just another thorn of many.

    Hopefully you are right and this is isolated.

  • Different groups selling different things. OpenStack is still around, albeit a shell of its former scale.

  • .... You do realize that they still have hundreds of thousands of VMs in their OpenStack services? Those are VMs too.

    Hell back in 2008 Slicehost had more than 40k VMs before Rackspace bought em.

    Wait till you hear about places like AWS or Azure....

  • This! Tape is still the gold standard for high capacity!

  • I use B2 for about 15 TB; it's still one of the cheapest, really, without being sketchy. Cost isn't too bad unless you are reading it often.

    As another person already noted, if you really need to back up large amounts, tape is the way to go. Plan to keep your critical stuff off-site somehow too. For large amounts, sneakernet is still best, unfortunately.

  • I've never had issues; maybe been lucky lol.

    That said, they provide some amazingly detailed statistics about their drives! Worth looking at the reports they post. They might give you insight into what to expect from the various manufacturers and models... and maybe help you avoid some junk drives in the process.

    One of the most recent of these:

    https://www.backblaze.com/blog/backblaze-drive-stats-for-q3-2024/

    Raw statistics might help cut through a lot of bias!

  • Oh, important note! Check to make sure any drives you'll use are CMR, not SMR (shingled)! SMR drives will not function right in RAID and will drop out of arrays.

  • I'd start by noting that RAID is about availability, not backup... I suspect you already have that in mind, but just in case. Ideally, if you are up for learning ZFS, it is one of the most resilient RAID tools out there. Most NAS and Unix or Linux OSes will have support for it.
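
    To give a feel for it, a hypothetical 8-disk layout — the pool name and device names below are made up, not a recommendation for your exact hardware:

```shell
# Hypothetical raidz2 pool: any two drives can fail simultaneously without
# data loss. "tank" and sda..sdh are placeholder names; in production you
# would use /dev/disk/by-id paths so drives keep stable identities.
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh
zfs set compression=lz4 tank   # near-free; lz4 bails out early on media
zfs create tank/movies         # one dataset per category eases management
```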

    Never connect RAID disks via USB... This only causes headaches.

    Avoid SATA port multipliers, these can cause problems in raid.

    SAS has the most reliable and flexible options for connectivity. Used JBOD chassis, even small ones, can be found cheaply and will run SATA disks well.

    As for cloud data, I strongly recommend Backblaze. Many utilities can natively interact with it (the API is compatible with Amazon S3) and you can handle encryption on the fly with several sync options. They are one of the cheapest solutions, and storage is pretty much all they do.
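
    A hedged example of that S3 compatibility — the bucket name is a placeholder, and the endpoint is just one region's format; the real one is shown in your B2 bucket details:

```shell
# Sync a dataset to a B2 bucket through its S3-compatible endpoint using
# the stock AWS CLI. "example-backup-bucket" is a placeholder, and the
# endpoint URL varies with the region your bucket lives in.
aws s3 sync /tank/photos s3://example-backup-bucket \
  --endpoint-url https://s3.us-west-004.backblazeb2.com
```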

    With pretty much any cloud storage, look at the ingress/egress cost of your data too... That is where many can bite you unexpectedly.

    Worth noting that when you get to large storage, a good organization method for your data is key, so you can prune and prioritize without getting overwhelmed later... You don't want several copies of the same thing eating cash needlessly.
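
    For hunting down those duplicate copies, a toy sketch (assumes GNU coreutils and awk; paths with spaces would need a sturdier version):

```shell
# Toy duplicate finder: hash every file under a directory and print any
# path whose content matches an earlier file, so redundant copies can be
# reviewed before pruning.
find_dupes() {
  find "$1" -type f -print0 |
    xargs -0 sha256sum |
    sort |
    awk 'prev == $1 { print $2 } { prev = $1 }'
}
```

    Usage would be something like `find_dupes /tank/movies` against a storage mount.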

    Good luck! And welcome to the wonderful illness known as data hoarding!

  • I'm not trans, but I can say that no matter the various reasons, my personal one was simple:

    Every time someone replied to one of my comments with the most asinine, absurd, or just downright stupid asshole approach, it was always hexbear.

    Literally a sigh of relief when Lemmy added personal instance blocking. Immediately hit that button and never looked back.

  • For what it's worth, your instance name is spectacular!