  • Examples of some of the deals I've personally gotten (YMMV, some were auctions; quick per-TB math in a sketch after the list):

    • 5 x 3.84TB SAS SSDs
      • $521.54 total (stunning deal, I got lucky)
      • $104.31/drive
      • $27.16/TB
    • 5 x 960GB SAS SSDs
      • $165.17 total
      • $33.03/drive
      • $34.41/TB
    • 7 x 12TB Toshiba SAS HDDs
      • $427.31 total
      • $61.04/drive
      • $5.09/TB
    • 2 x 8TB Seagate SAS HDDs
      • $49.99 total
      • $25/drive
      • $3.13/TB
    • 2 x KTN-STL3 JBODs each including 15x3TB SAS HDDs
      • $532.73 total
      • $266.37/shelf
      • $17.76/drive bay+drive
      • $5.92/TB not including value of JBODs (~$150/each without drives)
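
    The per-unit numbers above are just simple division; here's a throwaway sketch if you want to compare listings yourself (using the Toshiba lot above as the example):

    ```python
    # Quick listing math: $/drive and $/TB for a multi-drive lot.
    def deal_stats(total_usd: float, drives: int, tb_per_drive: float):
        per_drive = total_usd / drives
        per_tb = total_usd / (drives * tb_per_drive)
        return per_drive, per_tb

    # The 7 x 12TB Toshiba lot from the list above:
    per_drive, per_tb = deal_stats(427.31, 7, 12)
    print(f"${per_drive:.2f}/drive, ${per_tb:.2f}/TB")  # $61.04/drive, $5.09/TB
    ```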
  • In short, I'd recommend option B/C: buy used enterprise-grade equipment, learn to run Linux, and build out that way. I can't overstate just how good a deal can be had on eBay, even from reputable sellers. This goes for everything, from the computer itself, to disk shelves, to HDDs and SSDs. Plus you're reducing e-waste! Used HDDs are a great deal if you buy enough to run redundancy (RAID 6 or equivalent), because sellers will often include a warranty (up to 5 years!). I've only had a handful of drive failures and zero issues with warranty refunds/exchanges.


    You're running roughly the same services as I do (though a bit more storage), so if it means anything, I've ended up using the following (all purchased used):

    ::: spoiler My setup

    • HP Z440 Workstation (upgraded over time)
      • CPU: Intel Xeon E5-2698 V4 (20 core)
      • RAM: 128GB DDR4 2133MT/s
      • GPU: Intel Arc A380
      • Storage: Boot SSD + HBA card for bulk storage
    • 2 x Dell EMC KTN-STL3 JBOD
      • 15 x 3.5" bays
      • Mix of HDDs spread across the two JBODs
        • 7 x 12TB
        • 6 x 14TB
        • 6 x 10TB
        • 2 x 16TB
        • 1 x 8TB
    • 1 x HP QR490A JBOD
      • 24 x 2.5" bays
      • Mix of SSDs
        • 6 x 3.84TB
        • 5 x 1TB

    :::

    Broadly, I find the following with my setup:

    • Pros
      • Easily expandable storage using an HBA
      • High reliability (ECC memory, server grade equipment)
      • Used equipment is cheap
    • Cons
      • Running mostly older-gen hardware, not cutting edge performance
      • Bulky, noisy cooling, less power efficient
  • A few things that might help narrow options down:

    • What's your budget?
    • Do you expect to host more stuff in the future? Do you need more RAM/CPU performance?
    • How much physical space do you have? Do you have a place where you could store equipment if it were noisier?
    • How expensive is your electricity? Is efficiency important?
    • How much of your 100TB is full?
  • Yeah, a lot of those look moderately benign (waving away media, for example). Best case scenario, it's an unfortunate habit that happens to make him look like a Nazi... At the same time, I'd expect someone to break the habit to distance themselves from it.

  • "Half our students are below average!" kinda vibes - KDR necessarily means that for every person with 1.5, there is someone with a 0.67, that's just how the math works. If I'm anywhere near 1.0, I'm happy.

  • Absolutely, it's a fabulous engineering challenge to make it work well on a hobbyist-grade 3D printer with ordinary materials. It's also a lesson in using the right tool for the right job (some parts are just better off milled or bought off the shelf).

  • I used to frequent the FOSSCAD IRC ages back as a teen. This started during the post-Liberator panic, when there was talk of regulating 3D printers so they couldn't print guns, etc. I designed a few things, never actually printed any of them myself, but some others did. It really got me into engineering before I exited the scene, and led to me actually pursuing an engineering career. I was surprised to see 3D printed gun videos so openly shared; it was pretty underground for ages there.

  • Can confirm, tastes good. This was in Papua New Guinea, the dog was donated to a function to be eaten because it kept killing people's chickens.

    What's funny is some tribes will eat dog and not cat, others eat cat and not dog, and they both think the other is weird for their choice.

  • The first two died within 30 days; the third one took about 4 months, I think. Not a huge sample size, but it kind of matches the typical hard drive failure bathtub curve.

    I just double checked, and mine were actually from a similar seller on Amazon. They all seem to be from the same supplier, though: the warranty card and packaging are identical. So YMMV?

    Warranty was easy: I emailed the address included on the warranty slip, gave them the order number and drive serial number, and they sent me a shipping label within 1 business day. Print that out, put the drive back in the box it shipped in (I always save these), tape it up, and drop it off for shipping. In my case, the purchase was refunded pretty much as soon as the drive was delivered back to the seller.

  • I currently have 6x10TB of these drives running in a gluster array. I've had to return 2 so far, with a 3rd waiting to be sent in for warranty as well (click of death on all three). That's a higher failure rate than I'd like, but the process has been painless outside of the inconvenience of sending them in. All my media is replaceable, but I have redundancy and haven't lost data (yet).

    Depending on your supporting hardware and power costs, you may find larger drive sizes to be a better investment in the long term. Namely, if you plan on seeing the drives through their 5-year warranty, 18TB drives are pretty good value.

    For my hardware and power costs, this is the breakdown for cumulative $/TB (y axis) over years of service (x axis):
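
    If you want to rerun that for your own situation, the computation is just (up-front cost + cumulative power cost) divided by capacity; a minimal sketch with placeholder numbers (not my actual costs):

    ```python
    # Cumulative $/TB after `years` of service for one drive.
    # All inputs below are hypothetical placeholders; substitute your own.
    DRIVE_PRICE = 180.0  # $ up-front for an 18TB drive
    DRIVE_TB = 18.0      # capacity in TB
    HW_SHARE = 40.0      # $ share of HBA/shelf/cabling per drive bay
    WATTS = 7.5          # average power draw per drive
    USD_PER_KWH = 0.15   # electricity price

    def cumulative_usd_per_tb(years: float) -> float:
        power_cost = WATTS / 1000 * 24 * 365 * years * USD_PER_KWH
        return (DRIVE_PRICE + HW_SHARE + power_cost) / DRIVE_TB

    for y in range(1, 6):
        print(f"year {y}: ${cumulative_usd_per_tb(y):.2f}/TB")
    ```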

  • One of my college professors was involved in the development program for ~4 years, and said it was (one of?) the most stressful experiences of his life.

    Major General Craig Olson. He (and his wife) are some of the most caring people I've met; I'm sure the weight of managing a program like that was a lot to bear. It looks like he left the program shortly after the March 2006 accident. He presented on some of the engineering challenges they faced and solved in the program (especially failure modes), but my memory is hazy.