
  • For recovering hardware RAID: the most reliable path to success is a compatible controller with a similar enough firmware version. You might be able to find software that can stitch images back together, but that's a long shot and requires a ton of disk space (which you might not have if it's your biggest server).

    I've used dozens of LSI-based RAID controllers in Dell servers (both Dell PERC-branded and LSI-branded) for work and homelab alike. They usually recover the old array onto the new controller pretty well, and the controllers generally have a much lower failure rate than the drives themselves (I find myself replacing the cache battery more often than the controller itself).

    Only twice out of the handful of times have I moved an array to a RAID controller from a different generation:

    • First time: a failed-motherboard R815 (PERC H700), physically moving the disks to an R820 (PERC H710, might've been an H710P), and the new controller foreign-imported them easily.
    • Second time, on my homelab, I went from an H710 mini mono to an H730P full size in the same chassis (don't do that, it was a bad idea), but aside from iDRAC being very pissed off, the card ran for years with the same RAID-1 array imported.

    As others have pointed out, this is where backups come into play. If you have to replace the server with one from a different generation, you run the risk that the drives won't import. At that point, you'd have to sanitize the array's superblock and re-initialize it as a new array, then restore from backup. The array might also import just fine and you never notice a difference (like my users who had to replace a failed R815 with an R820), but the results tend toward the extremes: it either just works or fails outright, with no in between.

    Standalone RAID controllers are usually pretty resilient and fail less often than disks, but they are very much NOT infallible, as you correctly assess. The advantage of software systems like mdadm, ZFS, and Ceph is that they remove the precise hardware compatibility requirements, but by no means do they remove the software compatibility requirements - you still have to do your research and make sure the new version is compatible with the old on-disk format, or stick to the same version.

    All that said, I don't trust embedded motherboard RAID to the same degree that I trust standalone controllers. About 8-10 years ago a friend of mine ran a RAID-0 on a laptop that got its superblock borked when we tried to firmware-update the SSDs - the array stopped being detected at all. We did manage to recover the data, but it needed multiple times the raw amount of storage to do so:

    • we made byte images of both disks with ddrescue onto a server that had enough spare disk space
    • found a software package that could stitch together images with broken superblocks if we knew the order the disks were in (we did), and it wrote a new byte image back to the server
    • copied the result again and turned it into a KVM VM to network-attach and copy the data off (we could have loop-mounted the image to an SMB share and been done, but it was more fun and rewarding to boot the recovered OS afterwards as kind of a TAKE THAT LENOVO... we were younger)
    • it took a bit over 3TB in total to recover the 2x500GB disks to a usable state - and about a week of combined machine and human time to engineer and cook, during which my friend opted to rebuild his laptop clean after we had the images captured - one disk Windows, one disk Linux, not RAID-0 this time :P
  • My low level is a tad rusty from when I learned the C side in school, but if I recall, the not operator resolves to a single Boolean (0 or 1 in C), whereas complement gives back however many bits you put in - a NOT operation per bit.

    In C, the not operator is ! and the complement operator is ~
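
    For illustration, a minimal C sketch of the difference (standard C, nothing assumed beyond stdio):

        #include <stdio.h>

        int main(void) {
            unsigned x = 0x0F;                 /* binary ...0000 1111 */

            /* Logical NOT: any nonzero value -> 0, zero -> 1 */
            printf("!x = %d\n", !x);           /* prints 0 */
            printf("!0 = %d\n", !0);           /* prints 1 */

            /* Bitwise complement: flips every bit, one NOT per bit.
               Masking to the low byte keeps the output readable. */
            printf("~x = 0x%X\n", ~x & 0xFFu); /* prints 0xF0 */
            return 0;
        }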

  • Sadly the so-called "smart TV" is becoming the norm. Companies add unnecessary crap to TVs that's often as slow as a car's factory infotainment system, and when they stop shipping security updates a few years in, the TV is a permanent security hazard until you disconnect it from the network.

    I have a Vizio TV from several years ago with Yahoo-branded smart functions (that should date it) that I need to factory reset, because I can't find an option to just erase the WiFi password.

  • Before anyone gets too deep, I'd like to point out that this is just about hosting vector tiles; the actual tile generation is a separate project. Not to say that hosting large sets of files is trivial, just that there's more to the picture than one repo.

    https://github.com/onthegomap/planetiler

  • I mean... DX 9, 10, and 11 were all released prior to Nadella being CEO/chairman.

    But in software, it's very commonplace for library versions not to be backwards compatible without recompiling the software. This isn't the same thing as opening a Word doc last saved on a floppy disk in 1997 in the 2024 version of Word 365; this is about loading executable code. Even core libraries in Linux (like OpenSSL and ncurses) follow this same scheme, and more strictly than MS.

    Using OpenSSL as an example: RHEL 7 provides an interface to OpenSSL 1.0, but 1.1 is not available in the core OS - you'd have to install it separately. 1.1 was introduced to the core in RHEL 8, with a compatibility library in a separate package to support 1.0 packages that hadn't been recompiled against 1.1 yet. In RHEL 9, the same was true of OpenSSL 3 - a compatibility library for 1.1, and 1.0 support fully dropped from core. So no matter which version you use, you still have to install the right library package. That library package will then also have to work with your version of libc - which is often reasonably forgiving, but it has its limits just the same.

    Edit because I forgot a sentence in the last paragraph - like DirectX, VC++, and OpenGL, you have to match the version of ncurses, OpenSSL, etc. exactly to the major (and often the minor) version, or else the executable won't load and will generate a linking error. Even if you did mangle the binary to link anyway, you'd still end up with data corruption or crashes because the library versions are too different to interoperate.
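
    To make the OpenSSL case concrete, a minimal C sketch - assuming OpenSSL 1.1.0 or newer, since the 1.0.x runtime query went by SSLeay_version instead - that prints the version the binary was built against versus the one the dynamic linker actually loaded:

        #include <stdio.h>
        #include <openssl/opensslv.h>  /* compile-time version macros */
        #include <openssl/crypto.h>    /* runtime version query (1.1.0+) */

        int main(void) {
            /* What the binary was compiled against */
            printf("built against: %s\n", OPENSSL_VERSION_TEXT);

            /* What the dynamic linker actually loaded at runtime */
            printf("running with:  %s\n", OpenSSL_version(OPENSSL_VERSION));
            return 0;
        }

    If the matching libssl/libcrypto major version isn't installed at all, you never even get this far - the dynamic linker refuses to load the executable, which is exactly the linking error described above.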

  • DirectX 12 was released in 2015 with Windows 10, so it's unlikely to have been ported back to 8.1 and lower.

    MS usually only keeps older versions working on newer Windows, not the other way around - so for example FF11 (DirectX 8.1 I think) still works (mostly) on Windows 11, but DX12 won't work on W7

  • DirectX, OpenGL, Visual C++ Redist, and many other support libraries typically have to match the major version a program was shipped with.

    For DirectX, those major versions are 9, 10, 11, and 12. Any major library change has to be recompiled into the game by the original developer (or a very VERY dedicated modder with solid low-level knowledge).

    Same goes for OpenGL, except I think they draw the line at the second number as well - 2.0, 3.0, 4.0, 4.1, 4.2, 4.3, 4.4.

    For VC++, the versions come in years - typically you'll see 2008, 2010, 2013, and then the last version, 2015-2022, which is special. Programs built against 2013 or lower only require the latest redistributable of that year to run. For the 2015-2022 library, MS stopped bumping the major version, so any program requiring 2015+ can (usually) just use the latest version installed.
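
    You can see why 2015-2022 behave as one family from the compiler's own version macro - a quick sketch (the specific _MSC_VER values are from memory, so treat them as approximate):

        #include <stdio.h>

        int main(void) {
        #ifdef _MSC_VER
            /* 1500 = VS2008, 1600 = VS2010, 1800 = VS2013,
               19xx = VS2015 through VS2022 - the major version froze at 19,
               which is why they all share one redistributable. */
            printf("_MSC_VER = %d\n", _MSC_VER);
        #else
            printf("not built with MSVC\n");
        #endif
            return 0;
        }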

    The libraries that bend this rule are DXVK and Intel's older DX9-on-12. These are translation shims that let the application speak DX9 (etc.) and translate it on the fly to the commands of a much more modern library - Vulkan in the case of DXVK, or DX12 in Intel's case.

    Edited to remove a reference to 9-on-12 that I think I had backwards.