Posts: 4 · Comments: 178 · Joined: 2 yr. ago

  • At work, if you have the option, consider using KeePassXC or similar software. That will give you a properly encrypted secrets file along with password-manager features.
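
    If you live in a terminal, KeePassXC also ships keepassxc-cli; a minimal sketch (the database name and the entry name work/github are just examples, and the flags are per recent versions, so check keepassxc-cli --help if yours differs):

        keepassxc-cli db-create --set-password secrets.kdbx       # create the encrypted database
        keepassxc-cli add --username me secrets.kdbx work/github  # add an entry
        keepassxc-cli clip secrets.kdbx work/github               # copy its password to the clipboard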

  • Google reminds me more and more of the Microsoft of the 90s. That’s exactly the kind of compatibility-breaking, asinine move MS would do 30 years ago. Sigh…

  • What happens if you redirect all traffic to a sinkhole, rather than to 127.0.0.1? Do the devices still freak out when they talk to a web server which returns a 404? Just morbidly curious…
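
    For anyone wanting to reproduce this with a dnsmasq-based blocker, the two behaviours are just different answers handed back for the name (tracker.example.com is a made-up placeholder):

        # /etc/dnsmasq.conf – pick one of these per blocked name
        address=/tracker.example.com/127.0.0.1   # classic: point the device back at itself
        address=/tracker.example.com/0.0.0.0     # sinkhole-style: an address that goes nowhere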

  • I think descriptive and useful error messages are OK to report as enhancements. They don’t have to be functional bugs.

  • Who knows anymore with these youngsters’ vernacular?

  • Classic! Love this clip!!!

  • Better late than never: I responded! Check your DM. :)

  • Huh? ZFS is not 100% userspace. You’re right that ZFS doesn’t need hardware RAID (in fact, it’s incompatible), but the standard OpenZFS implementation (unless you’re referring to the experimental FUSE-based one) does use kernelspace on both FreeBSD and Linux.
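
    Easy to check on a live system (generic commands, nothing install-specific):

        lsmod | grep zfs       # Linux: shows the OpenZFS kernel module if loaded
        kldstat | grep -i zfs  # FreeBSD: likewise, though ZFS may be compiled into the kernel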

  • I have the whole series as DRM-free MP3. Let me know if you want it.

  • The Orion browser for iOS/iPadOS supports both Firefox and Chromium extensions; however, the support is quite buggy and limited. Nonetheless, a valiant effort by the Orion devs.

  • Actually, the grub 0.x series had much more useful rescue-shell tab completion than the latest release. You could easily list all boot devices, partitions, and even filesystems and their contents, all from the rescue shell. That meant you could boot into Linux and reinstall grub in the MBR to fix it, all without using a boot CD/USB! Good luck doing that with the latest version of grub and UEFI.
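
    From memory, a typical fix-the-MBR session in the 0.x rescue shell looked roughly like this (disk and partition numbers are examples):

        grub> find /boot/grub/stage1   # locate the partition holding grub's stage1
        grub> root (hd0,0)             # tab completion worked on devices and paths here
        grub> setup (hd0)              # reinstall grub into the MBR of the first disk
        grub> quit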

    Getting into the BIOS setup on legacy firmware was also very simple. On most machines it was the three-finger salute followed by F1 or Delete, or rarely F11 or F12.

    The boot process was simple, and the BIOS had just one simple task: load and execute the first 512 bytes of the disk that was designated as the boot device. That’s it.

  • Ah yes, simplicity. MBR, with all its limitations, had one killer feature: it was extremely simple.

    UEFI, as powerful as it is, is the opposite of simple: so many moving parts, and just as many potential failure points. Unfortunately, it seems like modern software is just that: more complex and prone to failure.
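
    The contrast is easy to demonstrate: the entire legacy boot contract is one 512-byte sector ending in the 0x55 0xAA signature. A quick read-only check (assuming /dev/sda is the boot disk; double-check the device name on your machine):

        sudo dd if=/dev/sda bs=512 count=1 status=none | tail -c 2 | xxd
        # a bootable MBR should show: 55aa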

  • You should see/try socialist/communist toilet paper. Not only is it thin like this, it will also not-so-gently exfoliate your anus.

    Source: Cuban resorts and lived experience in the former Soviet Union during the 80’s and early 90’s.

  • Good point! I assumed the worst, but it’s possible the array is rebuilding, or even already rebuilt and just needs to be mounted.
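
    On Linux md RAID that’s quick to confirm (generic checks; /dev/md0 is a placeholder name):

        cat /proc/mdstat               # lists arrays and any resync/rebuild progress
        sudo mdadm --detail /dev/md0   # state, RAID level, and member disks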

  • According to the LocalSend docs, these are the ports that need to be opened:

    - Multicast (UDP): port 53317, address 224.0.0.167
    - HTTP (TCP): port 53317

    AFAIK the macOS firewall is app-based, at least in the GUI. So depending on how you installed LocalSend, you may have to add it to the list of allowed apps: https://support.apple.com/en-ca/guide/mac-help/mh34041/mac

    You may be able to add the ports above to /etc/pf.conf manually, but AFAIK messing with pf on macOS is not recommended.
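
    For completeness, the rules would look something like this (standard pf syntax, untested on macOS, and Apple may reset pf.conf on OS updates):

        # allow LocalSend discovery and transfers
        pass in proto udp from any to 224.0.0.167 port 53317
        pass in proto tcp from any to any port 53317
        # reload with: sudo pfctl -f /etc/pf.conf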

    The other thing I wanted to ask is about Vallum. If you have it running on that Mac, would it not “take over” the macOS firewall?

  • Assuming you were using a Linux software RAID, you should be able to recover it.

    The first step would be to determine what kind of RAID you were using… btrfs, zfs, mdraid/dmraid/lvm… do you know what kind you set up?

    To start the process, try reconnecting your RAID disks to a working Linux machine, then try checking:

    1. The sudo lsblk command will give you a list of all connected disks, their sizes, and partitions.
    2. Check the partition tables on the disks, e.g. sudo fdisk -l /dev/sda (that’s a lowercase L, and /dev/sda is your disk).
    3. Assuming you used a standard Linux software RAID, try sudo mdadm --examine /dev/sda1. If all goes well, this command should give you an idea of what state the disk is in, what RAID level you had, etc.
    4. Next, I would try and see if mdadm can figure out how to reassemble the array, so try sudo mdadm --examine --scan. That should hopefully produce output with the name of the RAID array block device (e.g., /dev/md0), the RAID level, and the members of the RAID array (number of disks). Let me know what you discover…

    Note: if you used zfs or btrfs, do not do steps 3 and 4; they are md RAID-specific.
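
    If it does turn out to be md RAID, the whole sequence would look roughly like this (device names are placeholders; substitute whatever lsblk reports):

        sudo mdadm --examine --scan      # read RAID metadata from all member disks
        sudo mdadm --assemble --scan     # attempt automatic reassembly of the array
        cat /proc/mdstat                 # confirm its state / watch any rebuild
        sudo mount -o ro /dev/md0 /mnt   # mount read-only first, just to be safe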