DigitalDilemma @ digdilem@lemmy.ml
Posts: 2 · Comments: 552 · Joined: 2 yr. ago

  • micro looks very impressive. I'm too invested in vi to move away from that, but it's great to see alternatives, especially those focused on being easy to use (like jed).

    The only weird thing from the screencap I saw was that you need to edit a JSON file to change keybindings - doesn't that go against the 'easy to use' ethos, or is that something that's planned to change?
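
    For what it's worth, the file in question seems to be ~/.config/micro/bindings.json (going by micro's docs - worth double-checking), and a rebind is just a key-to-action map, something like:

    $ cat ~/.config/micro/bindings.json
    {
        "Ctrl-y": "Undo",
        "Ctrl-s": "Save"
    }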

  • This is exactly why I never buy Early Access games. The biggest thrill for me is starting a new game, and if that isn't as good as it can possibly be, then that opportunity has been wasted.

    Sure, it /may/ get better at some undefined point in the future, but there are just so many games out there that are already complete and won't need revisiting because they got better. Once that first play is gone, it's gone.

  • And it was a good design - its universal (aha) adoption proves that.

    Those of us old enough to remember the pain of using 9- and 25-pin serial leads and having to manually set baud rates and protocols, along with LPT, external SCSI and manufacturer-specific sockets, will probably agree this was a problem that needed solving, and USB did solve it.

  • Thanks, I've not heard of that, it sounds like it's worth a look.

    I don't think the tunnel would complicate blocking via the Cloudflare API, but there is a limit on the number of IPs you can ban that way, so some expiry rules are necessary.

  • Fail2ban is something I've used for years - in fact it was working on these very sites before I decided to dockerise them - but I find it a lot less simple in this application, for a couple of reasons:

    The logs are in the docker containers. Yes, I could get them squirting to a central logging server, but that's a chunk of overhead for a home system. (I've done that before, so it is possible, just extra time.)

    And getting the real IP through from Cloudflare. Yes, CF passes headers with it in, and HAProxy can forward that on with a bit of tweaking. But not every docker container for serving webpages (notably the phpBB one) will correctly log the source IP even when it's passed through from HAProxy as the forwarded IP, instead showing the IP of the proxy. I've other containers that do display it, so it can obviously be done, but I'm not clear yet why it's inconsistent. Without that, there's no blocking.
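
    For reference, the HAProxy tweak I mean is roughly this - a sketch only, with made-up frontend/backend names, and it assumes Cloudflare's CF-Connecting-IP header carries the real client IP:

    # haproxy.cfg (sketch)
    frontend www
        bind :443 ssl crt /etc/haproxy/site.pem
        # Prefer the client IP Cloudflare supplies; otherwise add our own
        http-request set-header X-Forwarded-For %[req.hdr(CF-Connecting-IP)] if { req.hdr(CF-Connecting-IP) -m found }
        option forwardfor if-none
        default_backend webapps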

    And... you can use the Cloudflare API to block IPs, but there's a fixed limit on the free accounts. When I set this up before with native webservers, blocking malicious URL-scanning bots via the API, I reached that limit within a couple of days. I don't think there's automatic expiry, so I'd need to find or build a tool that manages the blocklist remotely. (Or use HAProxy to block and accept the overhead.)
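
    Something like this rough sketch is what I have in mind - untested, pagination elided, and $CF_TOKEN/$CF_ZONE plus the endpoint are assumptions taken from Cloudflare's v4 API docs, so verify before trusting it:

    #!/bin/sh
    # Delete Cloudflare IP access rules older than 7 days (GNU date syntax)
    cutoff=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)
    curl -s -H "Authorization: Bearer $CF_TOKEN" \
      "https://api.cloudflare.com/client/v4/zones/$CF_ZONE/firewall/access_rules/rules" \
      | jq -r --arg cutoff "$cutoff" '.result[] | select(.created_on < $cutoff) | .id' \
      | while read -r id; do
          curl -s -X DELETE -H "Authorization: Bearer $CF_TOKEN" \
            "https://api.cloudflare.com/client/v4/zones/$CF_ZONE/firewall/access_rules/rules/$id" >/dev/null
        done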

    It's probably where I should go next.

    And yes - you're right about scripting. Automation is absolutely how I like to do things. But so many problems only become clear retrospectively.

  • Obesity is increasingly a problem in low- and middle-income countries.

    Isn't that always going to be the case, regardless of ingredient adjustment? It feels like people who have had very little food will tend towards over-compensating during times of glut - perhaps not so much in the generation directly affected, but in the care they give to the next generations.

    As a vaguely related but less extreme example: I was born in 1970 in England to a lower middle-class family. My parents were wartime and post-war babies who had experienced rationing, and as a result I have very strong recollections of being made to "clear your plate" before I could leave the table. (Ironically, given this topic, the "there are starving children in Africa who would like that" line was given quite often.)

    Wasting food was the absolute highest sin I could commit and that's stayed with me to this day.

  • This is a common thing to need to do. Not all Linux GUI tools are perfect, and some calculate sizes differently (1000 vs 1024 soon mounts up to big differences). Also, if you're running as a normal user, you're not going to be seeing all the files.

    Here's how I do it as a sysadmin:

    As root, run:

    du /* -shc |sort -h

    "disk usage for all files in root, displaying a summary instead of listing all sub-files, and human-readable numbers, with a total. Then sort the results so that the largest are at the bottom"

    Takes a while (many minutes, up to hours or days if you have slow disks, many files or remote filesystems) to run on most systems, and there's no output until it finishes because it's piping to sort. You can speed it up by omitting the "|sort -h" bit, and you'll get summaries as each top-level dir is checked, but you won't have a nice sorted output.

    You'll probably get some permission errors when it goes through /proc or /dev.
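
    If the noise bothers you, the usual trick is to throw stderr away:

    du /* -shc 2>/dev/null | sort -h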

    You can be more targeted by picking some of the common places, like /var - here's mine from a Debian system; takes a couple of seconds. I'll often start with /var as it's a common place for systems to start filling up, along with /home.

    root@scrofula:~# du /var/* -shc |sort -h
    0       /var/lock
    0       /var/run
    4.0K    /var/local
    4.0K    /var/mail
    4.0K    /var/opt
    168K    /var/tmp
    4.1M    /var/spool
    5.5M    /var/backups
    781M    /var/log
    787M    /var/cache
    8.3G    /var/www
    36G     /var/lib
    46G     total

    Here we can see /var/lib has a lot of stuff in it, so we can look into that with du /var/lib/* -shc|sort -h - it turns out mine has some big databases in /var/lib/mysql and a bunch of docker stuff in /var/lib/docker, not surprising.

    Sometimes you just won't be able to tally what you're seeing with what you're using. Often that's because a file has been deleted or truncated but a process still holds it open, so the OS can't release the space until the last handle closes. That generally sorts itself out as processes restart or time out, but you can try to find the culprit with lsof, or, if the machine isn't doing much, a quick reboot.
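
    If lsof is installed, a quick way to spot those is to list open files whose on-disk link count has dropped to zero:

    # files deleted on disk but still held open by a process
    lsof +L1
    # or, more crudely, look for lsof's "(deleted)" annotation
    lsof -nP | grep '(deleted)'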

  • I think the bus factor would be a lot easier to cope with than a slowly progressing, semi-abandoned project and a White Knight saviour.

    In the event of the complete loss of a sole maintainer, it should be possible to fork and continue a project. That does require a number of things, not least a reliable person who understands the codebase and is willing to undertake it. Then the distros need to approve and change potentially thousands of packages that rely upon the project as a dependency.

    Maybe, before a library or any other software gets accepted into a distro, the distro should do more due diligence to ensure it's a sustainable project and meets requirements like solid ownership?

    The inherited debt from existing projects would be massive, and perhaps this is largely covered already - I've never tried to get a distro to accept my software.

    Nothing I've seen would completely avoid risk. Blackmail of an existing developer is not impossible to imagine. Even in this case, perhaps the new developer on xz started with pure intentions and was personally compromised later? (I don't seriously think that's the case here, though - this feels very much state-sponsored and very well planned.)

    It's good we're asking these questions. None of them are new, but the importance is ever increasing.

  • In what way did I bend your logic? I found your logic quite twisted to start with, and don't think I did alter it further.

    Also - not constructive? But you're the one that's being negative. I'm merely trying to point out that you'll have a very hard job not relying on foss as it stands today. Where we go from here is a much bigger question, but we've all got very used to having free software and, as I said, even if we all start paying huge amounts of money for the alternative, that doesn't mean it'll be safer. In fact, I rather suspect it'll be less safe, as issues like this then come with a commercial interest in not disclosing security problems. (As evidenced already by numerous commercial security exploits that were known about and hidden.)

  • Good luck with that.

    Commercial and closed source software is no safer, and may even be using the same foss third-party libs under the hood that you're trying to avoid. Just because foss licences generally require you to disclose you're using them, it doesn't mean that's what actually happens.

    And even if, by some miracle, they have a unique codebase - how secure is that? Even if an attacker can't reach the source, they can still locate exploits and develop successful attacks against it.

    At its core, all software relies upon trust. I don't know the answer to this, and we'll be here again soon enough.