
Posts 2 · Comments 48 · Joined 3 yr. ago

  • I don't use Gentoo but I still frequent the Gentoo Wiki and pick apart packages because it's such a great resource for OpenRC.

  • But most importantly, it won’t work in the end. These scraping tech companies have much deeper pockets and can use specialized hardware that is much more efficient at solving these challenges than a normal web browser.

    A lot of people don't seem to be able to comprehend this. Even the most basic server hardware these companies have access to is many times more powerful than the best gaming PC you can get right now. And if things get too slow they can always just spin up more nodes, which is trivial for them. If anything, they could use this as an excuse to justify higher production costs, which would make the resulting datasets and models more valuable.

    If this PoW crap becomes widespread it will only make the Internet shittier and less usable for the average person in the long term. I despise the idea of running completely arbitrary computations just so some web admin somewhere can be relieved to know that the CPU spikes they see, coming from their shitty NodeJS/Python framework that generates all the HTML+CSS on the fly, does a couple of round trips, and adds tens of lines of logs on every single request, are maybe, hopefully, caused by a real human and not a sophisticated web crawler.

    My theory is people like to glaze Anubis because it's linked to the general "Anti-AI" sentiment now (thanks to tech journalism), and also probably because its mascot character is an anime girl and the Developer/CEO of Techaro is a streamer/vtuber.

  • Exactly. All modern CPUs are so standardized that there is little reason to store all the data in ASCII text. It's so much faster and less complicated to just keep the raw binary on disk.

  • NVK doesn't support older cards though, last time I checked. Pretty funny how I ended up with a stack of paperweights because Nvidia dropped support and Nouveau/NVK can't get their shit together; instead of focusing on existing hardware they'd rather keep chasing the "latest and greatest".

  • AI? Look, I helped a friend fix a new install. It wasn't Linux's fault, it was a setting in the BIOS that needed to be changed. But the AI had them trying all sorts of unrelated things and was never going to help. Use it with a grain of salt.

    I have the same experience, but sometimes it was even worse: sometimes the AI would confidently recommend doing things that could lead to breakage. Personally I recommend against using AI to learn Linux. It's just not worth it and will only give new users a false impression of how things work on Linux. People are much better off reading documentation (actual documentation, not SEO slop on random websites) or asking for help in forums.

  • It has a green lock icon with the word "Private" next to it so it's fine bro.

  • arch-meson is a small wrapper script for meson:

    $ cat /usr/bin/arch-meson
    #!/bin/bash -ex
    # Highly opinionated wrapper for Arch Linux packaging
    
    exec meson setup \
      --prefix        /usr \
      --libexecdir    lib \
      --sbindir       bin \
      --buildtype     plain \
      --auto-features enabled \
      --wrap-mode     nodownload \
      -D              b_pie=true \
      -D              python.bytecompile=1 \
      "$@"
    
  • That's only been my experience with software that depends on many different libraries. And it's extra painful when you find out that it needs hyper specific versions of libraries that are older than the ones you have already installed. Rust is only painless because it just downloads all the right dependencies.

  • Some old software does use 8-bit extended ASCII for special/locale-specific characters. There's also the UTF-8 trick where the high bit of each byte signals whether it belongs to a multi-byte sequence.

  • Many km² of precious wasteland. Those commies don't hold anything dear.

  • COW filesystems like BTRFS/ZFS with btrbk/sanoid are great for this. Only the initial copy may take a while; after that, only the delta between the source and the destination needs to be transferred to synchronize. On my main server I have the OS on a single drive with BTRFS, and all the actual data lives on a 4-disk zpool in raidz2. I have cron jobs set up to do hourly snapshots on both, and I keep about a week's worth of history. The BTRFS one gets synced to an external drive every 24 hours, while the zpool gets synced to another external 4-disk zpool on a weekly basis.

  • Next, their VPS expires and the instance disappears completely. (I hope not.) A reminder to any sysadmin to do regular off-site backups in case something like that happens.

  • Anyone who runs their own DNS can just add a record to their config (for example in unbound):

    local-data-ptr: "37.187.73.130 hexbear.net"
    local-data: "hexbear.net A 37.187.73.130"
    
  • RIP. It will be a miracle if they can get that domain back.

  • Interesting feature, I had no idea. I just verified this with gcc, and indeed the return register is set to 0 when main returns without an explicit return statement.

  • Permanently Deleted

  • Unless your machine has error correcting memory. Then it will take literally forever.

  • Permanently Deleted

  • Your CPU has big registers, so why not use them!

    #include <x86intrin.h>
    #include <stdio.h>
    
    static int increment_one(int input)
    {
        int __attribute__((aligned(32))) result[8]; 
        __m256i v = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 1, input);
        v = _mm256_hadd_epi32(v, v); /* integer horizontal add (AVX2), so e0 = input + 1 */
        _mm256_store_si256((__m256i *)result, v);
        return *result;
    }
    
    int main(void)
    {
        int input = 19;
        printf("Input: %d, Incremented output: %d\n", input, increment_one(input));
        return 0;
    }
    
  • Imagine defending this guy. I will never understand people who like influencers.

  • I ran into the same issue a few weeks ago. In my case I didn't need real-time updates, but I still needed to bulk insert data, which Postgres is terrible at (especially when dealing with tens of millions of rows). I ended up using MariaDB (since that was my first exposure to SQL and I don't remember having issues with it), and it turns out it handles bulk inserts a lot better without slowing down much. I wish PostgreSQL were better at this.

  • GenZedong @lemmygrad.ml

    25 years after NATO bombing: How does Belgrade remember?

    GenZedong @lemmygrad.ml

    What is mine is mine, what is yours is also mine