
Posts 3 · Comments 163 · Joined 2 yr. ago

  • I did not realize nano implemented syntax highlighting!

  • Wayland replaces the older X protocol. It doesn't have to operate with older protocols. You might be thinking of XWayland which is a proxy that receives X API calls from apps written for X, and translates those to the Wayland API so that those apps can run under Wayland implementations. Window managers can optionally run XWayland, and many do. But as more apps are updated to work natively with Wayland, XWayland becomes less important, and might fade away someday.

    PipeWire replaces PulseAudio (the most popular sound server before PipeWire). Systems running PipeWire often run pipewire-pulse which does basically the same thing that XWayland does - it translates from the PulseAudio API to the PipeWire API. It's a technically optional, but realistically necessary compatibility layer that may become less relevant over time if apps tend to update to work with PipeWire natively.

    So no, both Wayland and PipeWire are capable of operating independently of other protocols.

  • Oh this is just the thing for playing bard, and casting "vicious mockery" several times per combat

  • The justification for invading Iraq was a claim that they were developing nuclear weapons. It was well known at the time that the evidence was flimsy, and that even if true it was a flimsy excuse for an invasion. The main piece of evidence was an intercepted shipment of aluminum tubes that were soon shown to have nothing to do with a nuclear program. (See https://en.m.wikipedia.org/wiki/Iraqi_aluminum_tubes). That one is not a conspiracy theory.

  • Everything gets done so mind-bogglingly slowly! There's always someone you have to talk to, who has to talk to someone else. Bureaucratic processes often end up taking hours or days!! I knew to expect this - but experiencing it firsthand is a shock. How do people get anything done? They've computerized some things, which helps. But every interface and every database schema has to be designed by a human, which I'm told is expensive and takes even longer.

  • I like to use Obsidian for this kind of thing. It has tagging, and you can link notes and see the network of links in a visualizer. There's also a "canvas" feature that lets you lay out notes spatially in whatever way makes sense to you. I assume there is a web clipping plugin which could make it easy to grab the comment content and link at the same time.

  • NixOS puts your full system configuration in a portable set of files. You can easily reproduce the same configuration on another machine. I also like that instead of accumulating a growing list of packages that I don't remember why I installed, I have package lists specified in files with comments, and split into modules that I can enable or disable.

    IMO NixOS works best when you also use Home Manager to apply the same benefits to your user app configurations and such. (OTOH you can use Home Manager to get those benefits without NixOS. But I like that I get consistency between the OS-level and user-level configurations, and that both use the same set of packages.) I use Home Manager to manage my list of installed packages, my dot files, Gnome settings, Firefox about:config settings, and so on.

    You might be installing packages imperatively with nix profile install or with nix-env -i. If that's the case you're not going to see the full benefits of a declarative system, in my opinion. I prefer to install packages by editing my Home Manager configuration and running home-manager switch.
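    To make that concrete, a declarative package list in Home Manager looks something like this simplified sketch (the package names are just examples):

    ```nix
    # home.nix - simplified sketch; the package names are examples
    { pkgs, ... }:
    {
      home.packages = with pkgs; [
        ripgrep  # fast project-wide search
        jq       # JSON wrangling in scripts
      ];
    }
    ```

    Then home-manager switch makes the installed set match the file, so deleting a line uninstalls the package.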

    I like that NixOS + Home Manager automates stuff that I used to do by hand. A couple of the things that I do, or have done, are to:

    • test an experimental window manager, Niri
    • use Neovide (a GUI frontend for Neovim) with a custom patch to tweak font rendering

    Now I have that kind of stuff automated:

    • Since there was no packaging for Niri when I started trying it, I wrote my own in my NixOS config, with a NixOS module to set up a systemd unit to run it. Because Nix packages are effectively build scripts, whenever I update, Nix automatically pulls the latest version of Niri and compiles it without me having to think about it anymore.
    • I use the Neovide package from nixpkgs with an override to compile with my custom patch. Like with Niri, my configuration automatically gets the latest Neovide version and builds it with my patch when I update, and I don't have to think about it anymore. I use this overlay to do that:

    ```nix
    modifications = final: prev: {
      neovide = final.neovide.overrideAttrs (oldAttrs: {
        patches = (oldAttrs.patches or [ ]) ++ [ ./neovide-font-customization.patch ];
      });
    };
    ```
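    Roughly, the shape of the Niri setup (a simplified sketch, not the exact module - niri-src stands in for a hypothetical flake input tracking the upstream repo):

    ```nix
    # Simplified sketch: build Niri from a flake input and run it as a
    # systemd user service. `niri-src` is a hypothetical flake input.
    { pkgs, niri-src, ... }:
    let
      niri = pkgs.rustPlatform.buildRustPackage {
        pname = "niri";
        version = "unstable";
        src = niri-src;
        cargoLock.lockFile = "${niri-src}/Cargo.lock";
      };
    in
    {
      systemd.user.services.niri = {
        description = "niri compositor";
        wantedBy = [ "graphical-session.target" ];
        serviceConfig.ExecStart = "${niri}/bin/niri";
      };
    }
    ```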

    You can see that I compile some things from source. That's fine on my desktop, but it takes a while on my travel laptop. But I don't need to compile on my laptop, because I can use Nix's binary cache feature. I push my NixOS and Home Manager configurations to GitHub, and I have Garnix build everything that I push. Garnix stores everything it builds in a binary cache. So when I pull my latest configuration version on my laptop, it downloads binaries from that cache.
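    The client side of that is just pointing Nix at the extra cache, something like the sketch below (the public key is a placeholder - take the real value from Garnix's documentation):

    ```nix
    # Sketch: add the Garnix binary cache as a substituter.
    {
      nix.settings = {
        substituters = [ "https://cache.garnix.io" ];
        trusted-public-keys = [ "cache.garnix.io:PLACEHOLDER-see-Garnix-docs" ];
      };
    }
    ```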

  • I'm also a PaperWM fan. For switching I mostly use spatial window-switching controls: Meta+Left/Right to switch windows, Page Up/Page Down to switch workspaces. Plus I use the Gnome overview's search-driven app finder, and Advanced Alt-Tab Switcher (AATS), but only for its fuzzy search feature to switch to specific windows within an app.

    PaperWM has an option to hide windows in a "scratch" layer. I put chat and music programs there, and summon them with AATS.

    I have an ultrawide monitor, and I put a terminal and editor side-by-side in a ¼-¾ ratio. I set browser windows to ½ width. Those ratios let me see important parts of a browser window next to the editor if I slide the terminal out of view to partially expose a browser on the other side. Or I can move the terminal next to the browser and see both fully.

  • The first computer I used was (I think) a CP/M system that could run BASIC, and I used to use it to play Castle in the early '90s.

    The first computer of my own was a Gateway laptop for college in 2002. It was the first Wi-Fi device I laid hands on. I immediately set it up to play music to wake me up in the morning, and I listened to the fans running all night.

  • Yeah, that makes a lot of sense. If the thinking is that AI learning from others' works is analogous to humans learning from others' works then the logical conclusion is that AI is an independent creative, non-human entity. And there is precedent that works created by non-humans cannot be copyrighted. (I'm guessing this is what you are thinking, I just wanted to think it out for myself.)

    I've been thinking about this issue as two opposing viewpoints:

    The logic-in-a-vacuum viewpoint says that AI learning from others' works is analogous to humans learning from others' works. If one is not restricted by copyright, neither should the other be.

    The pragmatic viewpoint says that AI imperils human creators, and it's beneficial to society to put restrictions on its use.

    I think historically that kind of pragmatic viewpoint has been steamrolled by the utility of a new technology. But maybe if AI work is not copyrightable that could help somewhat to mitigate screwing people over.

  • That sounds like a good learning project to me. I think there are two approaches you might take: web scraping, or an API client.

    My guess is that web scraping might be easier for getting started because scrapers are easy to set up, and you can find very good documentation. In that case I think Perl is a reasonable choice of language since you're familiar with it, and I believe it has good scraping libraries. Personally I would go with TypeScript since I'm familiar with it, it's not hard (relatively speaking) to get started with, and I find static type checking helpful for guiding one to a correctly working program.

    OTOH if you opt to make a Lemmy API client, I think the best language choices are TypeScript or Rust, because that's what Lemmy is written in, so you can import the existing API client code. Much as I love Rust, it has a steeper learning curve, so I would suggest going with TypeScript. The main difficulty with this option is that you might not find much documentation on how to write a custom Lemmy client.
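    For a taste of the API-client route, here's a minimal TypeScript sketch that talks to Lemmy's v3 HTTP API directly with fetch rather than importing the existing client. Treat the endpoint and parameter names as assumptions to verify against a real instance:

    ```typescript
    // Minimal sketch of a hand-rolled Lemmy client.
    // Assumes the GET /api/v3/post/list endpoint with limit/sort params.
    type ListPostsParams = { limit?: number; sort?: string };

    function listPostsUrl(instance: string, params: ListPostsParams): string {
      // Build the request URL for listing posts on a given instance.
      const url = new URL("/api/v3/post/list", instance);
      if (params.limit !== undefined) url.searchParams.set("limit", String(params.limit));
      if (params.sort !== undefined) url.searchParams.set("sort", params.sort);
      return url.toString();
    }

    // Usage (needs a runtime with fetch, e.g. Node 18+):
    // const res = await fetch(listPostsUrl("https://lemmy.ml", { limit: 5, sort: "New" }));
    // const posts = await res.json();
    ```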

    Whatever you choose, I find it very helpful to set up LSP integration in vim for whatever language you use, especially if you're using a statically type-checked language. I'll be a snob for just a second and say that now that programming support has generally moved to the portable LSP model, the difference between vim+LSP and an IDE is that the IDE has a worse editor and a worse integrated terminal.

  • I sometimes write a flake with those 4 lines of Nix code, and it comes out just messy enough that tbh I'm happier adding an input to handle that. But I recently learned that the nixpkgs flake exports the lib.* helpers through nixpkgs.lib (as opposed to nixpkgs.legacyPackages.${system}.lib) so you can call helpers before specifying a system. And nixpkgs.lib.genAttrs is kinda close enough to flake-utils.lib.eachSystem that it might make a better solution.

    Like where with flake-utils you would write,

    ```nix
    flake-utils.lib.eachSystem [ "x86_64-linux" "aarch64-darwin" ] (system:
    let
      pkgs = nixpkgs.legacyPackages.${system};
    in
    {
      devShells.default = pkgs.mkShell {
        nativeBuildInputs = with pkgs; [
          hello
        ];
      };
    })
    ```

    Instead you can use genAttrs,

    ```nix
    let
      forAllSystems = nixpkgs.lib.genAttrs [ "x86_64-linux" "aarch64-darwin" ];
      pkgs = forAllSystems (system:
        nixpkgs.legacyPackages.${system}
      );
    in
    {
      devShells = forAllSystems (system: {
        default = pkgs.${system}.mkShell {
          nativeBuildInputs = with pkgs.${system}; [
            hello
          ];
        };
      });
    }
    ```

    It's more verbose, but it makes the structure of outputs more transparent.

  • I've been reading about increasing unionization and strike activity, leading to better deals for large groups of workers. The industry-level negotiations we're already seeing are helpful in isolation; but that's also the kind of energy that can lead to economic reforms that have a real impact on quality of life. Workers seem like the little guys, until a lot of them are pulling in the same direction, and then suddenly their demands become existentially important.

    About a century-ish ago Americans were worse off than they are now. That led to a desire for change, which led to decades of trust-busting, unionization, and regulation. We got things like weekends off, and a livable minimum wage. And, not entirely unrelated, we also got national parks, the EPA, and endangered species preservation. We've backslid a lot since those advances. But we can get them back, and push the needle even further next time. We did it before, we can do it again.

  • I pretty much always use list/iterator combinators (map, filter, flat_map, reduce), or recursion. I guess the choice is whether it is convenient to model the problem as an iterator. I think both options are safer than for loops because you avoid mutable variables.

    In nearly every case the performance difference between the strategies doesn't matter. If it does matter, you can always change it once you've identified your bottlenecks through profiling. But if your language implements optimizations like tail call elimination to avoid stack build-up, or stream fusion / lazy iterators, then you might not see a performance benefit from a for loop anyway.
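    As a toy illustration (my own made-up numbers) of the same computation written both ways:

    ```typescript
    const orders = [12, 55, 7, 80, 3];

    // Combinator style: no mutable variables.
    const totalLarge = orders
      .filter((n) => n >= 10)   // keep values of at least 10
      .map((n) => n * 2)        // double each one
      .reduce((acc, n) => acc + n, 0); // sum them up

    // Loop style: same result, but `sum` is mutable state.
    let sum = 0;
    for (const n of orders) {
      if (n >= 10) sum += n * 2;
    }
    ```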

  • Allow me to share Federated Wiki. I don't think it uses ActivityPub, but otherwise I think it's close to what you described. Instead of letting anyone edit articles it uses more of a fork & pull request model.

  • I think Picard was willing to sacrifice himself to save the kids. He's an officer who signed up for a risky job - they are not, and also they're kids. I think he thought that going with them would slow things down enough to add unacceptable risk for the kids. And they did end up spending a bunch of time cobbling together an apparatus to move Picard during which the lift could have fallen.

    When the kids refused to go maybe that changed Picard's calculation: the advantage of going without him diminishes if they use up time arguing. Or maybe it's TV writing.

    But maybe Picard wasn't certain that the lift would fall. Or maybe if he'd stayed he would have managed to pull out a Picard move to save himself at the last second - you know, the kind that's easier to do when there aren't kids watching. Or maybe, as far as he knew someone might rescue him in time. But yeah, he probably would have died, and the kids' mutiny was the only out that let him save himself while also trying to be noble.

  • when relays are blown
    when power reserves fail
    when life support is gone
    gravity plating's pull is relentless
    it will carry on

  • Debian unstable is not really unstable, but it's also not as stable as Ubuntu. I'm told that when bugs appear they are fixed fast.

    I ran Debian testing for years. That is a rolling release where package updates are a few weeks behind unstable. The delay gives unstable users time to hit bugs before they get into testing.

    When I wanted certain packages to be really up-to-date I would pin those select packages to unstable or to experimental. But I never tried running full unstable myself so I didn't get the experience to know whether that would be less trouble overall.
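    For reference, pinning a single package to unstable on a testing system is a small apt preferences entry, roughly like this sketch (the package name is just an example, and unstable also has to be listed in your sources):

    ```
    # /etc/apt/preferences.d/unstable-pins - illustrative sketch
    Package: neovim
    Pin: release a=unstable
    Pin-Priority: 990
    ```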