anti-idpol action @ pkill @programming.dev
Posts: 5 · Comments: 360 · Joined: 2 yr. ago

  • In short, when a program runs, it allocates memory for the data it is using. The garbage collector, which you can think of as a 'program' (though not in a strict sense, it's just a part of the runtime), then takes care of freeing the memory that is no longer needed. In languages with manual memory management, like C or C++, it is up to the programmer to properly (de)allocate memory, which can lead to issues like dangling pointers (references to already-freed memory, which can cause unpredictable behavior and compromise your whole system's security) or memory leaks (gradually increasing memory usage over time in the absence of any user action that would explain it, such as opening more and more browser tabs; that's also partly why in the past you often had to restart your PC every once in a while to free trashed memory). In Go most of this is handled by the GC; only code that uses the unsafe package or relies on external resources, like an open file or a database connection, needs to explicitly release them (usually by calling something like defer db.Close(); see the sketch at the end of this comment).

    Also, Go strikes a nice balance between low level and high level; one example is its use of pointers. They can be intimidating for beginner coders, but in practice you seldom need uintptr, double pointers or even slice (list) pointers, and they have a tremendous performance impact: in some operations, especially loops, without a pointer you copy the data you're iterating over by value instead of just passing its memory address.

    That matters especially with large sets of data: 160 bytes might seem minuscule for a single call, but if you loop over 1 million records, that's 160 MB of copying (for example, in some database used by the municipal authorities of a large city). That's one of the reasons some databases, like CockroachDB, are written in Go.
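
    A minimal Go sketch of both points, with made-up names and sizes (Record, its ~160-byte layout and the temp file are purely illustrative): the GC reclaims ordinary allocations on its own, external resources still need an explicit deferred Close(), and iterating a slice of large structs by index/pointer avoids copying every element.

    ```go
    package main

    import (
        "fmt"
        "os"
    )

    // Record is a hypothetical ~160-byte row, e.g. one entry in that municipal database.
    type Record struct {
        ID      int64    // 8 bytes
        Name    [72]byte // 72 bytes
        Address [72]byte // 72 bytes
        Balance float64  // 8 bytes
    }

    // totalByValue copies each ~160-byte Record into r on every iteration.
    func totalByValue(recs []Record) float64 {
        var sum float64
        for _, r := range recs {
            sum += r.Balance
        }
        return sum
    }

    // totalByPointer reads each Record in place; only an 8-byte pointer is formed.
    func totalByPointer(recs []Record) float64 {
        var sum float64
        for i := range recs {
            r := &recs[i]
            sum += r.Balance
        }
        return sum
    }

    func main() {
        // Ordinary allocations like this slice are freed by the GC once unreferenced;
        // no manual free/delete is ever written.
        recs := make([]Record, 1_000_000)

        // A file handle is an external resource, so it does need explicit cleanup,
        // just like the defer db.Close() mentioned above.
        f, err := os.CreateTemp("", "records-*.bin")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name()) // runs last: delete the temp file
        defer f.Close()           // runs first when main returns

        fmt.Println(totalByValue(recs), totalByPointer(recs))
    }
    ```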

  • Manjaro is so notorious for its bad maintenance that it even gained a website tracking the last time they screwed something up. I'm glad I haven't seen anyone recommend that shitty distro in a while. Tbh nix (the package manager) has given me excellent stability no matter whether I used it on macOS or Artix. It's been more than a year since I had to reinstall my OS or deal with any large-scale system breakage. I also have grub set up to offer both an LTS and an edge kernel, for example.

    The last installation that broke for me was well over a year ago; it was openSUSE Tumbleweed and it also used btrfs, which is a pretty nice FS if set up correctly, but quite slow by default. Then I switched to Alpine, since I'd already been using it on a VPS for a couple of months and absolutely loved it. I don't count fucking up the configuration files as system breakage, because I assume the consensus is that we're talking about unexpected issues here. Getting rid of GDM, glibc, bash, systemd, coreutils and similar bloat not only speeds up your system, it also improves its security and stability.

    I wonder when I'll become deranged enough to start tinkering with BSDs and Gentoo; it'll be pretty funny if, instead of wasting my time gaming, I waste it hacking on my system to improve its responsiveness by 1-2% lmao

  • Pop OS ships archaic software packages. For me Alpine strikes a good balance between stability and new stuff (no graphical installer, though). On the same note, my gaming daily driver, Artix, which is based on Arch, has never broken, but that might be because I installed a lot of my software using nix, cargo and flatpak.

  • Also, one really good practice from the pre-Copilot era still holds, and many new Copilot users, my past self included, might forget it: don't write a single line of code without knowing its purpose. Another thing is that while it can save a lot of time on boilerplate, whenever it uses your current buffer's contents to generate several lines of very similar code, you need to stop and think whether it wouldn't be wiser to extract the repetitive code into a method, as sketched below. Because while the output is usually algorithmically correct, good design still remains largely up to humans.
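
    For instance (a hypothetical sketch, not code from the comment; SignupForm and requireNonEmpty are made up), three near-identical generated validation blocks collapse into one small helper:

    ```go
    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    type SignupForm struct {
        Name  string
        Email string
        City  string
    }

    // requireNonEmpty replaces the three near-identical if-blocks a code
    // assistant would happily generate, one per field.
    func requireNonEmpty(field, value string) error {
        if strings.TrimSpace(value) == "" {
            return errors.New(field + " must not be empty")
        }
        return nil
    }

    func (f SignupForm) Validate() error {
        for _, c := range []struct{ field, value string }{
            {"name", f.Name},
            {"email", f.Email},
            {"city", f.City},
        } {
            if err := requireNonEmpty(c.field, c.value); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        fmt.Println(SignupForm{Name: "Ada"}.Validate()) // "email must not be empty"
    }
    ```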

  • It really depends on:

    1. How widely used the thing you want to use is. For example, it hallucinated Caddyfile keys when I asked it about setting up early data support for a reverse proxy to a Docker container; luckily Caddy's docs are really good, and it turned out to be an issue with the framework I use anyway, so I had to look it up myself after all. I guess it'd have been more likely to get this right on the first attempt if, say, I'd wanted to achieve that with Express behind Nginx. For even less popular technology like Elixir it's borderline useless beyond very high-level concepts that can apply to any programming language.
    2. How well documented it is; more widespread use can sometimes make up for bad docs.
    3. How much has changed since it was trained. It might also still suggest deprecated methods, since it doesn't discriminate between official docs and other sources like Stack Overflow in its training data.

    If you want to avoid these issues, I'd suggest reading the docs first, then looking up Stack Overflow or the likely name of the function you need to write on grep.app, and only then using an LLM as a last resort. They're usually good for prototyping, less so for more specific things.