
Posts: 1 · Comments: 464 · Joined: 2 yr. ago

  • That can work in some cases, but it's usually not that great for first-party projects where you want to be able to see and edit the code, and most package managers are OS- or language-specific, so they don't work well with multi-language projects or with languages that don't have a good package manager (SystemVerilog, for example).

  • People always say this, and I have seen it happen occasionally. But in practice, when it happens it's usually fairly obvious and not that confusing (especially with git blame).

    The frustration I've experienced from missing comments is several orders of magnitude greater than the frustration I've experienced from outdated comments. I think this is mostly an excuse to be lazy and not write comments at all.

  • Well, git is for source control, not binary artefacts

    Only because it is bad at binary artefacts. There's no fundamental reason you shouldn't be able to put them in version control.

    It's not much of an argument to say "VCSes shouldn't be able to store binaries because they aren't good at it".

    What are your requirements? What do you need this for?

    Typically there's a third-party or first-party project that I want to use in my project. Sometimes I want to be able to modify it too (a soft fork).

    And why do you think everyone else needs the same?

    Because I've worked at at least three companies that wanted to do this, and nobody had a good solution. I've talked to colleagues who worked at other companies that wanted this too. Often they come up with their own hacky solutions (git subtree, git subrepo, Google's repo, etc. etc. - there are at least half a dozen of these tools).

    It’s quite possible you are doing it wrong.

    No offence, but your instinctive defence of Git and your instant leap to "you're holding it wrong" are a pretty dead giveaway that you haven't stopped to think about how it could be better.

  • Tbh these aren't the big issues with Git. The biggest issues I have are:

    • Storing large files. LFS is a shitty hack that barely works.
    • Integrating other repos. Git submodules are a buggy hack, and Git subtree is... better... but still a hack that adds its own flaws.

    Fix those and it will take over from Git in a very short time. Otherwise it's just going to hang around as a slightly nicer but niche alternative.

  • Yeah, I was wondering that and I googled it but didn't find anything. The only thing you can obviously do is submit a tip.

    If you click on the author's name you can see they've written over 1000 articles for Hackaday, but on their own website they don't actually mention it at all, as far as I can see. You can tell how odd they are from their website anyway...

  • Full of WTFs.

    My default development environment on Windows is the Linux-like MSYS2 environment

    I think this sets the tone nicely lol.

    it’s clear at this point already that Zig is a weakly-typed language

    Uhm... pretty sure it isn't.

    You can only use the zig command, which requires a special build file written in Zig, so you have to compile Zig to compile Zig, instead of using Make, CMake, Ninja, meson, etc. as is typical.

    Yeah who wants to just type zig build and have it work? Much better to deal with shitty Makefiles 🤦🏻‍♂️

    Ignoring the obvious memory safety red herring,

    Uhhh

    we can worryingly tell that it is also a weakly-typed language by the use of type inference

    Ok this guy can be safely ignored.

    the fact that the unsafe keyword is required to cooperate with C interfaces gives even great cause for concern

    ?

    Rather than dealing with this ‘cargo’ remote repository utility and reliving traumatic memories of remote artefact repositories with NodeJS, Java, etc., we’ll just copy the .rs files of the wrapper directly into the source folder of the project. It’s generally preferred to have dependencies in the source tree for security reasons unless you have some level of guarantee that the remote source will be available and always trustworthy.

    Lol ok... Ignore the official tool that works extremely well (and has official support for vendoring), just copy files around, and then be surprised that it doesn't work.

    Although you can use the rustc compiler directly, it provides an extremely limited interface compared to e.g. Clang and GCC

    That is a good thing.

    You get similar struggles with just getting the basic thing off the ground

    Uhm yeah if you ignore the tutorials and don't use the provided tools. It's literally cargo init; cargo run.

    What an idiot.

  • They mean measure first, then optimize.

    This is also bad advice. In fact I would bet money that nobody who says that actually always follows it.

    Really there are two things that can happen:

    1. You are trying to optimise performance. In this case you obviously measure using a profiler, because that's by far the easiest way to find the slow parts of a program. It's not the only way though! Profiling really only works for micro-optimisations - you can't profile your way to architectural improvements. Nicholas Nethercote's posts about speeding up the Rust compiler are a great example of this.
    2. You are writing new code. Almost nobody measures code while they're writing it. At best you'll have a CI benchmark (the Rust compiler has this). But while you're actually writing the code it's mostly fine to just use your intuition: preallocate vectors, don't write O(N^2) code, use a HashSet where it fits, and so on - see the sketch below. There are plenty of things that good programmers can be sure enough are right that they don't need to constantly second-guess themselves.
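
    As a rough illustration of the intuition-level choices meant in point 2, here's a minimal Rust sketch (the function and data are made up, not taken from any project mentioned above):

        use std::collections::HashSet;

        // Hypothetical example: collect the items of `haystack` that also
        // appear in `needles`.
        fn common_items(haystack: &[u32], needles: &[u32]) -> Vec<u32> {
            // Preallocate to an upper bound instead of growing the Vec repeatedly.
            let mut found = Vec::with_capacity(haystack.len());
            // Build a HashSet for O(1) membership checks instead of writing an
            // O(N^2) nested loop over the two slices.
            let needle_set: HashSet<u32> = needles.iter().copied().collect();
            for &item in haystack {
                if needle_set.contains(&item) {
                    found.push(item);
                }
            }
            found
        }

    None of that needs a profiler; it's just the sort of default a good programmer reaches for while writing the code.
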
  • Do you realize how old assembly language is?

    Do you? These instructions were created in 2011.

    It predates hard disks by ten years and coincided with the invention of the transistor.

    I'm not sure what the very first assembly language has to do with RISC-V assembly?

  • flawed tests are worse than no tests

    I never said you should use flawed tests. You ask AI to write some tests. You READ THEM and probably tweak them a little. You think "this test is basic, but it's better than nothing and it took me 30 seconds." You commit it.
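
    For what it's worth, a "basic but better than nothing" test of that sort might look something like this in Rust (a purely hypothetical sketch; the function under test is made up):

        // Hypothetical function under test.
        fn clamp_percentage(value: i32) -> i32 {
            value.max(0).min(100)
        }

        #[cfg(test)]
        mod tests {
            use super::*;

            // The kind of obvious-cases test that's quick to read, tweak, and commit.
            #[test]
            fn clamps_out_of_range_values() {
                assert_eq!(clamp_percentage(-5), 0);
                assert_eq!(clamp_percentage(150), 100);
                assert_eq!(clamp_percentage(42), 42);
            }
        }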

  • AI is good at more than just generating stubs, filling in enum fields, etc. I wouldn't say it's good at much beyond "boilerplate" in the broad sense - it's good at stuff that isn't difficult, but also isn't so regular that it's possible to automate using traditional tools like IDEs.

    Writing tests is a good example. It's not great at writing tests, but it's definitely better than the average developer once you take into account the probability of them writing tests in the first place.

    Another example would be writing good error context messages (e.g. .with_context() in Rust). Again, I could write better ones than it does. But like most developers there's a pretty high chance that I won't bother at all. You also can't automate this with an IDE.
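
    As a concrete sketch of that (this assumes the anyhow crate, where with_context comes from the Context trait; the file name and function are made up):

        use anyhow::{Context, Result};

        // Hypothetical helper: read a config file, attaching context so the
        // eventual error says *which* file failed rather than just
        // "No such file or directory".
        fn load_config(path: &str) -> Result<String> {
            std::fs::read_to_string(path)
                .with_context(|| format!("failed to read config file {path}"))
        }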

    I'm not saying you have to use AI, but if you don't you're pointlessly slowing yourself down. That probably won't matter to lots of people - I mean I still see people wasting time searching for symbols instead of just using a proper IDE with go-to-definition.

  • Assembly is very simple (at least RISC-V assembly, which is what I mostly work with) but also very tedious to read. It doesn't help that the people who choose the instruction mnemonics have extremely poor taste - e.g. lb, lh, lw, ld instead of load8, load16, load32, load64. Or j instead of jump. Who needs to save characters that much?

    The over-abbreviation is some kind of weird flaw that hardware guys all have. I've wondered if it comes from labelling pins on PCB silkscreens (MISO, CLK, etc.)... Or maybe they just have bad taste.

    I once worked on a chip that had nested acronyms.