
Posts: 1 · Comments: 301 · Joined: 2 yr. ago

  • No, I agree that a package manager or app store is safer than either curl-bash or a random binary. But a lot of software is installed via standalone binaries that haven't been vetted by package-manager teams, and most people don't use Nix. Even with a package manager like apt, there are still ways to distribute packages that aren't vetted by the central authority that owns the package repo (for apt, that mechanism is PPAs). And when introducing a new piece of software, it's much easier to reach a wide audience with a standalone binary or an install script than to get it added to every platform's package manager.

  • I haven't worked on any teams where all members committed "every 24 hours", and there have always been some branches that lived longer than we'd like (usually unfinished work that got deprioritized but was kept around as an eventual "todo"). Still, most teams I've worked on have followed the basic pattern: only one long-lived branch, reviews and CI required prior to merge, and all feature branches kept short-lived.

  • Rust is extremely geared toward maintainability, at the cost of other values such as learnability and iteration speed. Whether it succeeds is of course somewhat a matter of opinion (at least until we figure out how to do good quantitative studies on software maintainability), and it is of course possible to write brittle Rust code. But it does make a lot of common errors (including ones Go facilitates) hard or impossible to write.

    It also strongly pushes you toward specific types of abstractions and architectural decisions, which is pretty unique among mainstream languages, and is of course a large part of what critics dislike about it (since it's extremely limiting compared to the freedom most languages give you). But the compiler's ability to influence high-level design and code organization is a large part of what makes Rust uniquely maintainability-focused, at least in theory.
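    To make the "errors Go facilitates" claim concrete, here is a small sketch (my own illustration, not from the comment; `port_for` and the registry map are hypothetical names): in Go, indexing a map with a missing key silently yields the zero value, whereas Rust's `HashMap::get` returns an `Option`, and the compiler won't let you use the value until you've handled the missing case explicitly.

```rust
use std::collections::HashMap;

// Look up a service's port. `get` returns Option<&u16>, so the "missing"
// case must be handled here, visibly, before we can produce a u16 at all.
fn port_for(service: &str, registry: &HashMap<String, u16>) -> u16 {
    match registry.get(service) {
        Some(port) => *port,
        None => 0, // explicit fallback, chosen in the code rather than implied
    }
}

fn main() {
    let mut registry: HashMap<String, u16> = HashMap::new();
    registry.insert("http".to_string(), 80);

    println!("{}", port_for("http", &registry));   // 80
    println!("{}", port_for("gopher", &registry)); // 0, by explicit choice
}
```

    In the Go equivalent, `registry["gopher"]` would compile and quietly return `0` with no decision recorded anywhere; here the fallback is a visible, reviewable line of code.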

  • I guess I read his point more as being that it's effectively impossible for a license or CLA to distinguish "good" freeloaders from "bad" freeloaders, so it was inevitable that businesses would start doing license "rug-pulls" like the examples he gives.

  • I agree that a symbolic representation of the splatters would probably be more interesting. The whole point is that random character sequences are often valid Perl, though, so changing the generation method wouldn't change that aspect.

  • Perl programs are, by definition, text. So "paint splatters are valid Perl" implies that there's a mapping from paint splatters to text.

    Do you have a suggested mapping of paint splatters to text that would be more "accurate" than OCR? And do you really think it would result in fewer valid Perl programs?

  • I don't really understand the connection between the blog post and your comment. Could you expand on the connection between his stance against CLAs and your paraphrase about mega-corps and how we should "suck it up because of principles"?

  • ...

    Jump
  • But in the browser. My complaint with JavaScript is that it was effectively the only choice for in-browser logic up until WebAssembly was stabilized, and even now it requires JS glue code.

  • ...

    Jump
  • Not quite what you're asking for, but I wish Erlang had gotten popular before Java took off. I think that could have massively changed the course of "mainstream" languages. Maybe the JVM itself would have been BEAM-inspired. Heck, in an ideal world, the Netscape corporation and Brendan Eich would have created something based on Erlang/BEAM to ship with Navigator, instead of inventing JavaScript.

  • I agree so, so much; and I've been saying similar things for years.

    But I recognize that it's probably a hard problem. For one thing, auth is almost never going to work the same way in CI as it does locally.

    ...still, though, I feel like there could be some much nicer tooling here.

  • No, you leapt directly from what I said, which was relevant on its own, to an absurdly stronger claim.

    I didn't say that humans and AI are the same. I think the original comment, that modern AI is "smart enough to be tricked", is essentially true: not in the sense that humans are conscious of being "tricked", but in a way similar to how humans can be misled or can misunderstand a rule they're supposed to be following. That's certainly a property of the complexity of the system. The comment below it, to which I originally responded, seemed to imply that being "too complicated to have precise guidelines" somehow demonstrates that AI are not "smart". But of course "smart" entities, such as humans, share that exact property of being "too complicated to have precise guidelines", which was my point!

  • "We can create rules and a human can understand if they are breaking them or not..."

    So I take it you are not a lawyer, nor any sort of compliance specialist?

    "They aren't thinking about it and deciding it's the right thing to do."

    That's almost certainly true; and I'm not trying to insinuate that AI is anywhere near true human-level intelligence yet. But it's certainly got some surprisingly similar behaviors.