  • There's a reason Hello Games wrote their own engine for NMS. We all know it was pretty bad gameplay-wise at launch, but under the hood NMS was (and still is) something of a technical marvel. Having no loading screens (except a disguised one when jumping between systems) is quite impressive.

  • I'm pretty sure I can't even connect to my university's network without installing a custom certificate.

    What brainlet at Google thought this was a good idea?

  • > unless it's freely available data (like a Linux distro).

    Tell that to my university. Got a nasty email because I... downloaded Fedora Silverblue 38 😭

  • Yeah, it's kinda drab in a lot of places. Which is a shame, because I really love the cassette futurism/NASApunk aesthetic.

  • I'm also a big fan of raw SQL. Most ORMs are fine for CRUD stuff, but the moment you want to start using the "relational" part of the database (which... that's the whole point) they start to irritate me. They also aren't free - if you're lucky, you pay at comptime (Rust's Diesel) but I think a lot of ORMs do everything at runtime via reflection and the like.

    For CRUD stuff, I usually just define some interface(s) that take a query and manually bind/extract struct fields. This definitely wouldn't scale, but it's fine when you only have a handful of tables, and it keeps the abstraction/performance tradeoff low.
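
    Something like this, as a rough sketch - I'm assuming rusqlite here, and the users table/struct are invented for illustration:

    ```rust
    // Manual bind/extract: the mapping between rows and structs is written
    // out by hand, once per table.
    use rusqlite::{Connection, Result};

    struct User {
        id: i64,
        name: String,
    }

    fn users_older_than(conn: &Connection, min_age: u32) -> Result<Vec<User>> {
        let mut stmt = conn.prepare("SELECT id, name FROM users WHERE age > ?1")?;
        let rows = stmt.query_map([min_age], |row| {
            Ok(User { id: row.get(0)?, name: row.get(1)? })
        })?;
        rows.collect()
    }

    fn main() -> Result<()> {
        let conn = Connection::open_in_memory()?;
        conn.execute_batch(
            "CREATE TABLE users (id INTEGER, name TEXT, age INTEGER);
             INSERT INTO users VALUES (1, 'ada', 36), (2, 'alan', 24);",
        )?;
        for user in users_older_than(&conn, 30)? {
            println!("{}: {}", user.id, user.name);
        }
        Ok(())
    }
    ```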

  • Nice link - it's good to see some hard data when most of the discussion around this is based on anecdotes and technical trivia.

  • Personally, I prefer static linking. There's just something appealing about an all-in-one binary.

    It's also important to note that applications are rarely 100% one or the other. Full static linking is really only possible in the Linux (and BSD?) worlds thanks to syscall stability - on macOS and Windows, dynamically linking the system libraries (libSystem and ntdll/kernel32, respectively) is the only good way to talk to the kernel.

    (There have been some attempts made to avoid this. Most famously, Go attempted to bypass linking libc on macOS in favor of raw syscalls... only to discover that when the kernel devs say "unstable," they mean it.)

  • > NEVER statically link to libc, and probably not to libstdc++ either.

    This is really only true for glibc (because its design doesn't play nice with static linking) and whatever macOS/Windows have (no stable kernel interface, which Go famously found out the hard way.)

    Granted, most of the time those are what you're using, but there are plenty of cases where statically linking to musl libc makes your life a lot easier (Alpine containers, distributing cross-distro binaries.)
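
    For the curious, the musl route in Rust is about two commands. A sketch from memory - the target triple is the standard x86_64 one, adjust for your setup:

    ```rust
    // Build a fully static Linux binary against musl:
    //
    //   rustup target add x86_64-unknown-linux-musl
    //   cargo build --release --target x86_64-unknown-linux-musl
    //
    // The result has no dynamic dependencies (ldd reports "not a dynamic
    // executable"), so it runs as-is on Alpine or any other distro.
    fn main() {
        println!("hello from a static binary");
    }
    ```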

  • > and the fact it isn't verified on Steam Deck?

    Verification doesn't mean much - ProtonDB is where the deets are at.

    FWIW, it was plug-and-play for me on Linux.

  • I installed an optimized textures mod and instantly improved my performance by like... 20 frames, maybe more.

    I have an RX 6600 XT that can run Cyberpunk on high no problem. C'mon Bethesda, the game is really fun, but this is embarrassingly bad optimization.

  • It may nominally have more secure defaults than Firefox (although I doubt it's better in the areas that matter.)

    The problem is that the creators have demonstrated (by secretly injecting referral/affiliate links into URLs, and also by being crypto shills) that they are entirely untrustworthy. In a piece of software as security and privacy critical as a browser, such behavior is unacceptable.

  • > browser that's based on Chromium and got caught editing referral links into URLs got a better privacy rating than Firefox

    You've been lied to lol

  • I never said that static typing stops all logic errors or fully proves program correctness, but go off I guess...

  • Ahh, the consequences of using the PDP-11 as your abstract machine.

    I find C fun in small doses, but if I ever had to scale up to an actual product, I'd quickly want to off myself from copy-pasting my vector implementation for every different type it needs to contain.

    (Of course, I could commit macro abuse to emulate generics, but... that's just asking for trouble.)

  • Plus, most statically typed languages either do type inference by default or let you opt in somehow.

    Even Java, which is probably the reason everyone hated static typing for the first decade of the century or so, now has var.
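
    A minimal illustration of the same idea in Rust (Java's var, C++'s auto, etc. work similarly):

    ```rust
    fn main() {
        let n = 42; // inferred as i32
        let names = vec!["ada", "grace"]; // inferred as Vec<&str>

        // Closure parameter/return types are inferred too; only the
        // collection needs a partial annotation here.
        let lengths: Vec<_> = names.iter().map(|s| s.len()).collect();
        println!("{n} {lengths:?}");
    }
    ```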

  • > Typing can’t prove anything, either.

    Incorrect. Static typing can prove many things, depending on the quality of the type system.

    At the very least, it proves that your data is organized, stored, passed around and used in a logically valid and consistent manner. Make that proof impossible, and the compiler complains. (And with good reason - it doesn't matter how good your program logic is if you're feeding it bad data. Garbage in, garbage - or a runtime error - out.)

    In a dynamically typed language, your program logic still implicitly depends on that proof holding - it's just that you, the fallible human, have to make sure everything checks out. Python added type hints for precisely this reason.

    Additionally, with more advanced static type systems, it becomes possible to issue guarantees beyond simple type safety. Patterns like typestate (found in TS, Haskell and Rust, off the top of my head) can be used to make illegal states unrepresentable at compile time. Try to write to a closed file or make an invalid state machine transition? The compiler will see it and say no.
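
    As a minimal sketch of the pattern in Rust (the file types are invented; real implementations wrap an actual handle):

    ```rust
    // "Open" vs "closed" is encoded in the type itself, so using a closed
    // file is a compile-time error instead of a runtime one.
    struct OpenFile;
    struct ClosedFile;

    impl OpenFile {
        fn write(&mut self, data: &str) {
            println!("writing: {data}");
        }

        // close() consumes the OpenFile, so the old binding is dead after.
        fn close(self) -> ClosedFile {
            ClosedFile
        }
    }

    fn main() {
        let mut file = OpenFile;
        file.write("hello");
        let _closed = file.close();
        // file.write("world"); // error[E0382]: use of moved value: `file`
    }
    ```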

    > It just creates bugs and crashes.

    In what universe are runtime errors turned compile-time errors a source of bugs and crashes? A statically-typed program won't blow up in production because some poor intern wasn't able to keep the implicit type bounds of every single function parameter in his head.

  • No matter how many unit tests or comments you write, it's impossible to "prove" type correctness in a dynamically typed language - and it will eventually blow up in your face when you have to refactor. There's a reason for the adage "testing can't prove the absence of bugs."

    People like static typing because it offers strong guarantees and eliminates entire classes of bullshit bugs, not because they're "weak minded."

  • Rust would be an excellent fit for the type of work you describe. Assuming I understand the specifics correctly, the regex and serde crates would make the parsing + converting pretty effortless and fast (rough sketch at the end of this comment.)

    The language itself also works really well for "data pipeline" type programs, thanks to its FP features/iterators and type system.

    > With familiarity, can Rust's intuitiveness match Python's "from idea to deployment" speed?

    Yes and no. For experienced developers, the total time from "start" to "finished product" is probably going to be about the same in both. It's how that time is allocated that really distinguishes them.

    Rust is going to make you put in more work up front compared to Python: negotiating with the compiler, getting your types in order, that sort of thing. The benefit is that what comes out the other end tends to be baked all the way through - "if it compiles, it works."

    Python, being a dynamic scripting language, is going to make it easier to get a vertical slice or minimum viable product up and running. But when/if you scale up, you have to pay that time back fixing problems that Rust's static analysis could have caught at build time.

    TL;DR - Python is good for "throwing something together" or writing load-bearing scripts that do one simple thing really well. Rust is a slower start that shines as complexity and scale increase.
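
    And the promised sketch. The input format and Record type are invented, but this is roughly what the regex + serde combo looks like (crates: regex, serde with the derive feature, serde_json):

    ```rust
    use regex::Regex;
    use serde::Serialize;

    #[derive(Serialize)]
    struct Record {
        name: String,
        value: f64,
    }

    fn main() {
        let input = "temp=21.5\nhumidity=0.43";

        // (?m) makes ^ and $ match per line.
        let re = Regex::new(r"(?m)^(\w+)=([0-9.]+)$").unwrap();

        // Iterator pipeline: capture, convert, collect.
        let records: Vec<Record> = re
            .captures_iter(input)
            .map(|caps| Record {
                name: caps[1].to_string(),
                value: caps[2].parse().expect("regex guarantees a number"),
            })
            .collect();

        // serde handles the "converting" half in one line.
        println!("{}", serde_json::to_string_pretty(&records).unwrap());
    }
    ```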

  • I don't really write Python, but I occasionally find myself having to use tools written in it.

    So Docker won't work (unless I do some scuffed mounting to let it access my working files, which is suboptimal regardless) and I can't be bothered to juggle venvs just to rip my Spotify playlists.