Posts: 1 · Comments: 464 · Joined: 2 yr. ago

  • The evidence is that I have tried writing Python/JavaScript with/without type hints and the difference was so stark that there's really no doubt in my mind.

    You can say "well I don't believe you", in which case I'd encourage you to try it yourself (using a proper IDE and Pyright, not Mypy). But you can equally say "well I don't believe you" to scientific studies, so it's not fundamentally different. There are plenty of scientific studies I don't believe and didn't believe (e.g. power poses).
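For a concrete taste of what the hints buy you with Pyright, here's a minimal sketch (the function and names are hypothetical, purely for illustration):

```python
def parse_port(value: str) -> int:
    """Parse a port number from a config string."""
    return int(value)

# With hints, Pyright flags a wrong call site before the code ever runs:
#
#   parse_port(8080)  # error: "int" is not assignable to "str"
#
# Without hints, the same mistake surfaces only at runtime, if ever.
print(parse_port("8080"))  # prints 8080
```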

  • Maybe “open question” was too strong of a term.

    Yeah I agree. Scientific studies are usually a higher standard of proof. (Though they can also be wrong - remember "power poses"?) So it's more like we're 80% sure instead of 90%.

  • then why isn’t it better to write instead everything in Haskell, which has a stronger type system than Rust?

    Because that's very far from the only difference between Haskell and Rust. It's other things that make Haskell a worse choice than Rust most of the time.

    You are right that it's a spectrum, from dynamically typed, to simple static types (something like Java), to fancy static types (Haskell), then dependent types (Idris), and finally full-on formal verification (Lean). And I agree that at some point it can become not worth the effort. But that point is pretty clearly after any mainstream statically typed language (Rust, Go, Typescript, Dart, Swift, Python, etc.).

    In those languages, any time you spend adding static types is easily paid back in less time spent writing tests, debugging, writing docs, searching code, and recovering from botched refactorings. Static types in these languages are a time saver overall.

  • No I disagree. There are some things that it's really infeasible to use the scientific method for. You simply can't do an experiment for everything.

    A good example is UBI. You can't do a proper experiment for it because that would involve finding two similar countries and making one use UBI for at least 100 years. Totally impossible.

    But that doesn't mean you just give up and say "well then we can't know anything at all about it".

    Or closer to programming: are comments a good idea, or should programming languages not support comments? Pretty obvious answer right? Where's the scientific study?

    Was default switch-case fallthrough a mistake? Obviously yes. Did anyone ever do a study on it? No.

    You don't always need a scientific study to know something with reasonable certainty, and often you can't have one anyway.

    That said I did see one really good study that shows Typescript catches about 15% of JavaScript bugs. So we don't have nothing.

  • I disagree. Pyright has pretty reasonable inference - typically it's only containers where you need to add explicit annotations and it feels like you shouldn't have to - and it is extremely reliable. Basically as good as Typescript.

    Mypy is trash though. Maybe you've only used that?
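A small sketch of the container caveat (variable names are just for illustration):

```python
# Pyright infers these without any annotations:
xs = [1, 2, 3]            # inferred as list[int]
total = sum(xs) * 2.5     # inferred as float

# An empty container has no elements to infer from, so it's the
# one common place an explicit annotation is needed:
scores: dict[str, int] = {}
scores["alice"] = 10

print(total, scores)  # prints 15.0 {'alice': 10}
```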

  • You'd be surprised. Every time I try to introduce static type hints to Python code at work there are some weirdos that think it's a bad idea.

    I think a lot of the time it's people using Vim or Emacs or whatever so they don't see half the benefits.

  • I think we still must call this an “open question”.

    Not sure I agree. I do think you're right - it's hard to prove these things because it's fundamentally hard to prove things involving people, and also because most of the advantages of static types are irrelevant for tiny programs which is what many studies use.

    But I don't think that means you can't use your judgement and come to a conclusion. Especially with languages like Python and Typescript that allow an "any" cop-out, it's hard to see how anyone could really conclude that they aren't better.
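To illustrate that cop-out, here's a hypothetical Python sketch (Typescript's any behaves the same way): the escape hatch means adopting hints never forces you to fight the checker.

```python
from typing import Any

def double(x: int) -> int:
    return x * 2

untyped: Any = "oops"     # Any is assignable to anything...
result = double(untyped)  # ...so the checker accepts this call,
print(result)             # but at runtime "oops" * 2 == "oopsoops"
```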

    Here's another example I came across recently: should bitwise & have lower precedence than == like it does in C? Experience has told us that the answer is definitely no, and virtually every modern language puts them the other way around. Is it an open question? No. Did anyone prove this? Also no.
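A quick sketch of that precedence difference:

```python
# In C, & binds looser than ==, so `x & 1 == 0` parses as
# `x & (1 == 0)`, i.e. `x & 0`: almost never what was meant.
# Python, like virtually every modern language, flips this.
x = 5
modern = x & 1 == 0     # Python parses this as (x & 1) == 0
c_style = x & (1 == 0)  # how C would parse the same expression
print(modern, c_style)  # prints "False 0"
```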

  • Just being open source doesn't guarantee a project's survival. If Google were to abandon it, the most likely outcome would be a community fork with a 100th of the development manpower it gets now, and most developers would abandon the platform, leading to its effective death.

    But I also think it's unlikely Google will abandon it. It's actually quite good and quite popular now.

  • Definitely a high usefulness-to-complexity ratio. But IMO the core advantage of Make is that most people already know it and have it installed (except on Windows).

    By the time you need something complex enough that Make can't handle it (e.g. if you get into recursive Make), you're better off using something like Bazel or Buck2, which also solves a bunch of other build-system problems (missing dependencies, early cut-off, remote builds, etc.).

    However this does sound very useful for wrangling lots of other broken build systems - I can totally see why Buildroot are looking at it.

    I recently tried to create a basic Linux system from scratch (OpenSBI + Linux + Busybox + ...) which is basically what Buildroot does, and it's a right pain because there are dependencies between different build systems, some of them don't actually rebuild properly when dependencies change (cough OpenSBI)... This feels like it could cajole them into something that actually works.

  • Ah yeah I forgot about namespaces. I don't think they're a popular feature.

    The other two only generate code for backwards compatibility. When targeting the latest JavaScript versions they don't generate anything.

    OK, decorators are technically still only a proposal, so they're slightly jumping the gun there, but the point remains.

  • No they don't. Enums are actually unique in being the only Typescript feature that requires code gen, and they consider that to have been a mistake.

    In any case that's not the cause of the difference here.

  • Private or obscure ones I guess.

    Real-world (macro) benchmarks are at least harder to game, e.g. how long does it take to launch chrome and open Gmail? That's actually a useful task so if you speed it up, great!

    Also these benchmarks are particularly easy to game because it's the actual benchmark itself that gets gamed (i.e. the code written for each language), not the thing you are trying to measure with it (the compilers). Usually the benchmark is fixed and it's the targets that contort themselves to it, which is at least a little harder.

    For example some of the benchmarks for language X literally just call into C libraries to do the work.

  • Their measurements include invocation of the interpreter. And parsing TS involves bigger overhead than parsing JS.

    But TS is compiled to JS so it's the same interpreter in both cases. If they're including the time for tsc in their benchmark then that's an even bigger WTF.

  • Ah this ancient nonsense. Typescript and JavaScript get different results!

    It's all based on https://en.wikipedia.org/wiki/The_Computer_Language_Benchmarks_Game - microbenchmarks which are heavily gamed. Though in fairness the overall results are fairly reasonable.

    Still, I don't think this "energy efficiency" result is worth talking about. Faster languages are more energy efficient. Who knew?

    Edit: this also has some hilarious visualisation WTFs - using dendrograms for performance figures (figures 4-6)! Why on earth do figures 7-12 include line graphs?

  • Your comment doesn't account for the fact that LLMs can generalise. Often not very well, but they can produce outputs for inputs not seen in their training sets. Otherwise what would be the point?

    You would not ask a piece of cardboard to solve a math problem, would you?

    Uhhh you know LLMs can solve quite complex maths problems? Including novel ones.

  • Ah yes the old pointless vague anecdote.

    If your argument is "LLMs can't do useful work", and then I say "no, I've used them to do useful work many times" how is that a pointless vague anecdote? It's a direct proof that you're wrong.

    Promoting pseudo-science.

    Sorry what? This is bizarre.