Posts: 4 · Comments: 585 · Joined: 4 mo. ago

  • Who pissed in your coffee?

    Sure, you can write a script to interpret the data, but then you need to run that extra script every time you step through the code, or whenever you want to look at the data while it's stored or transferred.

    But I guess you have never worked on an actually big project, so how would you know?

    I guess you aren't entirely wrong here. If nobody other than you ever uses your program and nobody other than you ever looks at the code, readability really doesn't matter and thus you can micro-optimize everything into illegibility. But don't extrapolate from your hobby coding to actual projects.

  • The issue here is that human intelligence and computer intelligence work completely differently, and things that are easy for one are hard for the other.

    Because of that, measures of intelligence don't really work across humans and computers and it's really easy to misjudge which milestones are meaningful and which aren't.

    For example, it's super hard for a human to perform 100 additions within a second, and a human who could do that would be perceived as absolutely superhuman. But for a computer that's ridiculously easy. On the other hand, there are things a child can do that were impossible for computers just a few years ago (e.g. recognizing a bird).

    (Relevant, if slightly outdated, XKCD: https://xkcd.com/1425/)

    For humans, playing high-level chess is really hard, so we arbitrarily chose it as a measure of intelligence: "Only very intelligent people can beat Kasparov". So we figured that a computer being able to do that task must be intelligent too. Turns out that chess greatly benefits from large memory and fast-but-simple calculations, two things computers are really, really good at and humans are not.

    And it turns out that, contrary to what many people believed, chess doesn't actually require any generally intelligent code at all. In fact, a more general approach (like LLMs) actually performs much, much worse at specific tasks like chess, as exemplified by some chess program for the Atari beating one LLM after another.

  • It's human-readable enough for debugging. You might not be able to read whether a person looked left, but you can read which field is null, missing, or wildly out of range. You can also see if a value is duplicated when it shouldn't be.

    Human-readable is primarily about the structure and less about the data being human readable.

  • Technically, JSON does enforce a specific numeric precision, in that numbers are expected to be stored as JS-compatible floating point values with the associated precision.

    Other than that, the best way to go if you want to have a specific precision is to cast to string before serialisation.
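
    A minimal sketch of that string-casting approach, assuming Python's json and decimal modules (the example values are mine, not from the thread):

    ```python
    import json
    from decimal import Decimal

    value = Decimal("0.12345678901234567890123456789")

    # Cast to string before serialisation so no float rounding is applied.
    payload = json.dumps({"amount": str(value)})
    print(payload)  # {"amount": "0.12345678901234567890123456789"}

    # The receiving side parses the string back into a Decimal explicitly.
    restored = Decimal(json.loads(payload)["amount"])
    print(restored == value)  # True
    ```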

  • I see what you are saying. But if you aren't using a cryptographic hash function then collisions don't matter in your use case anyway, otherwise you'd be using a cryptographic hash function.

    For example, you'd use a non-cryptographic hash function for a hashmap. While collisions aren't exactly desirable in that use case, they also aren't bad; in fact, the whole process is designed with them in mind. And it doesn't matter at all that the distribution might not be perfect.

    So when we are talking about a context where collisions matter, there's no question whether you should use a cryptographic hash or not.
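
    As a small sketch of the "designed with collisions in mind" point (hypothetical key type, not from the original comment), a Python dict keeps working even when every key collides:

    ```python
    class CollidingKey:
        """Hypothetical key type whose instances all collide on purpose."""

        def __init__(self, name):
            self.name = name

        def __hash__(self):
            return 42  # every instance hashes to the same bucket

        def __eq__(self, other):
            return isinstance(other, CollidingKey) and self.name == other.name

    table = {CollidingKey("a"): 1, CollidingKey("b"): 2}
    print(table[CollidingKey("b")])  # 2 -- lookups still work, just a bit slower
    ```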

  • This is about cryptographic hashing functions (to be fair, I could have spelled that out in my prior comment, but in general when someone talks about anything security-relevant in conjunction with hashing, they always mean cryptographic hashing functions).

    MD5 is not a cryptographic hashing function for exactly these reasons.

    Also, the example you gave in your original comment wasn't actually about distribution but about symbol space.

    By multiplying by four (and I guess you implicitly meant that the bit length of the hash stays the same, thus dropping two bits of information) you are reducing the number of possible hashes by a factor of four, because now 3/4 of all potential hashes can't happen any more. So sure, if your 64-bit hash is actually only a 62-bit hash that just includes two constant 0 bits, then of course you have to calculate the collision chance for 62 bits and not 64 bits.

    But if all hashes are still possible, and only the distribution isn't perfectly even (as is the case with MD5), then the average chance for collisions doesn't change at all. You have some hashes where collisions are more likely, but they are perfectly balanced with hashes where collisions are less likely.
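
    For reference, a rough sketch of the birthday-bound arithmetic behind the 62-bits-vs-64-bits point (the approximation and numbers are mine, not from the thread):

    ```python
    import math

    def collision_probability(n_items, bits):
        """Approximate birthday-bound chance of at least one collision when
        drawing n_items values uniformly from a 2**bits space."""
        space = 2.0 ** bits
        return 1.0 - math.exp(-n_items * (n_items - 1) / (2.0 * space))

    n = 1_000_000
    print(f"full 64-bit space: {collision_probability(n, 64):.2e}")
    print(f"62-bit space:      {collision_probability(n, 62):.2e}")  # ~4x higher
    ```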

  • Thanks for the summary!

    Yeah, in Python each . is a dictionary lookup. That's the cost of having a dynamic language where the compiler can do pretty much no optimizations (and yes, Python does have a compiler).

    In static languages these lookups can be collapsed to a single pointer address by the compiler.
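
    A small sketch of what that lookup cost looks like in practice (example and names are mine, not from the thread): hoisting the attribute lookup out of a hot loop avoids repeating the dictionary lookup on every iteration.

    ```python
    import math
    import timeit

    def with_lookups(n):
        total = 0.0
        for i in range(n):
            total += math.sqrt(i)  # 'math.sqrt' is resolved on every iteration
        return total

    def with_cached_lookup(n):
        sqrt = math.sqrt  # resolve the attribute once, bind it to a local
        total = 0.0
        for i in range(n):
            total += sqrt(i)
        return total

    print(timeit.timeit(lambda: with_lookups(100_000), number=50))
    print(timeit.timeit(lambda: with_cached_lookup(100_000), number=50))  # usually faster
    ```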

  • The power consumption would be 5\*10^62 Wh.

    The sun outputs 3.9\*10^26 W. If you captured all that energy with 100% efficiency, you would need 1.3\*10^36 hours, or roughly 1\*10^22 times the age of the universe, to collect enough energy.

    That's incidentally roughly the estimated number of stars in the universe.

    So if you had put a Dyson sphere around every star in the universe right after the big bang (ignoring that stars didn't form instantly after the big bang) and ran them until today, you'd have just about enough energy to crack one wallet with current tech.
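
    A quick back-of-the-envelope check of those figures (values as quoted above, not re-derived):

    ```python
    SUN_OUTPUT_W = 3.9e26                      # watts
    ENERGY_NEEDED_WH = 5e62                    # watt-hours
    AGE_OF_UNIVERSE_H = 13.8e9 * 365.25 * 24   # ~1.2e14 hours

    hours_needed = ENERGY_NEEDED_WH / SUN_OUTPUT_W
    print(f"{hours_needed:.1e} hours")                               # ~1.3e36
    print(f"{hours_needed / AGE_OF_UNIVERSE_H:.1e} universe ages")   # ~1e22
    ```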

  • Considering that you'd need a paradigm-breaking revolutionary and incredibly expensive device to do so, I'd find it hard to believe that you could stay under the radar with it.

    What I'd expect to happen is that some big corporation and/or university manages to build a quantum computer capable of breaking 256-bit encryption, and almost instantly after the announcement bitcoin will either tank into nothingness or change its algorithm to something quantum-safe, well before some shady actor gets their hands on a quantum computer to crack wallets.

  • Bitcoin private keys are 256 bits long. That means there are 115792089237316195423570985008687907853269984665640564039457584007913129639936 (1.15\*10^77) possible private keys.

    Say you are using a bitcoin miner that's roughly 4x as fast as the currently fastest one, at 1 PH/s (1\*10^15 hashes per second); then you'll need roughly 1\*10^62 seconds, or 3\*10^54 years.

    Let's say you've got a million of these miners; then you are down to 3\*10^48 years, or 2\*10^38 times as long as the universe has existed.

    I was going to calculate how much electricity this would consume and how expensive it would be, but the answer to that is plainly "too much to imagine".
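
    The same numbers as a quick sketch (rates as quoted above; this assumes exhaustively searching the full keyspace):

    ```python
    KEYSPACE = 2 ** 256                 # ~1.15e77 possible private keys
    RATE_PER_MINER = 1e15               # 1 PH/s, i.e. 10^15 keys per second
    SECONDS_PER_YEAR = 3.156e7
    AGE_OF_UNIVERSE_Y = 1.38e10

    one_miner_years = KEYSPACE / RATE_PER_MINER / SECONDS_PER_YEAR
    million_miners_years = one_miner_years / 1e6

    print(f"{one_miner_years:.1e} years with one miner")                    # ~3.7e54
    print(f"{million_miners_years:.1e} years with a million miners")        # ~3.7e48
    print(f"{million_miners_years / AGE_OF_UNIVERSE_Y:.1e} universe ages")  # ~2.7e38
    ```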

  • At that point, though, the whole concept of bitcoin will be moot. If quantum computers can crack lost wallets, they can also crack active wallets, and at that point there's no reason to buy bitcoin at all, which will tank the value of bitcoin, making it mostly not worthwhile to crack wallets.

    So if we get to that point, there will be one proof-of-concept wallet crack, and instantly after that bitcoin will cease to exist in any relevant fashion.

  • The US is what happens if you push enshittification on a national scale for too long.

    You start with a genuinely good setup. Then you reduce investment, cut corners, and slash spending as much as you can to squeeze out every possible bit of profit.

    That works for a while, because you can coast along on prior investments, but at a certain point the game is over and all you are left with is a tide of shit that nobody wants anymore.

    To even consider that projects carried mostly by volunteers and underpaid staff making a meager living off donations can rival the products of the world's largest software corporations is laughable, and yet here we are.

    And that's not only because FOSS alternatives have gotten better over the years, but mostly because big closed-source projects have tanked in quality.