Posts: 0 · Comments: 33 · Joined: 5 mo. ago

  • But protesting peacefully on the streets is only successful if the government you want to send a message to is listening, i.e. if it either cares about its citizens or is in any way rational.

    I hope I'm wrong here, but I can't see anything changing for the better in your country. It currently looks like 70 million people trying to talk through a knife fight.

  • The only field where I see LLMs enhancing the productivity of competent developers is front-end work, where you really have to write a lot of bloat.

    In every other scenario, for software developers who know what they're doing, the simple or repetitive things are mostly solved by writing a fucking function, class or library. In today's world developers are mostly busy designing and implementing rather complex systems or managing legacy code, where LLMs are completely useless.

    We're developing measurement systems and data analysis tools for the automotive industry and we tried several LLMs extensively in our daily business. Not a single developer was happy with the results.

  • Why do I still see articles with the headline "X claims Israel is committing genocide"? "Claims"? Really? And how is that "news"? If we can't stop pretending it isn't entirely clear, nothing will change.

  • But it's 2⁵² addresses for each star in the observable universe. Or in other words, if every star in the observable universe has a planet in the habitable zone, each of them still gets 2²⁰ times as many IPs as there are IPv4 addresses.
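    The arithmetic above can be sanity-checked directly (assuming a rough ~2⁷⁶ stars in the observable universe, which is my reading of the estimate behind the 2⁵² figure; the real star count is only known to within an order of magnitude or two):

```python
# IPv6 address space vs. stars in the observable universe
IPV6_ADDRESSES = 2**128   # total IPv6 address space
STARS = 2**76             # rough estimate, ~7.6e22 stars (assumption)
IPV4_ADDRESSES = 2**32    # total IPv4 address space

per_star = IPV6_ADDRESSES // STARS     # addresses available per star
ratio = per_star // IPV4_ADDRESSES     # per-star space vs. the whole IPv4 internet

assert per_star == 2**52   # 2^52 addresses for each star
assert ratio == 2**20      # 2^20 full IPv4 internets per star
```

    So even under a generous star count, every star could hand out a million-plus copies of the entire IPv4 internet.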

  • Yeah, but that would limit the use cases to very few. Most of the time you compress data either to transfer it to a different system or to store it for some time; in both cases you wouldn't want to be locked to the exact same LLM. Which leaves us with almost no use case.

    I mean... cool research... kinda... but pretty useless.

  • Ok, so the article is very vague about what's actually done. But as I understand it, the "understood content" is transmitted and the original data is reconstructed from that.

    If that's the case, I'm highly skeptical about the "losslessness", i.e. that the output is exactly the input.

    But there are more things to consider, like de-/compression speed and compatibility. I would guess it's pretty hard to reconstruct data with a different LLM, or even with a newer version of the same one, so you have to make sure that years from now you can still decompress your data with a compatible LLM.

    And when it comes to speed, I doubt it's anywhere near as fast as using zlib (which is neither the fastest nor the best-compressing option...).

    And all that for a high risk of bricked data.
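    For reference, the zlib baseline mentioned above gives a deterministic, model-free round trip, which is exactly the guarantee an LLM-based scheme would have to match (a minimal sketch using Python's standard library; the sample data is made up):

```python
import zlib

# Hypothetical sample payload, standing in for e.g. measurement data.
data = b"sensor_id=42;rpm=3150;temp=87.4;\n" * 1000

compressed = zlib.compress(data, level=6)     # level 6 is the default trade-off
restored = zlib.decompress(compressed)

assert restored == data          # lossless by construction, no model required
assert len(compressed) < len(data)
```

    The round trip depends only on the DEFLATE format, so data compressed today decompresses identically on any system decades later; that is the compatibility bar a model-dependent compressor struggles to clear.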