Posts: 0 · Comments: 241 · Joined: 2 yr. ago

  • That's why I included the clause "if Firefox starts to gain too much traction". I agree that Firefox is currently no threat to Google, but if it starts to become one, they will strike hard against it.

    And unfortunately, antitrust investigations do little against the Titans of Big Tech, just look at what it (didn't) do to Microsoft.

  • Unfortunately, it's not like it would realistically change the monopoly Google has over the internet. The largest financial backer of Mozilla is Alphabet, and if Firefox starts to gain too much traction, they will simply axe Mozilla. Unless Mozilla manages to find another backer fast, Alphabet will have THE monopoly over the Internet.

    Don't get me wrong, I'm doing my part and using Firefox (when it doesn't constantly crash), but Alphabet's holding way too many strings currently for any change to happen.

  • I've seen it, and it looks quite nice. I might give it a try if I ever get into game making. But some projects that want to harness the full power of systems like Lumen, Niagara, Megascans, etc., don't have much of an alternative.

  • No need to be sorry, I am well aware I can be wrong, and I'd rather learn something new than be bashed for being wrong.

    Maybe I phrased it differently from how I thought about it. I didn't mean to claim that Shannon-Fano or Huffman are THE most efficient ways of doing it, but rather that, compared to the massive overhead of running an LLM to compress a file, the current methods are way more resource-efficient, even one as obsolete as Shannon-Fano codes.

    I should probably have mentioned an algorithm like LZMA, or gzip, like you did.

  • Correct me if I'm wrong, but don't algorithms like Huffman or even Shannon-Fano coding with blocks already pack the files as efficiently as possible? It's impossible to compress a file beyond its entropy, and those algorithms get pretty damn close to it.
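    To make the distinction from the other replies concrete, here's a small sketch (the input string and parameters are just illustrative): a symbol-by-symbol coder like Huffman is bounded below by the zeroth-order Shannon entropy, but a dictionary coder like zlib (DEFLATE, the algorithm behind gzip) can beat that bound on data with repeated structure, because the per-symbol entropy model ignores correlations between symbols.

    ```python
    import math
    import zlib
    from collections import Counter

    def shannon_entropy_bits(data: bytes) -> float:
        """Zeroth-order Shannon entropy of `data`, in bits per byte."""
        counts = Counter(data)
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # Highly repetitive input: low "real" information, but a flat symbol model
    # can't see the repetition.
    data = b"abracadabra" * 100

    h = shannon_entropy_bits(data)            # ~2.04 bits/symbol for "abracadabra"
    symbol_bound = h * len(data) / 8          # lower bound in bytes for any symbol-wise code
    compressed = len(zlib.compress(data, 9))  # DEFLATE exploits the repetition

    print(f"symbol-wise entropy bound: ~{symbol_bound:.0f} bytes")
    print(f"zlib output:               {compressed} bytes")
    ```

    Huffman gets close to `symbol_bound`, so in that narrow sense it is near-optimal; zlib lands far below it here because it models repeated substrings, which is why gzip/LZMA were the fairer comparison point in the thread.
    
    
    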

  • Say what you want about it, but it will not go down without using everything in the vicinity as a weapon if ammo runs out. While others go down quietly, it will kamikaze itself to get the job done.

  • Well, we know that the mere act of observing an event can change it (see the double-slit experiment), so consciousness has to have some kind of link to reality itself, no?

    We currently do not know what consciousness even is exactly, and we only know about human consciousness, but there could be other degrees of consciousness within other particles in the universe.

    And even if current-day experiments disprove something, that doesn't mean future ones will, just like how things stood before Einstein's theory of relativity showed that gravity bends spacetime and that time is relative to the observer's frame of reference.

    And I'm sure people who study neuroscience ask this same question from time to time. It's a scientist's duty to find the factual truth about things, even if the answers disprove everything they know so far. We can't rule out something as impossible just because we haven't observed it yet, as that would directly contradict the scientific method, and therefore cease to be science.