
  • but it isn’t so clear cut

    It's demonstrably several orders of magnitude less complex. That's mathematically clear cut.

    Where is the cutoff on complexity required?

    Philosophical question without an answer - We do know that it's nowhere near the complexity of the brain.

    both our brains and most complex AI are pretty much black boxes.

    There are many things we cannot directly interrogate which we can still describe.

It’s impossible to say this system we know vanishingly little about is/isn’t fundamentally the same as this system we know vanishingly little about, just on a different scale

It's entirely possible to say that because we know the fundamental structures of each, even if we don't map the entirety of either's complexity. We know they're fundamentally different - Their basic behaviors are fundamentally different. That's what fundamentals are.

    The first AGI will likely still have most people saying the same things about it, “it isn’t complex enough to approach a human brain.”

    Speculation but entirely possible. We're nowhere near that though. There's nothing even approaching intelligence in LLMs. We've never seen emergent behavior or evidence of an id or ego. There's no ongoing thought processes, no rationality - because that's not what an LLM is. An LLM is a static model of raw text inputs and the statistical association thereof. Any "knowledge" encoded in an LLM exists entirely in the encoding - It cannot and will not ever generate anything that wasn't programmed into it.

    It's possible that an LLM might represent a single, tiny, module of AGI in the future. But that module will be no more the AGI itself than you are your cerebellum.

    But it doesn’t need to equal a brain to still be intelligent.

    First thing I think we agree on.
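The "static model of statistical associations" point above can be shown with a deliberately tiny toy, nothing like a real transformer: a bigram counter whose every possible output already exists in its training text. All names and the corpus here are made up for illustration.

```python
import random
from collections import defaultdict

# Toy "language model": nothing but co-occurrence counts from its training text.
corpus = "the cat sat on the mat the cat ate".split()

bigrams = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token(prev, rng):
    """Sample a continuation purely from stored counts; the model never updates itself."""
    candidates = bigrams[prev]
    tokens = list(candidates)
    weights = [candidates[t] for t in tokens]
    return rng.choices(tokens, weights=weights)[0]

# Every continuation of "the" was already present, verbatim, in the training data.
print(sorted(bigrams["the"]))  # ['cat', 'mat']
print(next_token("cat", random.Random(0)))
```

The gap between this and an LLM is scale and architecture, not kind of operation: both are frozen after training and emit tokens according to encoded associations.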

  • This is exactly what I'm talking about when I argue with people who insist that an LLM is super complex and totally is a thinking machine just like us.

    It's nowhere near the complexity of the human brain. We are several orders of magnitude more complex than the largest LLMs, and our complexity changes with each pulse of thought.

    The brain is amazing. This is such a cool image.

Yes, in that there is a new battlepass every month with new guns and armor to unlock.

    No, in that you can earn the battlepass with in-game currency and never have to give them more than the $40 the game cost to have a ton of fun.

Beyond that - They've taken live service games to a better place. There's an ongoing galactic campaign, and the individual missions you run contribute to galactic objectives which have real consequences in game. They have a dedicated "Game Master," like a D&D Dungeon Master, who decides when to release new units, what happens when we succeed, and how to punish failure.

    And most importantly: It's a lot of fun.

Lamont v. Postmaster General (1965)

The Supreme Court ruled that publishing propaganda in America is free speech. You're not allowed to interfere with an American's access to propaganda.

    Justice Brennan made explicit what had been implicit in the majority opinion, declaring that “the right to receive publications is . . . a fundamental right,” the protection of which is “necessary to make the express guarantees [of the First Amendment] fully meaningful.”

  • They won't be back - they're not leaving.

    But that phrase also seems like pretty normal rationalizing in an apology.

If I had to bet, it was mostly Steam issuing refunds and pulling the game in more than 100 countries that changed their minds.