
  • Although the movies are better than the books, the actual story is spectacularly awful. Have you ever had someone try to explain it to you? It’s unhinged. And the book is so poorly written from a literary perspective. Dune is Twilight for Gen-X’ers.

    Frank Herbert is such a narcissist he actually accused Star Wars of ripping him off.

  • Since you missed it, let me explain the joke. Ahem: Kylie Jenner is a despicable billionaire who exploits poor girls for money. Touching anything that she has touched, including a hot guy, is disgusting. That’s the joke.

  • Why do you expect an unthinking, non-deliberative zombie process to know what you mean by “empower humanity”? There are facts about what is GOOD and what is BAD that can only be grasped through subjective experience.

    When you tell it to reduce harm, how do you know it won’t undertake a course of eugenics? How do you know it won’t see fit that people like you, by virtue of your stupidity, are culled or sterilized?

  • “if I'm wrong list a task that a conscious being can do that an unconscious one is unable to accomplish.”

    These have been listed repeatedly: love, think, understand, contemplate, discover, aspire, lead, philosophize, etc.

    There are, in fact, very few interesting or important things that a non-thinking entity can do. It can make toast. It can do calculations. It can design highways. It can cure cancer. It can probably fold clothes. None of this shit is particularly exciting. Just more machines doing what they’re told. We want a machine that can tell us what to do, instead. That’s AGI. We don’t know how to build such a machine, at least given our current understanding of mathematical logic, theoretical computer science, and human cognition.

  • “Feed it the entire internet and let it figure out what humans value”

    There are theorems in mathematical logic that tell us this is literally impossible. Also common sense.

    And LLMs are notoriously stupid. Why would you offer them as an example?

    I keep coming back to this: what we were discussing in this thread is the creation of an actual mind, not a zombie illusion. You’re welcome to make your half-assed, malfunctioning zombie LLM machine to do menial or tedious uncreative statistical tasks. I’m not against it. That’s just not what interests me.

    Sooner or later humans will create real artificial minds. Right now, though, we don’t know how to do that. Oh well.

    https://introtcs.org/public/index.html

  • “we're talking about something where nobody can tell the difference, not where it's difficult.”

    You’re missing the point. The existence of black holes was predicted long before anyone had any idea how to identify them. For many years, it was impossible. Does that mean black holes don’t matter? That we shouldn’t have contemplated their existence?

    Seriously though, I’m out.

  • Economics is descriptive, not prescriptive. The whole concept of “a job” is made up and arbitrary.

    You say an AGI would need to do everything a human can. Great, here are some things that humans do: love, think, contemplate, reflect, regret, aspire, etc. These require consciousness.

    Also, as you conveniently ignored, philosophy, politics, and science are among the most important non-family-oriented “jobs” we humans do. They require consciousness.

    Plus, if a machine only does what it’s told, then someone has to be telling it what to do, and that is itself a job a machine cannot do. Most of our jobs are already about telling machines what to do. If an AGI is not self-directed, it can’t tell other machines what to do unless it is itself told what to do. But then someone is telling it what to do, which is “a job.”

  • Your definition of AGI as doing “jobs” is arbitrary, since the concept of “a job” is made up; literally anything can count as economic labor.

    For instance, people frequently discuss AGI replacing governments. That would require the capacity for leadership. It would require independence of thought and creative deliberation. We simply cannot list (let alone program) all human goals and values. It is logically impossible to axiomatize our value systems. The values would need to be intuited. This is a very famous result in mathematics called Gödel's first incompleteness theorem.

    To quote Gödel himself: “We cannot mechanize all of our intuitions.”
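
    For reference, here is a standard modern formulation of the first incompleteness theorem (my paraphrase, not Gödel’s original wording):

    ```latex
    % Gödel's first incompleteness theorem, standard modern form:
    % if a theory T is consistent, effectively axiomatizable, and
    % interprets elementary arithmetic, then there is a sentence G_T
    % that T can neither prove nor refute.
    \mathrm{Con}(T) \implies \exists\, G_T \;\text{such that}\;
    T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T
    ```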

    Alan Turing drew the same conclusion a few years later with the halting problem (a sketch of the argument is below).

    In other words, if we want to build a machine that shares our value system, we will need to do so in such a way that it can figure out our values for itself. How? Well, presumably by being conscious. I would be happy if we could do so without its being conscious, but that’s my point: nobody knows how. Nobody even knows where to begin to guess how. That’s why AGI is so problematic.
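
    To make the Turing half concrete, here is a minimal sketch of the diagonal argument in Python. The `halts` oracle is hypothetical by construction; assuming it exists leads straight to a contradiction, which is the whole theorem.

    ```python
    # Sketch of Turing's diagonal argument. The oracle `halts` is
    # hypothetical; the argument shows no such function can exist.

    def halts(program, data):
        """Hypothetical oracle: True iff program(data) eventually halts."""
        raise NotImplementedError("unimplementable -- that is the theorem")

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:  # oracle says it halts, so loop forever
                pass
        return           # oracle says it loops, so halt immediately

    # Now consider diagonal(diagonal):
    # - if halts(diagonal, diagonal) is True, diagonal loops forever;
    # - if it is False, diagonal halts immediately.
    # Either way the oracle is wrong, so `halts` cannot exist.
    ```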

  • Matter to whom?

    We are discussing whether creating an AGI is possible, not whether humans can tell the difference (which is a separate question).

    Most people can’t tell a correct mathematical equation from an incorrect one, especially when the solution is irrelevant to their lives. Does that mean that doing mathematics correctly “doesn’t matter”? It would be weird to enter a mathematical forum and ask “Why does it matter?”

    Whether we can build an AGI is just a curious question, whose answer for now is No.

    P.S. Defining AGI in economic terms is like defining a CPU in economic terms: pointless. What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.

    That’s a pretty tall order if AGI also has to do philosophy, politics, and science, all fields that require the capacity for rational deliberation and independent thought, btw.

  • A malfunctioning nuke can also destroy humanity. So could a toaster, under the right circumstances.

    The question is not whether we can create a machine that can destroy humanity. (Yes.) Or cure cancer. (Maybe.) The question is whether we can create a machine that can think. (No.)

    What I was discussing earlier in this thread was whether we (scientists) can build an AGI. Not whether we can create something that looks like an AGI, or whether there’s an economic incentive to do so. None of that has any bearing.

    In English, the phrase “what most people mean when they say” idiomatically translates to something like “what I and others engaged in this specific discussion mean when we say.” It’s not a claim about how the general population would respond to a poll.

    Hope that helps!

  • Okay, we can create the illusion of thought by executing complicated instructions. But there’s still a difference between a machine that does what it’s told and one that thinks for itself. The fact that it might be crazy is irrelevant, since we don’t know how to make one at all, crazy or not.

  • That’s fine, but most people (engaged in this discussion) aren’t interested in an illusion. When they say AGI, they mean an actual mind capable of rationality (which requires sensitivity and responsiveness to reasons).

    Calculators, LLMs, and toasters can’t think or understand or reason by definition, because they can only do what they’re told. An AGI would be a construct that can think for itself. Like a human mind, but maybe more powerful. That requires subjective understanding (intuitions) that cannot be programmed. For more details on why, see Gödel's incompleteness theorems. We can’t even axiomatize mathematics, let alone human intuitions about the world at large. Even if it’s possible, we simply don’t know how.