  • That’s how LLMs work.

    This is not how LLMs work. LLMs do not have complex webs of thought correlating concepts like birds, flightlessness, extinction, food, and so on. That is how humans work.

    An LLM assembles a mathematical model of which word should follow which other word by analyzing terabytes of data. If in its training corpus the word that most often appears near "dodo" is "attractive," the LLM will almost always tell you that dodos are attractive. This is not because those concepts are actually related in the LLM, or because the LLM is attracted to dodos, or because LLMs have any thoughts at all. It is simply the output of a bunch of math based on word proximity (sketched crudely at the end of this comment).

    Humans have cognition and mental models. LLMs have frequencies and word weights. You have correctly identified that both of these things can be portrayed as n-dimensional matrices, but you can use those same tools to describe electrical currents or the movement of stars, and those things contain no more thought, and have no more mental phenomena occurring in them, than LLMs do.
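
    To make that concrete, here is a deliberately tiny sketch in Python of the "which word tends to follow which word" idea. It is not how a real LLM is built -- real models use transformer neural networks over learned token embeddings, not raw word counts -- and the toy corpus and fallback behaviour below are invented purely for illustration.

    ```python
    # Toy "which word follows which word" model, purely for illustration.
    # A real LLM is a neural network over token embeddings, not raw bigram counts.
    import random
    from collections import Counter, defaultdict

    corpus = ("the dodo was a flightless bird . "
              "the dodo is extinct . the bird was hunted").split()

    # Count how often each word follows each other word in the toy corpus.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(prev: str) -> str:
        """Pick a follower in proportion to how often it appeared after `prev`."""
        candidates = following[prev]
        if not candidates:  # dead end: nothing in the corpus ever followed this word
            return "."
        words, counts = zip(*candidates.items())
        return random.choices(words, weights=counts, k=1)[0]

    # Generate a few words starting from "the". The output is driven purely by
    # co-occurrence statistics; no concept of birds or extinction sits behind it.
    word, out = "the", ["the"]
    for _ in range(6):
        word = next_word(word)
        out.append(word)
    print(" ".join(out))
    ```

    Scaled up enormously, with learned weights in place of raw counts, that is roughly the kind of word-proximity machinery the comment above is describing.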

  • How is that germane to this question? Do you agree humans can experience mental phenomena? Like, do you think I have any mental models at all?

    If so, then that is the difference between me and an LLM.

  • This is a somewhat sensationalist and frankly uninteresting way to describe neural networks. Obviously it would take years of analysis to understand the weights of each individual node and what they're accomplishing (if it is even understandable in a way that would make sense to people without very advanced math degrees). But that doesn't mean we don't understand the model or what it does. We can and we do.

    You have misunderstood this article if what you took from it is this:

    It’s also very similar in the way that nobody actually can tell precisely how it works, for some reason it just does.

    We do understand how it works -- as an overall system. Inspecting the individual nodes is about as relevant to understanding an LLM as cataloguing the trees in a forest is to learning the name of the city next to that forest.

  • You can't! It's like describing fire to someone who's never experienced fire.

    This is the root of experience and memory and why humans are different from LLMs, which, again, can never understand or experience a cat or fire. But the difference is more fundamental than that. To an LLM, there is no difference between fire and cat. They are simply words with frequencies attached that lead to other words. The only difference between them is the positions they occupy in a mathematical model that sometimes outputs one instead of the other, nothing more.

    Unless you're arguing that my inability to fully express a mental construct to you means I don't experience it myself. Which I think you would agree is absurd?

  • Lol what the fuck? We know exactly how LLMs work. It's not magic, and it's nothing like a human brain. They're literally word-frequency algorithms. There's nothing special about them, and I'm the opposite of threatened; I think it's absurd that people who patently don't understand them are weighing in on this debate to disagree with me, when it's obvious their position can best be described as ignorant.

  • I never said the brain (or memory) was a database. I said it was more like a database than what LLMs have, which is nothing.

  • But that’s a world apart from saying that the cross-linking and mutual dependencies in a metric concept-space are not remotely analogous between humans and large models.

    It's not a world apart; it is the difference itself. And no, they are not remotely analogous.

    When we talk about a "cat," we talk about something we know and experience; something we have a mental model for. And when we speak of cats, we synthesize our actual lived memories and experiences into responses.

    When an LLM talks about a "cat," it does not have a referent. It has no internal model of a cat. Cat is simply a word with weights relative to other words (see the sketch at the end of this comment). It does not think of a "cat" when it says "cat" because it does not know what a "cat" is and, indeed, cannot think at all. Think of it as a very complicated pachinko machine, as another comment pointed out. The ball you drop is the question, and the pegs it hits on the way down are words. There is no thought or concept behind the words; it is simply chance that creates the output.

    Unless you truly believe that humans are dead machines on the inside and that our responses to prompts are based merely on the likelihood of words being connected, you must also believe that humans and LLMs are completely different on a very fundamental level.
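
    As a minimal sketch of the "word with weights, no referent" point: inside a model, "cat" is just an integer id pointing at a row of numbers. The tiny vocabulary and the values below are made up for illustration; real models have tens of thousands of tokens and learned vectors with thousands of dimensions.

    ```python
    # Everything a model "has" for a word is a vector of learned numbers --
    # nothing cat-shaped behind "cat", nothing hot behind "fire". Values made up.
    vocab = {"cat": 0, "fire": 1, "phone": 2}       # token -> row index

    embeddings = [
        [0.12, -0.58,  0.33,  0.91],                # the row used for "cat"
        [0.80,  0.05, -0.44,  0.27],                # the row used for "fire"
        [-0.31, 0.66,  0.10, -0.72],                # the row used for "phone"
    ]

    def representation(word: str) -> list[float]:
        """All the model stores for a word: a list of floats, nothing more."""
        return embeddings[vocab[word]]

    print(representation("cat"))     # [0.12, -0.58, 0.33, 0.91]
    print(representation("fire"))    # [0.8, 0.05, -0.44, 0.27]
    ```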

  • I said:

    No, you knowing your old phone number is closer to how a database knows things than how LLMs know things.

    Which is true. Human memory is more like a database than an LLM's "memory." You have knowledge in your brain which you can consult. A database has data in it which it can consult. While memory is not a database, in this sense they are similar: they both exist and contain information in some form that can be acted upon (a rough sketch of the contrast follows at the end of this comment).

    LLMs do not have any database, no memories, and contain no knowledge. They are fundamentally different from how humans know anything, and it's pretty accurate to say LLMs "know" nothing at all.
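
    To put the database-versus-LLM contrast in code: the first half below looks up a stored value and returns it exactly as written, while the second half merely samples digits from a probability distribution -- a crude stand-in for likelihood-based generation, not a real LLM. The table, the fictional 555 number, and the digit weights are all invented for illustration.

    ```python
    # Database-style recall vs. likelihood-style generation, in miniature.
    import random
    import sqlite3

    # A database stores the fact itself and hands it back verbatim, every time.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE facts (key TEXT PRIMARY KEY, value TEXT)")
    db.execute("INSERT INTO facts VALUES ('childhood_phone', '555-0142')")
    row = db.execute("SELECT value FROM facts WHERE key = 'childhood_phone'").fetchone()
    print(row[0])                       # -> 555-0142, exactly as stored

    # A generative model only has probabilities over what tends to appear next;
    # it produces *a* plausible-looking number, not *the* stored number.
    digits = [str(d) for d in range(10)]
    weights = [5, 9, 1, 2, 8, 30, 4, 3, 2, 6]       # made-up digit frequencies

    def plausible_phone() -> str:
        picked = random.choices(digits, weights=weights, k=7)
        return "".join(picked[:3]) + "-" + "".join(picked[3:])

    print(plausible_phone())            # different (and usually wrong) each call
    ```

    That is the shape of the distinction being drawn in this thread: stored values you can look up versus weighted likelihoods you can only sample from.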

  • You can indeed tell if something is true or untrue. You might be wrong, but that is quite different -- you can have internal knowledge that is right or wrong. The very word "misremembered" implies that you did (or even could) know it properly.

    LLMs do not retain facts and they can and frequently do get information wrong.

    Here's a good test. Choose a video game or TV show you know really well -- something a little older and somewhat complicated -- and ask ChatGPT about specific plot points in it.

    As an example, I know Final Fantasy 14 extremely well and have played it for a long time. ChatGPT will confidently state facts about the game that are entirely and totally incorrect: it confuses characters, it moves plot points around. This is because it chooses what it is statistically likely to say, not what is actually correct. Indeed, it has no ability to know what is correct at all.

    AI is not a simulation of human neural networks. It uses the concept of mathematical neural networks, but it is a word model, nothing more.

  • No, the way humans know things and the way LLMs know things are entirely different.

    The flaw in your understanding is believing that LLMs have internal representations of memes and cats and cars. They do not. They have no memories or internal facts... whereas I think most people agree that humans can actually know things and have internal memories and truths.

    It is fundamentally different from asking you to forget that cats exist. You are incapable of altering your memories because that is how brains work. LLMs are incapable of removing information because the information was used to build the model with which they choose their words, and once it is inside the model it can no longer be picked back out as a distinct piece of information.

    An LLM has no understanding of anything you ask it and is simply a mathematical model of word weights. Unless you truly believe that humans have no internal reality and no memories and simply say things based on whatever response is most likely, you must also believe that human and LLM knowledge are entirely different from each other.

  • Yes, but only by chance.

    Human brains can't forget because human brains don't operate that way. LLMs can't forget because they don't know information to begin with, at least not in the same sense that humans do.

  • The difference is that LLMs don't "remember" anything because they don't "know" anything. They don't know facts, or English, or that reality exists; they have no internal truths, just a mathematical model of word weights. You can't ask one to forget information because it knows no information.

    This is obviously quite different from asking a human to forget anything; we can identify the information in our brain -- it exists there. We simply have no conscious control over our ability to remember it.

    The fact that LLMs employ neural networks doesn't make them like humans or like brains at all.

  • It doesn't matter that there is no literal number in your brain and that there are instead chemical/electrical impulses. There is an impulse there signifying your childhood phone number. You did (and do) know it. And other things too, presumably.

    While our brains are not perfectly efficient, we can and do actually store information in them. Information that we can judge as correct or incorrect; true or false; extant or nonexistent.

    LLMs don't know anything and never knew anything. Their responses are mathematical models of word likelihood.

    They don't understand English. They don't know what reality is like or what a phone number represents. If they get your phone number wrong, it isn't because they "misremembered" or because they're "uncertain." It's because they are literally incapable of retaining a fact. The phone number you asked for is part of a mathematical model now, and what you get back is the output of that model, not the correct phone number.

    Conversely, even if you get your phone number wrong, it isn't because you didn't know it. It's because memory is imperfect and degrades over time.

  • But our memories exist -- I can say definitively "I know my childhood phone number." It might be meaningless, but the information is stored in my head. I know it.

    AI models don't know your childhood phone number, even if you tell them explicitly, even if they trained on it. Your childhood phone number becomes part of a model of word weights, which makes it slightly more likely, when someone asks the model for a phone number, that some digits of your childhood phone number will appear (or perhaps the entire thing!).

    But the original information is lost.

    You can't ask it to "forget" the phone number because it doesn't know it and never knew it. Even if it supplies literally your exact phone number, it isn't because it knew your phone number or because that information is correct. It's because that sequence of numbers is, based on its model, very likely to occur in that order.

  • No, you knowing your old phone number is closer to how a database knows things than how LLMs know things.

    LLMs don't "know" information. They don't retain an individual fact, or know that something is true and something else is false (or that anything "is" at all). Everything they say is generated based on the likelihood of a word following another word based on the context that word is placed in.

    You can't ask it to "forget" a piece of information because there's no "childhood phone number" in its memory. Instead there's an increased likelihood it will say your phone number as the result of someone prompting it to tell it a phone number. It doesn't "know" the information at all, it simply has become a part of the weights it uses to generate phrases.

  • It’s always the people you most suspect.

  • Where did the chicken go

  • I don't think safety courses and licensing are a huge barrier to entry though, unless we let them be. And on the other hand the safety benefits seem to be enormous.

    And yes, training and a license would indeed make a difference in how riders conduct themselves, including wearing a helmet and paying attention.