Posts 4 · Comments 552 · Joined 2 yr. ago

  • No one is saying there are problems with the bots (though I don't understand why you're being so defensive of them -- they have no feelings, so describing their limitations doesn't hurt them).

    The problem is what humans expect from LLMs and how humans use them. Their purpose is to string words together in pretty ways. Sometimes those ways are also correct. Being aware of what they're designed to do, and of their limitations, seems important for using them properly.

  • A chatbot has no use for that, it’s just there to mush through lots of data and produce some, it doesn’t have or should worry about its own existence.

    It literally can't worry about its own existence; it can't worry about anything because it has no thoughts or feelings. Adding computational power will not miraculously change that.

    Add some long term memory, bigger prompts, bigger model, interaction with the Web, etc. and you can build a much more powerful bit of software than what we have today, without even any real breakthrough on the AI side.

    I agree this would be a very useful chatbot. But it is still not a toaster. Nor would it be conscious.

  • No one is saying "they're useless." But they are indeed bullshit machines, for the reasons the author (and you yourself) acknowledged. Their purposes is to choose likely words. That likely and correct are frequently the same shouldn't blind us to the fact that correctness is a coincidence.

  • Obviously you should do what you think is right, so I mean, I'm not telling you you're living wrong. Do what you want.

    The reason to not trust a human is different from the reasons not to trust an LLM. An LLM is not revealing to you knowledge it understands. Or even knowledge it doesn't understand. It's literally completing sentences based on word likelihood. It doesn't understand any of what it's saying, and none of it is rooted in any knowledge of the subject of any kind.

    I find that concerning in terms of learning from it. But if it worked for you, then go for it.

  • People think they are actually intelligent and perform reasoning. This article discusses how and why that is not true.

  • It is your responsibility to prove your assertion that if we just throw enough hardware at LLMs they will suddenly become alive in any recognizable sense, not mine to prove you wrong.

    You are anthropomorphizing LLMs. They do not reason and they are not lazy. The paper discusses a way to improve their predictive output, not a way to actually make them reason.

    But don't take my word for it. Go talk to ChatGPT. Ask it anything like this:

    "If an LLM is provided enough processing power, would it eventually be conscious?"

    "Are LLM neural networks like a human brain?"

    "Do LLMs have thoughts?"

    "Are LLMs similar in any way to human consciousness?"

    Just always make sure to check the output of LLMs. Since they are complicated autosuggestion engines, they will sometimes confidently spout bullshit, so everything they say must be examined for correctness. (As my initial post discussed.)

  • No, it's true, "luck" might be overstating it. There's a good chance most of what it says is as accurate as the corpus it was trained on. That doesn't personally make me very confident, but ymmv.

  • I'm not guessing. When I say it's a difference of kind, I really did mean that. There is no cognition here, and we know enough about cognition to say that LLMs are not performing anything like it.

    Believing LLMs will eventually perform cognition with enough hardware is like saying, "if we throw enough hardware at a calculator, it will eventually become alive." Even if you throw all the hardware in the world at it, there is no emergent property of a calculator that would create sentience. So too LLMs, which really are just calculators that can speak English. But just like calculators they have no conception of what English is and they do not think in any way, and never will.

  • Basically the problem is point 3.

    You obviously know some of what it's telling you is inaccurate already. There is the possibility it's all bullshit. Granted, a lot of it probably isn't, but it will tell you the bullshit with exactly the same level of confidence as actual facts... because it doesn't know Galois theory and it isn't teaching it to you; it's simply stringing sentences together in response to your queries.

    If a human were doing this we would rightly proclaim the human a bad teacher that didn't know their subject, and that you should go somewhere else to get your knowledge. That same critique should apply to the LLM as well.

    That said, it definitely can be a useful tool. I just would never fully trust knowledge I gained from an LLM. All of it needs to be reviewed for correctness by a human.

  • Yeah definitely not saying it's not useful :) But it also doesn't do what people widely believe it does, so I think articles like this are helpful.

  • LLMs are fundamentally different from human consciousness. It isn't a problem of scale but of kind.

    They are like your phone's autocomplete, but very, very good. But there's no level of "very good" for autocomplete that makes it a human, or gives it sentience, or lets it understand the words it is suggesting. It simply returns the next most likely word in a response, over and over (roughly the loop sketched after this comment).

    If we want computerized intelligence, LLMs are a dead end. They might be a good way for that intelligence to speak pretty sentences to us, but they will never be that themselves.
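
    A minimal sketch of that "returns the next most likely word" loop, assuming a toy lookup table (everything here is invented; a real LLM scores a whole vocabulary with a neural network conditioned on the full context, but the loop has the same shape): predict a word, append it, feed the result back in, repeat.

        # Minimal sketch of the autocomplete loop; the table is invented.
        # A real model replaces this lookup with a neural network, but the
        # loop itself is the same: pick the likeliest next word and repeat.
        most_likely_after = {
            "<start>": "the",
            "the": "model",
            "model": "predicts",
            "predicts": "the",
        }

        def complete(word, max_words=4):
            out = []
            for _ in range(max_words):
                word = most_likely_after.get(word)
                if word is None:
                    break
                out.append(word)
            return " ".join(out)

        print(complete("<start>"))  # -> "the model predicts the"

    Nothing in the loop ever consults meaning, truth, or understanding; making the predictor bigger only makes the guesses better, it doesn't add such a step.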

  • And also it's no replacement for actual research, either on the Internet or in real life.

    People assume LLMs are like people, in that they won't simply spout bullshit if they can avoid it. But as this article properly points out, they can and do. You can't really trust anything they output. (At least not without verifying it all first.)

  • They only use words in context, which is exactly the problem. They don't know what the words mean or what the context means; they're glorified autocomplete.

    I guess it depends on what you mean by "information." Since all of the words it uses are meaningless to it (it doesn't understand anything it is asked or anything it says), I would say it has no information and knows nothing. At least, nothing more than a calculator knows when it returns 7 + 8 = 15. It doesn't know what those numbers mean or what they represent; it's simply returning the result of a computation.

    So too LLMs responding to language.
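
    As a rough sketch of that analogy (both "machines" below are invented toys, not real systems): each one is a mechanical mapping from input to output, and neither carries any representation of what its inputs or outputs mean.

        # Rough sketch of the analogy; both functions are invented toys.
        # Each is a mechanical input-to-output mapping with no representation
        # of meaning anywhere inside it.

        def calculator(a, b):
            # Returns 15 for (7, 8) without any notion of what a quantity is.
            return a + b

        likely_next_word = {  # invented lookup standing in for a trained model
            "how are": "you",
            "thank": "you",
        }

        def language_machine(context):
            # Returns a statistically likely word without knowing what it means.
            return likely_next_word.get(context, "...")

        print(calculator(7, 8))             # 15
        print(language_machine("how are"))  # you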

  • They don't generate facts, as the article says. They choose the next most likely word. Everything is confidently plausible bullshit. That some of it is also true is just luck.

  • I really don’t think it’s accurate to say China has never been colonized. Check out Chinese concessions for more.

  • I was mostly posting this because the last time LLMs came up, people kept on going on and on about how much their thoughts are like ours and how they know so much information. But as this article makes clear, they have no thoughts and know no information.

    In many ways they are simply a mathematical party trick: formulas trained on so much language that they can produce language themselves. But there is no “there” there.

  • Yes, this is how all LLMs function.

  • You really have to give it to nativists; they're willing to shoot themselves in the foot, so people overseas will also get shot in the foot.

  • found?