
  • That last sentence you wrote exemplifies the reductionism I mentioned:

    It does, by showing it can learn associations with just limited time from a human's perspective, it clearly experienced the world.

    Nope, that does not mean it experienced the world; that's the reductionist view. It's reductionist because you said it learnt from a human's perspective, which it didn't. A human's perspective is much more than a camera and a microphone in a cot, and experience is much more than being able to link words to pictures.

    In general, you (and others with a similar view) reduce the complexity of the words used to describe consciousness, like "understanding", "experience" and "perspective", so they no longer carry the weight they were intended to have. At that point you attribute them to neural networks, which are just categorisation algorithms.

    I don't think being alive is necessarily essential for understanding, I just can't think of any examples of non-living things that understand at present. I'd posit that there is something more we are yet to discover about consciousness and the inner workings of living brains that cannot be fully captured in the mathematics of neural networks as yet. Otherwise we'd have already solved the hard problem of consciousness.

    I'm not trying to shift the goalposts; it's just difficult to convey this concisely without writing a wall of text. Neither of the links you provided is actual evidence for your view, because this isn't really the kind of discussion evidence can settle. It's really a philosophical one about the nature of understanding.

  • Yes, you do, unless you have a really reductionist view of the word "experience".

    Besides, that article doesn't really support your statement, it just shows that a neural network can link words to pictures, which we know.

  • Yes, sorry, I probably shouldn't have used the word "human". It's a concept that we apply to living things that experience the world.

    Animals certainly understand things but it's a sliding scale where we use human understanding as the benchmark.

    My point stands, though: to attribute it to an algorithm is strange.

  • Understanding is a human concept so attributing it to an algorithm is strange.

    It can be done by taking a very shallow definition of the word but then we're just entering a debate about semantics.

  • Whilst everything you linked is great research which demonstrates the vast capabilities of LLMs, none of it demonstrates understanding as most humans know it.

    This argument always boils down to one's definition of the word "understanding". For me that word implies a degree of consciousness, for others, apparently not.

    To quote GPT-4:

    LLMs do not truly understand the meaning, context, or implications of the language they generate or process. They are more like sophisticated parrots that mimic human language, rather than intelligent agents that comprehend and communicate with humans. LLMs are impressive and useful tools, but they are not substitutes for human understanding.

  • I believe OP is attempting to take on an army of straw men in the form of a poorly chosen meme template.

  • I fully back your sentiment OP; you understand as much about the world as any LLM out there and don't let anyone suggest otherwise.

    Signed, a "contrarian".

  • Pretty much the same. Around 2012 it became really apparent that nothing was going to be done in time, and I personally flipped from "Science/tech will save us!" to pessimist. At this point it's just realism.

    The way the world handled Covid was the final nail in the coffin for me when the majority of humanity demonstrated that they can't/won't behave as a collective to save lives if it inconveniences them. It was the perfect test run for what is to come and most made it abundantly clear they can't cope with any kind of disruption to their capitalistic routine.

    Now that the data is beginning to show in the graphs, the news is slowly seeping into mainstream circles. But at this point it's way too late, and nothing short of ditching the idea of growth and uniting/mobilising the entire world against the issue will solve it.

    Luckily my partner is fully aware too, so we're just making what we can of the time we have left. My friends and family, on the other hand, are busy having kids and, whilst they appear to listen, obviously don't grasp the gravity of the situation.

  • OP's solved it everyone!

    We all just need to get in our cars that we definitely have and cross oceans to a Lemmy meetup where we amass in our hundreds to bring down the corporate hegemony, solve climate change and live out the rest of our days remotely working together in peace.

  • This graph suggests the latter.

    Not to mention the rising tensions around the globe reminiscent of the 1930s.

  • I interpreted it as they'd happily solve our problems providing we bring our own solutions.

  • For some reason I find it absolutely hilarious that some idiots have downvoted this.

    "Please keep your existential dread to yourself as we only really want to hear problems that can be fixed with a pithy Lemmy comment."

    Sorry bro, not much consolation but I feel you.

  • Modern civilisation is ending and likely cannot be stopped.

    Suggestions on a postcard pls.

  • Lol indeed, just seen you moderate a Simulation Theory sub.

    Congratulations, you have completed the tech evangelist starter pack.

    Next thing you'll be telling me we don't have to worry about climate change because we'll just use carbon capture tech, and failing that, all board Daddy Elon's spaceship to terraform Mars.

  • Have you ever considered you might be the laypeople?

    Equating a debate about the origin of understanding to antivaxxers...

    You argue like a Trump supporter.

  • To hijack your analogy, it's more akin to me stating a tree is a plant and you saying "So are these" while pointing at a forest of plastic Christmas trees.

    I'm pretty curious: why do you imagine you have so many downvotes?

  • You posted the article rather than the research paper and had every chance of altering the headline before you posted it but didn't.

    You questioned why you were downvoted so I offered an explanation.

    Your attempts to form your own arguments often boil down to "no you".

    So, as I've said all along, we just differ on our definitions of the term "understanding" and have devolved into a semantic exchange. You are now using a bee analogy, but for a start a bee is a living thing, not a mathematical model, which is another indication that you don't understand nuance. Secondly, again, it's about definitions. Bees don't understand the number zero in the middle of the number line, but I'd agree they understand the concept of nothing, as in "There is no food."

    As you can clearly see from the other comments, most people interpret the word "understanding" differently from yourself and AI proponents. So I infer you are either not a native English speaker or are trying very hard to shoehorn your oversimplified definition in to support your worldview. I'm not sure which but your reductionist way of arguing is ridiculous as others have pointed out and full of logical fallacies which you don't seem to comprehend either.

    Regarding what you said about Pythag, I agree, and would expect it to outperform statistical analysis. That's because it has arrived at and encoded the theorem within its graphs, but I and many others do not define this as knowledge or understanding, because those words have other connotations to the majority of humans. It wouldn't, for instance, be able to tell you what a triangle is using that model alone.

    I spot another appeal to authority... "Hinton said so and so..." It matters not. If Hinton said the sky was green you'd believe it, as you barely think for yourself when others you consider more knowledgeable have stated something which may or may not be true. Might explain why you have such an affinity for AI...

  • Title of your post is literally "New Theory Suggests Chatbots Can Understand Text".

    You also hinted at it with your Pythag analogy.