I agree with you, and you worded what I was clumsily trying to say. Thank you:)
By naturalism I mean the philosophical idea that only natural laws and forces are present in this world. Or, as an extension, the idea that there is only matter.
Doesn't that depend on your view of consciousness, and on whether you hold a naturalistic worldview?
I thought science is starting to find more and more that a 100% naturalistic worldview is hard to uphold. (E: I'm no expert on this topic, and the information and podcasts I listen to are probably very biased towards my own view. The point I'm making is that "we are just neurons" is more a disputed topic for debate than actual fact once you dive a little bit into neuroscience.)
I guess my initial question is almost more philosophical in nature and less deterministic.
Interesting thoughts! Now that I think about this, we as humans have a huge advantage by having not only language, but also sight, smell, hearing and taste. An LLM basically only has "language." We might not realize how much meaning we create through those other senses.
I'll see if I can find that article/paper about the chess moves. That sounds interesting!
Could it be that we ascribe conceptual knowledge to an LLM when in fact it's chance? We humans are masters at seeing patterns that aren't there. But then again, like another commenter said, maybe the question is more about consciousness itself, and what that actually means. What it means to "understand" something.
After reading some of the comments and pondering this question myself, I think I may have thought of a good analogy that at least helps me (even though I know fairly well how LLMs work).
An LLM is like a car on the road. It can follow all the rules: braking at a red light, turning, signaling, etc. However, the car has NO understanding of any of the traffic rules it follows.
A car can even break those rules while behaving exactly as designed: if you push the gas pedal at a red light, the car is not in the wrong, because it doesn't KNOW the rules, it just acts on your input.
Why this works for me is that when I give examples of human or animal behaviour, I automatically ascribe some sort of consciousness. An LLM has no consciousness (as far as I know for now). That idea is exactly what I want to convey.
If I think of a car and rules, it is obvious to me that a car has no concept of rules, but still is part of those rules somehow.
I commented something similar on another post, but this is exactly why I find this phenomenon so hard to describe.
A teenager in a new group still has some understanding and has a mind. They know the meaning of most of the words being said. Sure, some catchphrases might be new, but general topics shouldn't be too hard to follow.
This is nothing like genAI. GenAI doesn't know anything at all. It has (simplified) a list of words that are somehow connected to each other. But the AI has no concept of a wheel, of what round is, what rolling is, what rubber is, what an axle is. NO understanding. Just words that happen to describe all of it. For us humans it is so difficult to grasp that something can use language without knowing ANY of the meaning.
How can we describe this so our brains accept that you can have language without understanding? The Chinese Room experiment comes close, but it's quite complicated to explain as well, I think.
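Maybe a toy program makes it more concrete. This is just a minimal sketch I made up for illustration (a bigram Markov chain in Python, nothing like a real LLM): it "writes" sentences purely from which word followed which in its training text, with zero concept of what any word means.

```python
import random

# A tiny "training corpus": the generator never sees meaning,
# only which word follows which.
corpus = (
    "the wheel is round and the wheel rolls on the axle "
    "the tire is rubber and the tire rolls on the road"
).split()

# Build a bigram table: word -> list of words seen right after it.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

# "Write" a sentence by repeatedly picking a random recorded successor.
word = "the"
out = [word]
for _ in range(12):
    word = random.choice(follows.get(word, corpus))
    out.append(word)

print(" ".join(out))
# e.g. "the tire rolls on the axle the wheel is rubber and the tire"
# Locally plausible word order, zero understanding anywhere.
```

A real LLM is obviously vastly more sophisticated, but the point stands: fluent-looking output can come from nothing but "which words tend to follow which".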
Hmm, now that I read this, I have a thought: it might also be hard to wrap our heads around this issue because we all talk about AI as if it were an entity. Even the sentence "it makes shit up" gives the AI some credit, as if it "thinks" about things. It doesn't make shit up; it does exactly what it is programmed to do: create good sentences. And it succeeds.
Maybe the answer is just to stop talking about AIs as "saying" things, and start talking about GenAI as "generating sentences"? That way we emotionally distance ourselves from "it", and it's more difficult to ascribe consciousness to an AI.
I think what makes it hard to wrap your head around is that sometimes, this text is emotionally charged.
What I notice is that it's especially hard when an AI "goes rogue" and starts saying sinister and malicious things. Our brain immediately jumps to "it has bad intent", when in reality it's just drawing on some Reddit posts where it happened to connect troll messages or extremist texts.
How can we decouple emotionally when it feels so real to us?
Wait, the Dutch Optimel brand doesn't have attached caps. I think? Or do I just mindlessly rip the caps off so they are loose? It doesn't make any sense to have those attached at an angle like that.
You don't need any advanced meta knowledge to play most games. There are options like playing against easier AIs or similarly skilled players.
Look at some of T90's Low Elo Legends videos about Age of Empires 2 on YouTube. Most players there don't use the advanced meta.
Heck, as a kid I never used the advanced meta and had loads of fun.
The internet TELLS you that the latest meta is necessary and that otherwise you're playing suboptimally. But they're just optimizing the fun out of the game if you're not that kind of player.
This mentality is even worse in competitive shooters. People play the latest "meta" without realizing they don't even have the skill to pull it off. I wish the "internet" would just let players have fun in their own way, and accept that playing games "suboptimally" can still be just as fun and rewarding an experience.
Ooh yeah, the Flashpoint Archive is an amazing resource! I've dabbled in it before. Great blast from the past!