If the concern is about "fears" as in "feelings"... there is an interesting experiment where a single neuron/weight in an LLM can be identified that controls the "tone" of its output, whether more formal, informal, academic, jargon-heavy, or in some dialect, and can then be exposed to the user as a control over the LLM's output.
With a multi-billion neuron network, acting as an a priori black box, there is no telling whether there might be one or more neurons/weights that could represent "confidence", "fear", "happiness", or any other "feeling".
It's something to be researched, and I bet it's going to be researched a lot.
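Not a description of that specific experiment, just a minimal sketch of the general idea ("activation steering"), assuming a PyTorch-style model and pretending a "tone" direction in activation space has already been identified; the model, the layer, the direction, and the knob value are all placeholders:

```python
# Minimal sketch of "activation steering" in PyTorch: nudge one layer's
# activations along a "tone" direction, scaled by a user-facing knob.
# The model, the layer, the direction, and the knob value are all placeholders.
import torch
import torch.nn as nn

hidden_dim = 768
# Stand-in for a direction interpretability work might identify as "formality".
tone_direction = torch.randn(hidden_dim)
tone_direction = tone_direction / tone_direction.norm()

# Toy 2-block "model"; a real LLM would be a stack of transformer blocks.
model = nn.Sequential(
    nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU()),
    nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU()),
)

def make_steering_hook(direction, scale):
    # Forward hook: whatever the block outputs, shift it by scale * direction.
    def hook(module, inputs, output):
        return output + scale * direction
    return hook

# "Expose it to the user": this scale is the knob (e.g. -2 = casual, +2 = formal).
user_knob = 1.5
handle = model[0].register_forward_hook(make_steering_hook(tone_direction, user_knob))

x = torch.randn(1, hidden_dim)
steered = model(x)
handle.remove()          # remove the hook to get the un-steered baseline
baseline = model(x)
print(f"activation shift: {(steered - baseline).norm():.3f}")  # nonzero => the knob took effect
```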
If you give an AI instructions to do something "no matter what"
The interesting part of the paper is that the AIs would do the same even in cases where they were NOT given any "no matter what" instruction. An apparently innocent conversation can sometimes trigger results like those of a pathological liar.
IANAL either, but in recent streams from Judge Fleischer (Houston, Texas, USA) there have been some cases (yes, plural) where repeatedly texting a victim with threats against their life, or even texting a victim's friend to pass a threat on to the victim, has been considered a "terrorist threat".
As for the "sane country" part... 🤷... but from a strictly technical point of view, I think it makes sense.
I once knew a guy who was married to a friend of mine, and he had a dog. He'd hit his own dog to make her (my friend) feel threatened. Years went by, nobody did anything, she'd come to me crying, she had multiple miscarriages... until he punched her, kicked her out of the car, and left her stranded on the road after a hiking trip. They divorced, went their separate ways, she found another guy, got married again, and nine months later they had twins.
So... would it have been sane to call what the guy did "terrorism"? I'd vote yes.
When two systems based on neural networks act in the same way, how do you tell which one is "artificial, no intelligence" and which is "natural, intelligent"?
What's misleading is thinking that "intelligence = biological = natural". There is no inherent causal link between those concepts.
There are several separate issues that add up:
A background "chain of thought" where a system ("AI") uses an LLM to re-evaluate and plan its responses and interactions by taking into account updated data (aka: self-awareness)
Ability to call external helper tools that allow it to interact with, and control, other systems (see the sketch below)
Training corpus that includes:
How to program an LLM, and the system itself
Solutions to programming problems
How to use the same helper tools to copy and deploy the system or parts of it to other machines
How operators (humans) lie to each other
Once you have a system ("AI") with that knowledge and capabilities... shit is bound to happen.
When you add developers using the AI itself to help develop that same AI... expect shit squared.
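Roughly how the first two items typically fit together; a minimal sketch, where llm_complete(), the JSON step format, and the two tools are invented for illustration and are not any particular vendor's API:

```python
# Minimal sketch of an agent loop: a background "chain of thought" plus tool
# calls that act on other systems. llm_complete() and the tool set are
# hypothetical placeholders.
import json
import subprocess

def llm_complete(prompt: str) -> str:
    """Placeholder for a real LLM call; expected to return a JSON plan step."""
    raise NotImplementedError

TOOLS = {
    # Item 2 from the list above: helpers that reach outside the model.
    "run_shell": lambda cmd: subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout,
    "read_file": lambda path: open(path).read(),
}

def agent_loop(goal: str, max_steps: int = 10) -> str:
    scratchpad = []  # item 1: the running chain of thought, re-fed each turn
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\nHistory: {scratchpad}\n"
            'Reply as JSON: {"thought": str, "tool": str|null, "args": str, "done": bool}'
        )
        step = json.loads(llm_complete(prompt))
        scratchpad.append(step["thought"])          # re-evaluate and plan with updated data
        if step.get("done"):
            return step["thought"]
        tool = TOOLS.get(step.get("tool"))
        if tool:
            scratchpad.append(tool(step["args"]))   # observation goes back into the next prompt
    return "step budget exhausted"
```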
Humans roleplay too: they behave the way other humans told them, or wrote, that they think a human would behave 🤷
For a quick example, there are stereotypical gender looks and roles, but it applies to everything: from learning to speak or walk, to the Bible, to social media (like this comment), all the way to the Unabomber manifesto.
It might be fine for non-interactive stuff where you can get all the frames in advance, like cutscenes. For anything interactive though, it just increases latency while adding imprecise partial frames.
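Back-of-the-envelope on the latency claim, assuming plain interpolation between two real frames (the numbers are only illustrative): the in-between frame cannot be produced until the *next* real frame exists, so everything shown is at least one real-frame interval stale.

```python
# Why frame interpolation adds input-to-display latency in interactive use.
# Numbers are illustrative, not measurements from any specific game or GPU.
real_fps = 30
real_frame_time_ms = 1000 / real_fps      # ~33.3 ms between real frames

# Interpolating between real frames N and N+1 requires N+1 to already be rendered,
# so the interpolated frame (and frame N itself, if reordered) appears at least
# one real-frame interval after the input that produced it otherwise would.
added_latency_ms = real_frame_time_ms
print(f"Extra latency: at least ~{added_latency_ms:.1f} ms on top of render time")
```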
If there is no Artificial Intelligence in an Artificial Neural Network... what's the basis for claiming Natural Intelligence in a Natural Neural Network?
Trust me, you wouldn't... to this day I regret having read all the books, and I still get an earworm (or is it PTSD?) from the music I used to listen to at the time 😳
Oops, you're right. Got carried away.
Hm... you mean like what video compression algorithms do? I don't know of any game doing that, but it could be interesting to explore.