If robots could lie, would we be okay with it? A new study throws up intriguing results.
What do you call LLMs other than bullshit generators?
Bullshitting implies intent. LLMs make mistakes, just like humans do.
An LLM's "intent" is always to give you a plausible response, even when it doesn't have the "knowledge". The same behaviour in a human would be classed as lying, IMHO.