AI hallucinations are impossible to eradicate — but a recent, embarrassing malfunction from one of China’s biggest tech firms shows how they can be much more damaging there than in other countries
Kissaki @beehaw.org
I wouldn't call pasting verbatim training data a hallucination when it fits the prompt. It's not necessarily making stuff up.
I feel like you're conflating the tool's intended behavior with its technical limitations. Yes, it isn't knowingly reasoning. But that doesn't change the fact that the user interface is prompt-style, with the goal of producing an answer.
I think "hallucination" is fitting terminology, because it encompasses the various ways answers can be false.
What would you call it? Would you name each failure by its specific issue, or would you use a general term like "error" or "wrong"?