Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’
Zeth0s @lemmy.world
The network doesn't detect matches, but the model definitely works on similarities. Words are mapped into a hyperspace, with the idea that this space can mathematically retain conceptual similarity as a spatial representation.
Words are transformed into a mathematical representation that is able (or at least tries) to retain the semantic information of the words.
But the different meanings of words belong to the words themselves and are defined by the language; the model cannot modify them.
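To make the similarity-in-space idea concrete, here's a minimal sketch using made-up 4-dimensional vectors. The words and numbers are invented purely for illustration; real models learn embeddings with hundreds or thousands of dimensions from data:

```python
import numpy as np

# Toy 4-dimensional embeddings, hand-picked for illustration only.
# In a real model these values are learned during training.
embeddings = {
    "cat": np.array([0.9, 0.8, 0.1, 0.0]),
    "dog": np.array([0.8, 0.9, 0.2, 0.1]),
    "car": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related words end up pointing in similar directions,
# so their cosine similarity is high.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high, ~0.99
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # low, ~0.12
```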
Anyway, we are talking about details here. We could bore the audience to death.
Edit: I asked GPT-4 to summarize the concepts. I believe it did a decent job. I hope it helps:
In essence, the entire process of token representation within the Transformer model can be seen as continuous transformations within a vector space. The space itself can be considered a learned representation where relative positions and directions hold semantic and syntactic significance. The model's training process essentially shapes this space in a way that facilitates accurate and coherent language understanding and generation.
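As a toy illustration of "directions holding semantic significance", here's a sketch of the classic word-analogy trick with hand-picked 3-dimensional vectors. The values are contrived so the analogy works out; in a trained model such directions emerge from the data rather than being set by hand:

```python
import numpy as np

# Hypothetical 3-d embeddings, chosen by hand for this example.
emb = {
    "king":  np.array([0.9, 0.9, 0.1]),
    "queen": np.array([0.9, 0.1, 0.9]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

# "king" - "man" + "woman" should land near "queen": the
# (man -> woman) direction acts like a gender-like feature axis.
target = emb["king"] - emb["man"] + emb["woman"]

def nearest(vec: np.ndarray) -> str:
    """Return the vocabulary word whose embedding is closest in direction."""
    return max(
        emb,
        key=lambda w: vec @ emb[w] / (np.linalg.norm(vec) * np.linalg.norm(emb[w])),
    )

print(nearest(target))  # "queen"
```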