AI hallucinations are getting worse – and they're here to stay
vintageballs @feddit.org
In the case of reasoning models, definitely. Reasoning datasets weren't even a thing a year ago, and from what we know about how the larger models are trained, most task-specific training data is synthetic (often a small human-written set serves as a seed and is then synthetically augmented).
However, I think it's safe to assume this has been the case for regular chat models as well - the Self-Instruct and Orca papers are already quite old.
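
For anyone curious what that augmentation loop looks like, here's a minimal Self-Instruct-style sketch in Python. The `generate()` stub is a hypothetical stand-in for whatever model endpoint you'd actually call, and the seed tasks, pool size, and duplicate filter are illustrative simplifications, not the paper's actual implementation (which starts from 175 human-written tasks and filters candidates by ROUGE-L similarity):

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (swap in your API client).
    Returns canned text so the sketch runs end to end."""
    return random.choice([
        "Translate 'good morning' into three languages.",
        "List five prime numbers greater than 100.",
    ])

# Self-Instruct bootstraps from a small human-written seed set
# (the paper uses 175 tasks; three here just to show the shape).
seed_tasks = [
    "Explain why the sky is blue to a five-year-old.",
    "Write a Python function that reverses a string.",
    "Summarize the plot of Hamlet in two sentences.",
]
task_pool = list(seed_tasks)

for _ in range(100):
    # Show the model a few sampled examples and ask for a new task.
    examples = random.sample(task_pool, k=min(3, len(task_pool)))
    prompt = (
        "Here are some example tasks:\n"
        + "\n".join(f"- {t}" for t in examples)
        + "\nWrite one new, different task:"
    )
    candidate = generate(prompt).strip()

    # Crude novelty filter: drop exact duplicates. The paper instead
    # rejects candidates too similar (ROUGE-L) to anything in the pool.
    if candidate and candidate.lower() not in (t.lower() for t in task_pool):
        task_pool.append(candidate)

print(f"pool: {len(task_pool)} tasks ({len(seed_tasks)} human-written)")
```

The point of the loop is the ratio: a handful of human-written examples can bootstrap a pool orders of magnitude larger, with the model also writing the answers that then become fine-tuning data.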