The Collapse of GPT: Will future artificial intelligence systems perform increasingly poorly due to AI-generated material in their training data?
I'm more concerned about LLMs collapsing the whole idea of a "real world".
I'm not a machine learning expert, but I do get the basic concept of training a model and then evaluating its output against real data. The whole thing rests on the idea that you have a model trained on relatively small samples of the real world, and a big, clearly distinct "real world" against which to check the model's performance.
If LLMs have already ingested basically all the information in the "real world", and their output is so pervasive that you can't easily tell what's true and what's AI-generated slop, then "how do we train our models now" is not my main concern.
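To put the training-side version of that in toy form (my own sketch, nothing from the article): fit a simple model to some data, sample "synthetic" data from the fit, refit on those samples, and repeat. With no fresh real-world data entering the loop, the fit drifts and typically narrows over generations.

    import numpy as np

    # Toy model-collapse loop (illustration only): each generation "trains"
    # (fits a Gaussian) on samples drawn from the previous generation's fit.
    # With no new real-world data, the estimated spread drifts and typically
    # shrinks, and the mean freezes wherever it happened to drift.
    rng = np.random.default_rng(0)

    data = rng.normal(loc=0.0, scale=1.0, size=30)   # the original "real world" sample
    for generation in range(61):
        mu, sigma = data.mean(), data.std()           # fit this generation's "model"
        if generation % 10 == 0:
            print(f"gen {generation:2d}: mean={mu:+.3f} std={sigma:.3f}")
        data = rng.normal(mu, sigma, size=30)         # next generation sees only model output

The real mechanism in LLMs is vastly messier, but the loop is the same shape: once model output dominates the training data, there is no distinct "real world" left to check against.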
As an example, take the judges who found made-up cases in filings because lawyers had used an LLM. What happens if those made-up cases get referenced in several other places, including some legal textbooks used in law schools? Don't they become part of the "real world"?
No, because there's still no case.
Law textbooks that taught an imaginary case would just get a lot of lawyers in trouble, because eventually someone will want to read the whole case and will try to pull the actual record, not just a reference. Court cases aren't susceptible to this because they're essentially a historical record. It's like the difference between a scan of the Declaration of Independence and a high school history book describing it. Only one of those could be bullshitted by an LLM.
The same applies to law schools. People reference back to cases all the time, and there's an opposing lawyer, after all, who'd love the slam-dunk win of "your honor, my opponent is actually full of shit and making everything up". Any lawyer trained on imaginary material as if it were real will just fail repeatedly.
LLMs can deceive lawyers who don't verify their work, but lawyers are in fact required to verify their work, and the ones who have been caught using LLMs were quite literally not doing their job. If that weren't the case, lawyers would just make up cases themselves; they don't need an LLM for that. It doesn't happen because it doesn't work.
It happens all the time, though. Made-up and false facts get accepted as truth with no verification.
So hard disagree.
My first thought was that it would make a cool sci-fi story where future generations lose all documented history except AI-generated slop, and factions go to war over whose history is correct, or over disagreements that were themselves made up.
And then I remembered all the real-life wars of religion...
Would watch...
LLMs are not going to be the future. The tech companies know it and are working on reasoning models that can look things up to fact-check themselves. These are slower, use more power, and are still a work in progress.
Look up stuff where? Some things are verifiable more or less directly: the Moon is not 80% made of cheese, adding glue to pizza is not healthy, the average human hand does not have seven fingers. A "reasoning" model might do better with those than current LLMs.
But for a lot of our knowledge, verifying means "I say X because here are two reputable sources that say X". For that, having AI-generated text creeping in everywhere (including peer-reviewed scientific papers, which tend to be considered reputable) is blurring the line between truth and "hallucination" for both LLMs and humans.
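To make that concrete, here's a toy sketch (my own illustration, not from the article) of the "two reputable sources" heuristic and why generated text quietly breaks it: the check only measures agreement among sources that look reputable, not whether any of them was written by someone who actually checked the fact.

    from dataclasses import dataclass

    @dataclass
    class Source:
        name: str
        reputable: bool        # e.g. peer-reviewed venue, established publisher
        supports_claim: bool   # does the text assert the claim?
        ai_generated: bool     # unknown in practice, which is the whole problem

    def looks_verified(sources: list[Source]) -> bool:
        # The usual heuristic: at least two reputable sources back the claim.
        return sum(s.reputable and s.supports_claim for s in sources) >= 2

    sources = [
        Source("Journal A",  reputable=True,  supports_claim=True,  ai_generated=True),
        Source("Textbook B", reputable=True,  supports_claim=True,  ai_generated=True),
        Source("Blog C",     reputable=False, supports_claim=False, ai_generated=False),
    ]

    # Passes the check even though both "reputable" supports are generated slop.
    print(looks_verified(sources))  # True

Reputation plus agreement used to be a decent proxy for truth; once generated text sits inside the reputable pile, agreement gets cheap.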