Too Much Trust in AI Poses Unexpected Threats to the Scientific Process
GlitterInfection @GlitterInfection@lemmy.world
The part you're calling "a hell of a stretch" is actually the reason LLMs work. An LLM isn't a good text parser; it's a great pattern matcher, and it matches patterns that aren't obvious or intuitive.
Many of the listed uses are actually a great fit for this type of tech.
In theory, given the sheer amount of data these models are trained on, they should have picked up patterns that make them usable for psychological research. Replicating well-known studies in that area with the tech is a good way to test that theory.
Using it as a first-line simulation might not be a bad idea as long as it's followed up with a real study to validate the results; a rough sketch of what that could look like is below.
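To make that concrete, here's a minimal sketch of the first-line-simulation idea in Python: it treats an LLM as a pool of simulated participants for the gain-frame version of Tversky and Kahneman's (1981) "Asian disease" framing study and checks the simulated choice rate against the published human baseline (roughly 72% chose the sure option). The model name, prompt wording, and sample size here are my own assumptions, not anything prescribed; the point is just the shape of the workflow.

```python
# A minimal sketch: use an LLM as a "simulated participant" pool for a
# classic framing-effect study, then compare against the published human
# baseline. Model name, prompt wording, and sample size are assumptions.
from collections import Counter

from openai import OpenAI          # pip install openai
from scipy.stats import binomtest  # pip install scipy

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Gain-frame version of Tversky & Kahneman's (1981) "Asian disease" problem.
PROMPT = (
    "Imagine 600 people face a deadly disease. You must choose between:\n"
    "Program A: 200 people will be saved.\n"
    "Program B: a 1/3 chance all 600 are saved, 2/3 chance no one is saved.\n"
    "Answer with only the letter A or B."
)

N_PARTICIPANTS = 50  # small for illustration; a real run would need more

def simulate_one() -> str:
    """Ask the model to play one participant; temperature=1 keeps variation."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,
    )
    return resp.choices[0].message.content.strip().upper()[:1]

answers = Counter(simulate_one() for _ in range(N_PARTICIPANTS))
k = answers.get("A", 0)

# Published human result: ~72% chose the sure option (Program A) in the
# gain frame. Test whether the simulated pool deviates from that baseline.
result = binomtest(k, N_PARTICIPANTS, p=0.72)
print(f"Simulated: {k}/{N_PARTICIPANTS} chose A ({k / N_PARTICIPANTS:.0%})")
print(f"p-value vs. 72% human baseline: {result.pvalue:.3f}")
```

Even if the simulated numbers line up with the human data, that only tells you a real replication is worth running; it doesn't validate the model as a stand-in for human subjects.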
We just need to make sure that humans are checking the work properly because, as you say, it's not sentient, nor is it really capable of following a code of practice like the scientific method.
The real thing to fear is humans not doing their part out of greed, laziness, or malice.