A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit.
ApexHunter @lemmy.ml · 145 comments · joined 2 yr. ago
They're supposed to be good at transformation tasks: language translation, creating x in the style of y, replicating a pattern, and so on. LLMs are outstandingly good at language transformation tasks.
Using an LLM as a fact-generating chatbot is actually a misuse. But they were trained on such a large dataset, with so many parameters (GPT-3 has 175 billion), that they perform passably in that role... which is, at its core, filling in the call-and-response pattern of a conversation.
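To make the "pattern fill, not fact lookup" point concrete, here's a deliberately tiny toy (not a real LLM, just an illustrative bigram model on a made-up Q/A corpus): it learns which word tends to follow which, so it can fluently continue a question-and-answer pattern, but nothing ties the answer it emits to the question it was asked.

```python
import random

# Toy illustration (NOT a real LLM): a bigram model "trained" on a few
# question/answer exchanges. It completes the call-and-response pattern
# by picking a likely next word at each step. It has no notion of truth,
# only of which word tends to follow which.
random.seed(0)

corpus = (
    "Q: capital of france ? A: paris . "
    "Q: capital of spain ? A: madrid . "
    "Q: capital of italy ? A: rome . "
).split()

# Count word -> possible-next-word occurrences.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def complete(prompt, n=4):
    """Continue the prompt by sampling from the learned bigram pattern."""
    words = prompt.split()
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(complete("Q: capital of"))
```

The continuation always has the right *shape* (country, "?", "A:", capital), but the capital it picks is sampled independently of the country, so it can happily answer "spain" with "rome". That mismatch is the toy version of an LLM confidently filling the conversational pattern with something false.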
At a fundamental level, it will never generate factually correct answers 100% of the time. That it generates correct answers more than 50% of the time is actually quite a marvel.