Are humans really so predictable that algorithms can easily see through us, or does continuous use of algorithmic feeds make us predictable to them?
Fun fact: LLMs that strictly generate the most predictable output read as boring and vacuous to human readers, so developers add a bit of randomness, controlled by a parameter called “temperature”.
It’s that unpredictable element that makes LLMs seem humanlike—not the predictable part that’s just functioning as a carrier signal.
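Roughly, temperature just rescales the model's output probabilities before a token gets sampled. Here's a minimal Python sketch with made-up logits (not any real model's numbers) to show the effect:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, softmax them, then sample one token index."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = random.choices(range(len(probs)), weights=probs, k=1)[0]
    return idx, probs

logits = [2.0, 1.0, 0.5, -1.0]               # hypothetical scores for 4 candidate tokens
for t in (0.1, 1.0, 2.0):
    idx, probs = sample_with_temperature(logits, t)
    print(f"temperature={t}: probs={[round(p, 3) for p in probs]} -> picked token {idx}")
```

Run it a few times: at temperature 0.1 it almost always picks token 0, at 2.0 the picks scatter. That's the boring-vs-humanlike trade-off in miniature.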
The unpredictable element is also why they absolutely suck at being the reliable sources of accurate information that they are being advertised to be.
Yeah, humans are wrong a lot of the time, but AI that's being forced into everything should at least be more reliable than the average human.
That’s not it. Even without any added variability they would still be wrong all the time. The issue is inherent to LLMs; they don’t actually understand your questions or even their own responses. It’s just the most probable jumble of words that would follow the question.
Is it? Is random variance the source of all hallucinations? I don't think so; it's more the fact that they don't understand what they're generating, they're just picking the most statistically probable next token.
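To make that concrete, here's a toy "most probable next word" generator: just a bigram frequency table over a made-up mini-corpus, nothing like a real LLM's scale, but the same basic move of picking the likeliest continuation.

```python
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

# Count which word follows which -- pure frequency, no meaning involved.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_greedily(word, steps=6):
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]   # always take the likeliest next word
        out.append(word)
    return " ".join(out)

print(continue_greedily("the"))   # "the cat sat on the cat sat" -- fluent-ish, meaning-free
```

It produces fluent-looking word salad without any notion of what a cat is, which is kind of the point.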
I'm not saying I agree with AI being shoehorned into everything; I'm seeing it pushed into places it shouldn't be firsthand. But strictly speaking, things don't have to be more reliable if they're fast enough.
Quantum computers are inherently unreliable, but you can run the same calculation multiple times, discard the outliers or average the results, and it will still be faster than a classical computer.
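The repeat-and-aggregate trick in a generic Python sketch (the noisy_measure function is invented for illustration, not real quantum error mitigation):

```python
import random
from collections import Counter

TRUE_ANSWER = 42

def noisy_measure(error_rate=0.3):
    """A single unreliable run: right most of the time, garbage otherwise."""
    if random.random() < error_rate:
        return random.randint(0, 100)    # a wrong, essentially random result
    return TRUE_ANSWER

def aggregated_answer(shots=1000):
    """Repeat the unreliable measurement and take the most common outcome."""
    counts = Counter(noisy_measure() for _ in range(shots))
    return counts.most_common(1)[0][0]

print(noisy_measure())        # any single shot may be wrong
print(aggregated_answer())    # the majority vote is almost certainly 42
```

Any one run can lie to you; the aggregate almost never does, and if each run is cheap enough the whole thing still comes out ahead.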
Same thing as back when I was in grade school and teachers would tell us not to trust internet sources and to look everything up in a physical book / encyclopedia because a book is more reliable. Like, yes, it is, but it also takes me 100x as long to look it up, so ultimately starting at Wikipedia is going to get me to the right answer faster the vast majority of the time, even if it's not 100% accurate or reliable (this was nearer Wikipedia's original launch).
You just ruined the magic of ChatGPT for me lol. Fuck. I knew the illusion would break eventually but damn bro it's fuckin 6 in the morning.
i.e. their fundamental limitation is, ironically, why they're so easy to hype