I'm looking for an article showing that LLMs don't know how they work internally
theunknownmuncher @lemmy.world
It's true that LLMs aren't "aware" of the internal steps they take, so asking an LLM how it reasoned out an answer will just produce text that statistically sounds right given its training set, but to say something like "they can never reason" is provably false.
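To make that first point concrete, here's a minimal sketch (assuming the Hugging Face `transformers` library and the small `gpt2` model, both purely illustrative choices): the model's "explanation" of its answer comes out of the same next-token sampling loop as the answer itself, with no separate channel that reads out the computation that actually produced it.

```python
# Minimal sketch: the "explanation" is just more sampled text.
# Assumes `transformers` + the small "gpt2" model (illustrative only).
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def complete(prompt: str, max_new_tokens: int = 40) -> str:
    ids = tok(prompt, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=max_new_tokens,
                         do_sample=True, pad_token_id=tok.eos_token_id)
    return tok.decode(out[0], skip_special_tokens=True)

answer = complete("Q: What is 17 + 25?\nA:")
# Asking "how did you work that out?" just triggers more sampling; the reply
# is text that sounds plausible given the training data, not a trace of the
# activations that actually produced the answer.
explanation = complete(answer + "\nQ: How did you work that out?\nA:")
print(explanation)
```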
It's obvious that you have a bias and desperately want reality to confirm it, but there's been significant research and progress in tracing the internals of LLMs that shows logic, planning, and reasoning.
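And while the model can't report on its own internals, researchers can read them directly, which is what that interpretability work builds on. A rough sketch of that kind of access, again assuming `transformers` and `gpt2` purely for illustration:

```python
# Minimal sketch: per-layer hidden states and attention maps are directly
# inspectable from the outside. Assumes `transformers` + "gpt2" (illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**ids, output_hidden_states=True, output_attentions=True)

# One tensor per layer (plus the embedding layer): [batch, seq_len, hidden_dim]
print(len(out.hidden_states), out.hidden_states[-1].shape)
# Attention maps per layer: [batch, heads, seq_len, seq_len]
print(len(out.attentions), out.attentions[0].shape)
```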
EDIT: lol you can downvote me, but it doesn't change evidence-based research
Developing a AAA video game has a higher carbon footprint than training an LLM, and running inference uses significantly less power than playing that same video game.