I thought legit ones existed, but I guess the concept exists and just hasn't been paired with technology and scaled. Tech bros are more concerned with making a quick buck than providing a good service, so they'd rather build a shitty addictive service you have to pay for forever than an efficient one that actually achieves the goal.
The issue is just the added cost of multiple passes, so companies are trying to make it "all-in-one" instead.
Exactly, that's where the "too slow" part comes in. To get more robust behavior it needs multiple layers of meta-analysis, but that means way more text generation under the hood than a one-shot output needs.
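To make the token math concrete, here's a rough sketch of that kind of loop. The `generate` function is a made-up stand-in for whatever model API you'd actually call; the point is just how fast the call count grows:

```python
def generate(prompt: str) -> str:
    # Placeholder so the sketch runs; swap in a real model call here.
    return f"[model output for a {len(prompt)}-char prompt]"

def answer_with_review(question: str, review_rounds: int = 2) -> str:
    draft = generate(question)  # a one-shot answer costs 1 generation
    for _ in range(review_rounds):
        # Each "meta analysis" round costs two more full generations:
        # one to critique the draft, one to rewrite it.
        critique = generate(f"Find flaws in this answer:\n{draft}")
        draft = generate(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the draft to fix the critique."
        )
    return draft

# 1 call for one-shot vs 1 + 2*review_rounds calls here (5 by default),
# each re-reading an ever-longer prompt: more robust, much slower.
print(answer_with_review("Why is the sky blue?"))
```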
It doesn't understand; it has just absorbed enough text written by humans who do understand things that it can retrieve the right words from that prior human understanding and give coherent answers.
"A simulation of the world that it runs to do reasoning"? It doesn't simulate anything; it just takes a list of words and produces the next word in that list. When you're trying to solve a problem, do you just think "well, I saw these words, so this word comes next"? No, you imagine the problem and simulate it in both physical and abstract terms to come up with an answer.
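That "next word in the list" loop really is the entire inference mechanism. A toy sketch, with a fake `next_token` rule standing in for a real model's forward pass:

```python
def next_token(tokens: list[str]) -> str:
    # Toy rule so the sketch runs; a real model instead returns the
    # token it scores most probable given the sequence so far.
    return "the" if tokens[-1] != "the" else "end"

def complete(prompt: str, max_new: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(max_new):
        # No world model, no simulation step anywhere in the loop:
        # sequence in, one more token out, repeat.
        tokens.append(next_token(tokens))
    return " ".join(tokens)

print(complete("LLMs predict"))  # -> "LLMs predict the end the end the"
```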
The worrisome thing is that LLMs are being given control over more and more actions. With traditional programming, sure there are bugs and all, but at least they're consistent. The context may make a bug hard to track down, but at the end of the day the code is executed by the processor exactly as it was written. LLMs can just go haywire for impossible-to-diagnose reasons. Deploying them safely in utilities where they control external systems will require a lot of extra non-LLM safeguards, and I don't see those being added nearly enough, which is concerning.
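The kind of non-LLM safeguard I mean is dumb deterministic code sitting between the model and the real system. A sketch, with made-up action names and limits; the point is that plain code gets the final say and behaves the same way every single time:

```python
# Hypothetical allowlist: action name -> hard parameter bounds (or None).
ALLOWED = {
    "read_sensor": None,     # takes no parameter
    "set_point": (0, 100),   # hard numeric limits
}

def execute(action: str, value: float | None = None) -> None:
    # Deterministic guard: whatever the model asked for, these checks
    # run exactly as written, every time.
    if action not in ALLOWED:
        raise PermissionError(f"model requested disallowed action: {action}")
    bounds = ALLOWED[action]
    if bounds is not None:
        lo, hi = bounds
        if value is None or not lo <= value <= hi:
            raise ValueError(f"{action} value {value!r} outside [{lo}, {hi}]")
    print(f"OK: {action}({value})")  # stand-in for touching the real system

execute("read_sensor")  # passes the guard
try:
    execute("set_point", 250)  # rejected no matter how the model phrased it
except ValueError as err:
    print("blocked:", err)
```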
Why would anyone do free work for a corporation to profit off of?