AGI achieved 🤖
It's funny how people always quickly point out that an LLM wasn't made for this, and then continue to shill it for use cases it wasn't made for either (The "intelligence" part of AI, for starters)
LLM wasn’t made for this
There's a thought experiment that challenges the concept of cognition, called The Chinese Room. What it essentially postulates is a conversation between two people, one of whom is speaking Chinese and getting responses in Chinese. And the first speaker wonders "Does my conversation partner really understand what I'm saying or am I just getting elaborate stock answers from a big library of pre-defined replies?"
The LLM is literally a Chinese Room. And one way we can know this is through these interactions. The machine isn't analyzing the fundamental meaning of what I'm saying, it is simply mapping the words I've input onto a big catalog of responses and giving me a standard output. In this case, the problem the machine is running into is a legacy meme about people miscounting the number of "r"s in the word Strawberry. So "2" is the stock response it knows via the meme reference, even though a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately.
When you hear people complain about how the LLM "wasn't made for this", what they're really complaining about is their own shitty methodology. They build a glorified card catalog. A device that can only take inputs, feed them through a massive library of responses, and sift out the highest probability answer without actually knowing what the inputs or outputs signify cognitively.
Even if you want to argue that having a natural language search engine is useful (damn, wish we had a tool that did exactly this back in August of 1996, amirite?), the implementation of the current iteration of these tools is dogshit because the developers did a dogshit job of sanitizing and rationalizing their library of data. Also, incidentally, why DeepSeek was running laps around OpenAI and Gemini as of last year.
Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you back a stack of history textbooks, a stack of Sci-Fi screenplays, a stack of regional newspapers, and a stack of Iron-Man comic books, all given equal weight. Imagine hearing the plot of The Terminator and Escape from LA intercut with local elections and the Loma Prieta earthquake.
That's modern LLMs in a nutshell.
You've missed something about the Chinese Room. The solution to the Chinese Room riddle is that it is not the person in the room but rather the room itself that is communicating with you. The fact that there's a person there is irrelevant, and they could be replaced with a speaker or computer terminal.
Put differently, it's not an indictment of LLMs that they are merely Chinese Rooms, but rather one should be impressed that the Chinese Room is so capable despite being a completely deterministic machine.
If one day we discover that the human brain works on much simpler principles than we once thought, would that make humans any less valuable? It should be deeply troubling to us that LLMs can do so much while the mathematics behind them are so simple. Arguments that because LLMs are just scaled-up autocomplete they surely can't be very good at anything are not comforting to me at all.
That's a very long answer to my snarky little comment :) I appreciate it though. Personally, I find LLMs interesting and I've spent quite a while playing with them. But after all they are like you described, an interconnected catalogue of random stuff, with some hallucinations to fill the gaps. They are NOT a reliable source of information or general knowledge or even safe to use as an "assistant". The marketing of LLMs as being fit for such purposes is the problem. Humans tend to turn off their brains and to blindly trust technology, and the tech companies are encouraging them to do so by making false promises.
You might just love Blindsight. Here, they're trying to decide if an alien life form is sentient or a Chinese Room:
"Tell me more about your cousins," Rorschach sent.
"Our cousins lie about the family tree," Sascha replied, "with nieces and nephews and Neandertals. We do not like annoying cousins."
"We'd like to know about this tree."
Sascha muted the channel and gave us a look that said Could it be any more obvious? "It couldn't have parsed that. There were three linguistic ambiguities in there. It just ignored them."
"Well, it asked for clarification," Bates pointed out.
"It asked a follow-up question. Different thing entirely."
Bates was still out of the loop. Szpindel was starting to get it, though...
a much simpler and dumber machine that was designed to handle this basic input question could have come up with the answer faster and more accurately
The human approach could be to write a (python) program to count the number of characters precisely.
When people refer to agents, is this what they are supposed to be doing? Is it done in a generic fashion or will it fall over with complexity?
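For what it's worth, the deterministic version really is trivial; here's a minimal sketch in Python (the count_letter helper is just illustrative, plain built-in string counting, no model involved):

```python
# Minimal sketch: the "simpler and dumber machine" for this question.
# Plain string counting, no model and no probabilities involved.

def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a word."""
    return word.lower().count(letter.lower())

if __name__ == "__main__":
    print(count_letter("strawberry", "r"))   # 3
    print(count_letter("Mississippi", "s"))  # 4
```

As for agents: the rough idea, as usually described, is that the model delegates tasks like this to a tool (code execution, a calculator, a search call) instead of guessing the answer itself. How reliably any given product actually does that, and whether it holds up as tasks get more complex, is exactly the open question.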
Yes but have you considered that it agreed with me so now i need to defend it to the death against you horrible apes, no matter the allegation or terrain?
Can we say for certain that human brains aren’t sophisticated Chinese rooms…
(damn, wish we had a tool that did exactly this back in August of 1996, amirite?)
Wait, what was going on in August of '96?
Imagine asking a librarian "What was happening in Los Angeles in the Summer of 1989?" and that person fetching you ... That's modern LLMs in a nutshell.
I agree, but I think you're still being too generous to LLMs. A librarian who fetched all those things would at least understand the question. An LLM is just trying to generate words that might logically follow the words you used.
IMO, one of the key ideas with the Chinese Room is that there's an assumption that the computer / book in the Chinese Room experiment has infinite capacity in some way. So, no matter what symbols are passed to it, it can come up with an appropriate response. But, obviously, while LLMs are incredibly huge, they can never be infinite. As a result, they can often be "fooled" when they're given input that is semantically similar to a meme, joke or logic puzzle. The vast majority of the training data that matches the input is the meme, or joke, or logic puzzle. LLMs can't reason, so they can't distinguish between "this is just a rephrasing of that meme" and "this is similar to that meme but distinct in an important way".
It's marketed like it's AGI, so we should treat it like AGI to show that it isn't AGI. Lots of people buy the bullshit.
Maybe they should call it what it is
Machine Learning algorithms from 1990 repackaged and sold to us by marketing teams.
Hey now, that's unfair and queerphobic.
These models are from 1950, with juiced-up data sets. Alan Turing personally did a lot of work on them, before he cracked the math and figured out they were shit and would always be shit.
Machine learning algorithm from 2017, scaled up a few orders of magnitude so that it finally more or less works, then repackaged and sold by marketing teams.
I would say more "blackpilling", I genuinely don't believe most humans are people anymore after dealing with this.
Fair point, but a big part of "intelligence" tasks are memorization.
Biggest threat to humanity
I know there’s no logic, but it’s funny to imagine it’s because it’s pronounced Mrs. Sippy
It is going to be funny seeing these implementations of LLMs in accounting software.
Worked well for me
I understand it's probably more user friendly, but I still somehow find myself disappointed the answers weren't indexed from zero. Was this LLM written in MATLAB?
Most users aren't used to zero index so they would most likely think there was a problem with it haha
One of the interesting things I notice about the 'reasoning' models is their responses to questions occasionally include what my monkey brain perceives as 'sass'.
I wonder sometimes if they recognise the triviality of some of the prompts they answer, and subtly throw shade.
One of them is going to respond to this with 'clever monkey! 🐒 Have a banana 🍌.'
Nice Rs.
Is this ChatGPT o3-pro?
ChatGPT 4o
I really like checking these myself to make sure it’s true. I WAS NOT DISAPPOINTED!
(Total Rs is 8. But the LOGIC ChatGPT pulls out is ... remarkable!)
Try with o4-mini-high. It's made to think like a human by checking its answer and working step by step, rather than just kinda guessing one like here.
What is this devilry?
I asked it how many Ts are in the names of presidents since 2000. It said 4 and stated that "Obama" contains 1 T.
Toebama
How many times do I have to spell it out for you chargpt? S-T-R-A-R-W-B-E-R-R-Y-R
We gotta raise the bar, so they keep struggling to make it “better”
Tested on ChatGPT o4-mini-high
It sent me this
0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 0 0 1 1 1 0 0 0 0 0 0 0 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0
I asked it to remove the spaces
0001111100000000 0011111111000000 0011111110000000 0111111111100000 0111111111110000 0011111111100000 0001111111000000 0011111100000000 0111111111100000 1111111111110000 1111111111110000 1111111111110000 1111111111110000 0011100111000000 0111000011100000 1111000011110000
I guess I just murdered a bunch of trees and killed a random dude with the water it used, but it looks good
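For anyone curious what that grid actually encodes, here's a quick throwaway sketch that renders it as ASCII art (the rows are copied verbatim from the reply above, nothing else assumed):

```python
# Throwaway sketch: print the model's 0/1 grid as ASCII art.
# Rows copied verbatim from the reply above; '1' -> '#', '0' -> space.

rows = (
    "0001111100000000 0011111111000000 0011111110000000 0111111111100000 "
    "0111111111110000 0011111111100000 0001111111000000 0011111100000000 "
    "0111111111100000 1111111111110000 1111111111110000 1111111111110000 "
    "1111111111110000 0011100111000000 0111000011100000 1111000011110000"
).split()

for row in rows:
    print("".join("#" if bit == "1" else " " for bit in row))
```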
It's all about weamwork 🤝
weamwork is my new favorite word, ahahah!
teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the teamwork makes the
AI is amazing, we're so fucked.
/s
Unironically, we are fucked when management think AI can replace us. Not when AI can actually replace us.
Deep reasoning is not needed to count to 3.
It is if you're creating ragebait.
Honey, AI just did something new. It's time to move the goalposts again.
When we see LLMs struggling to demonstrate an understanding of what letters are in each of the tokens that it emits or understand a word when there are spaces between each letter, we should compare it to a human struggling to understand a word written in IPA format (/sʌtʃ əz ðɪs/) even though we can understand the word spoken aloud normally perfectly fine.
But if you've learned IPA you can read it just fine
I know IPA but I can't read English text written in pure IPA as fast as I can read English text written normally. I think this is the case for almost anyone who has learned the IPA and knows English.
"A guy instead"
Reality:
The AI was trained to answer 3 to this question correctly.
Wait until the AI gets burned on a different question. Skeptics will rightfully use it to criticize LLMs for just being stochastic parrots, until LLM developers teach their models to answer it correctly, and then the AI bros will use it as proof of it becoming "more and more human-like".
No but see they're not skeptics, they're just haters, and there is no valid criticism of this tech. Sorry.
And also you've just been banned from like twenty places for being A FANATIC "anti-AI shill". Genuinely check the mod log, these fuckers are cultists.
We are fecking doomed!
Maybe OP was low on the priority list for computing power? Idk how this stuff works
Singularity is here
o3-pro? Damn, that's an expensive goof
People who think that LLMs having trouble with these questions is evidence one way or another about how good or bad LLMs are just don't understand tokenization. This is not a symptom of some big-picture deep problem with LLMs; it's a curious artifact, like compression artifacts in a JPEG image, and it doesn't really matter for the vast majority of applications.
You may hate AI but that doesn't excuse being ignorant about how it works.
These sorts of artifacts wouldn't be a huge issue except that AI is being pushed to the general public as an alternative means of learning basic information. The meme example is obvious to someone with a strong understanding of English, but learners and children might get an artifact and stamp it in their memory, working for years off bad information. Not a problem for a few false things every now and then, that's unavoidable in learning. Thousands accumulated over long-term use, however, and your understanding of the world will be coarser, like Swiss cheese with voids so large it can't hold itself up.
You're talking about hallucinations. That's different from tokenization reflection errors. I'm specifically talking about its inability to know how many of a certain type of letter are in a word that it can spell correctly. This is not a hallucination per se -- at least, it's a completely different mechanism that causes it than whatever causes other factual errors. This specific problem is due to tokenization, and that's why I say it has little bearing on other shortcomings of LLMs.
Also, just checked, and every OpenAI model bigger than 4.1-mini can answer this. I think the joke should emphasize how we developed a super power-inefficient way to solve problems that can be accurately and efficiently answered with a single algorithm. Another example is using ChatGPT to do simple calculator math. LLMs are good at specific tasks and really bad at others, but people kinda throw everything at them.
And yet they can seemingly spell and count (small numbers) just fine.
What do you mean by "spell fine"? They're just emitting the tokens for the words. Like, it's not writing "strawberry," it's writing tokens <302, 1618, 19772>, which correspond to st, raw, and berry respectively. If you ask it to put a space between each letter, that will disrupt the tokenization mechanism, and it's going to be quite liable to making mistakes.
I don't think it's really fair to say that the lookup 19772 -> berry counts as the LLM being able to spell, since the LLM isn't operating at that layer. It doesn't really emit letters directly. I would argue its inability to reliably spell words when you force it to go letter-by-letter or answer queries about how words are spelled is indicative of its poor ability to spell.
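If anyone wants to poke at the tokenization directly, here's a quick sketch using OpenAI's tiktoken library (assuming it's installed; the exact split depends on the encoding, so it may not match the st/raw/berry split quoted above):

```python
# Sketch: show how subword tokenization hides individual letters from the model.
# Requires the tiktoken package; cl100k_base is one of OpenAI's encodings.

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["strawberry", "s t r a w b e r r y"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {len(ids)} tokens: {pieces}")

# The plain word comes out as a few subword chunks, while the spaced-out
# version lands at roughly one token per letter: the model never "sees"
# characters unless the input forces the tokenizer to split that finely.
```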
The problem is that it's not actually counting anything. It's simply looking for some text somewhere in its database that relates to that word and the number of R's in that word. There's no mechanism within the LLM to actually count things. It is not designed with that function. This is not general AI; it's a generative language model that's using its vast store of text to put words together that sound like they answer the question that was asked.
I get the meme aspect of this. But just to be clear, it was never fair to judge LLMs for specifically this. The LLM doesn't even see the letters in the words, as every word is broken down into tokens, which are numbers. I suppose with a big enough corpus of data it might eventually extrapolate which words have which letter from texts describing these words, but normally it shouldn't be expected.
True, and I agree with you, yet we are being told all jobs are going to disappear, AGI is coming tomorrow, etc. As usual, the truth is more balanced.
I've actually messed with this a bit. The problem is more that it can't count to begin with. If you ask it to spell out each letter individually (i.e. each letter will be its own token), it still gets the count wrong.
I know that words are tokenized in the vanilla transformer. But do GPT and similar LLMs still do that as well? I assumed they also tokenize on character/symbol level, possibly mixed up with additional abstraction down the chain.
I don't know what part of what I said prompted all those downvotes, but of course all the reasonable people understood that "AGI in 2 years" was a stock-price pump.
Next step: how many r's in Lollapalooza?
Incredible
AGI lost
Apparently, this robot is Japanese.
I'm going to hell for laughing at that
Obligatory 'lore dump' on the word lollapalooza:
That word was a common slang term in the 1930s/40s American lingo that meant... essentially a very raucous, lively party.
So... in WW2, in the Pacific theatre... many US Marines were often engaged in brutal, jungle combat, often at night, and they adopted a system of basically verbal identification challenge checks if they noticed someone creeping up on their foxholes at night.
An example of this system used in the European theatre, I believe by the 101st and 82nd Airborne, was the challenge 'Thunder!', to which the correct response was 'Flash!'.
In the Pacific theatre... the Marines adopted a challenge / response system... where the correct response was 'Lollapalooza'...
Because native-born Japanese speakers are taught a phoneme that is roughly in between an 'r' and an 'l'... and they very often struggle to say 'Lollapalooza' without a very noticeable accent, unless they've also spent a good deal of time learning spoken English (or some other language with distinct 'l' and 'r' phonemes), which very few Japanese did in the 1940s.
::: spoiler racist and nsfw historical example of / evidence for this
https://www.ep.tc/howtospotajap/howto06.html
:::
Now, some people will say this is a total myth, others will say it is not.
My Grandpa, who served in the Pacific Theatre during WW2, told me it did happen, though he was Navy and not a Marine... but the other stories I've heard that say it did happen all say it happened with the Marines.
My Grandpa is also another source for what 'lollapalooza' actually means.
https://en.wikipedia.org/wiki/Shibboleth
I’ve heard “squirrel” was used to trap Germans.
I'm still puzzled by what a mess this war was if at times you had someone not clearly identifiable, yet close enough that you could do a shibboleth check on them, while at any moment either of you could be shot dead.
Also, the current Russia vs Ukraine conflict seems to have adopted the Ukrainian 'паляниця' (palianytsia) as a check, but as I have no connection to actual Ukrainians or their UAF, I can't say whether that's entirely localized to the internet.
It does make sense to use a phoneme the enemy dialect lacks as a verbal check. Makes me wonder if there were any in the Pacific Theatre that settled on 'Lick' and 'Lollipop'.
Thanks for sharing
Try it with o3, maybe it needs time to think 😝
Which model is it? I had a similar answer with 3.5, but 4o replies correctly.
With Reasoning (this is Qwen on HuggingChat; it says there are zero)