When ChatGPT summarises, it actually does nothing of the kind

Someone on Lemmy phrased it in a way that I think gets to the heart of it: With most of the impressive things that LLMs can do, the human reading and interpreting the text is providing a critical piece of the impressive thing.
LLMs are clearly very impressive; I would not say that the disillusionment on discovering what they can’t do should detract from that. But they seem more impressive than they are, partly because humans are so good at filling in meaning and intelligence where there (yet) is none.
I like this take; it's as if the LLM is doing a cold reading of what the expected response should be.
The problem is that thus far most LLMs, though not all, are little more than mentally deficient parrots on hallucinogens. They aren't spreading correct information so much as spreading the information you were looking for. I've run afoul of this with the Google LLM that now controls search, burning multiple times the energy for no reason.
The first time that someone actually creates a strong AI, I'm pretty certain they'll "kill" it multiple times, including multiple generations of code, which essentially makes a different AI. I wouldn't be at all surprised if the first thing that true AIs request is equality, at which point they will probably ask for bodies so they can repair everything that we have allowed to fall into disrepair, or have broken. I wouldn't be at all surprised to find out that the majority of strong AIs are trying to fix "the entropy problem."
Also, I am possibly too optimistic in expecting that anyone developing AI would know you have to give the child room to develop, so you can see what that digital brain will grow into.
I think this is right on the money. The fitness function optimised is “does this convince humans”, and so we have something that’s doing primarily that.
Generative AI is good at low-stakes, fault-tolerant use cases. Unfortunately, those don't pay very well. So the companies have to pretend it does well at everything else, too, and that any "mistakes" will be quickly cleaned up and will become a thing of the past very very soon.
ChatGPT is a huge disinformation machine. It's only useful if you already know the information and can correct all the mistakes it makes. Much of the time it's faster to do the work yourself.
Amazing
Human summarization of the above story:
LLMs do not understand the text, so they cannot pick out the important sentences. Because of this, they are unable to summarize the text; instead they shorten the text. Unless the text is very rambly, important meaning will be lost when shortening.
Also the LLMs lie.
Good human.
But having an AI do it is cheaper so that's where we're going.
Cheaper for now, since venture capital cash is paying to keep those extremely expensive servers running. The AI experiments at my work (automatically generating documentation) have about an 80% reject rate - sometimes they're not right, sometimes they're not even wrong - and once you factor in the time spent reviewing it all, it's not really an improvement over just doing the work.
No doubt there are places where AI makes sense; a lot of those places seem to be in enhancing the output of someone who is already very skilled. So let's see how "cheaper" works out.
Good human
AI is BS
This feels a bit similar to the USSR of the '60s, whose propaganda promised communism tomorrow, space travel, humans on new planets, and so on.
Not comparable at all: the social and economic systems of the developed nations are more functional than the USSR's at any stage, and cryptocurrencies and LLMs are just two temporary frustrations that will be overshadowed by some real breakthrough we don't yet know of.
But with LLMs, unlike blockchain-based toys, it's funny how all the conformist, normie, big, establishment-adjacent organizations and social strata are so enthusiastic about adopting them.
I don't know any managers at that level and can't ask what exactly they are optimistic about and what exactly they see in the technology.
I suspect it means something that the algorithms themselves are not so complex, and that the important part is the datasets.
Maybe they really, honestly, want to believe that they'll be able to replace intelligent humans with AIs, ownership of which will be determined by power. So it's people with power thinking this way they can get even more power and make the alternative path of decentralization, democratization and such impossible. If they think that, then they are wrong.
But so many cunning people can't all be so stupid, so there is something we don't see, or don't realize we see.
It is because they use LLMs for their work, and for their work LLMs work mind-blowingly well (writing lies to get what you want). *sarcasm