I've heard that before, but isn't this easily countered by the fact that people who listen to the same song over and over again exist?
I can listen to Ado's music over and over, and it gets better every time. By that point there's familiarity and predictability, since I know the piece rather well.
I understand some of the hype. LLMs are pretty amazing nowadays (though closedai is unethical af so don't use them).
I need to write complex cryptography code for university. Claude Sonnet 3.5 solves some of the challenges instantly.
And it's not trivial stuff, but things like "how do I divide polynomials, where each coefficient of the polynomial is an element of GF(2^128)?" Given the context (my source code), it adds the code seamlessly, writes unit tests, and it just works. (That's important for AES-GCM, the thing TLS relies on most of the time.)
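To give a feel for what that task involves, here is a minimal sketch of polynomial division where the coefficients live in GF(2^128). Field elements are plain Python ints interpreted as polynomials over GF(2), reduced by x^128 + x^7 + x^2 + x + 1 (the modulus GCM uses; GCM's bit-reflected byte convention is ignored here for simplicity). All names and the list-of-coefficients representation are my own illustration, not the commenter's actual code.

```python
# Reduction polynomial for GF(2^128): x^128 + x^7 + x^2 + x + 1.
# We store only the low 128 bits, since the x^128 term is handled implicitly.
R = (1 << 7) | (1 << 2) | (1 << 1) | 1

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiply of two field elements, reduced on the fly."""
    result = 0
    while b:
        if b & 1:
            result ^= a       # addition in GF(2^128) is XOR
        b >>= 1
        a <<= 1
        if a >> 128:          # degree hit 128: subtract the modulus
            a = (a & ((1 << 128) - 1)) ^ R
    return result

def gf_inv(a: int) -> int:
    """Inverse via Fermat's little theorem: a^(2^128 - 2)."""
    result, exp = 1, (1 << 128) - 2
    while exp:
        if exp & 1:
            result = gf_mul(result, a)
        a = gf_mul(a, a)
        exp >>= 1
    return result

def poly_divmod(num: list, den: list):
    """Long division of polynomials over GF(2^128).

    Polynomials are lists of coefficients, lowest degree first.
    Returns (quotient, remainder)."""
    num = list(num)  # work on a copy; it becomes the running remainder
    quot = [0] * (len(num) - len(den) + 1)
    inv_lead = gf_inv(den[-1])
    for i in range(len(quot) - 1, -1, -1):
        # Scale so the leading term of den cancels the current leading term.
        coeff = gf_mul(num[i + len(den) - 1], inv_lead)
        quot[i] = coeff
        for j, d in enumerate(den):
            num[i + j] ^= gf_mul(coeff, d)  # subtraction is XOR in char 2
    return quot, num[:len(den) - 1]
```

As a sanity check, x^2 + 1 divided by x + 1 gives x + 1 with zero remainder, since (x + 1)^2 = x^2 + 1 in characteristic 2. This schoolbook version is fine for coursework; real TLS stacks use constant-time or hardware (PCLMULQDQ) implementations instead.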
Besides that, LLMs are good at what I call moving words around: writing cute little short stories in fictional worlds given some background material, checking spelling, reformulating a message into something very diplomatic and nice, and so on.
On the other hand, shoehorning LLMs into things is often complete BS, done because "AI cool word, line go up".
This is a call for world federalism. I support it, but it's probably ahead of its time. A democratic world federation (truly united nations of Earth, perhaps) would be able to solve many global problems more effectively.