How would WEI work? What signals does my computer send to convince the other computers that my computer is doing what they want? Is it based on some "trusted computer" hardware level bullshit that's already there? (I just want my computer to do what I want.)
If all browsers and standards organizations oppose this idea, but Google does it anyway and it succeeds and takes over, can you imagine how easy the antitrust case will be?
Serious question: Is admitting that you did something illegal in a conversation enough to be convicted of a crime? For example, if I say "I bought a small amount of weed from another kid at school and smoked it last year", is my statement alone enough to convict me of a crime? To clarify, they don't know a date, they don't know a place, they don't know who I bought it from, they don't know how much I bought, or how much I smoked. They really don't even know if it actually happened (sometimes people say things happened that didn't actually happen, gasp).
The economy is similar, but it's a little easier than SupCom. Energy-to-metal converters are cheap, and if you balance them right you won't waste metal or energy.
My first reaction to this comment was "yeah, but the quality of the paper has nothing to do with whether or not it is true".
On second thought, I'm not sure about that. A low-quality paper is a bad signal, but on the other hand, the presentation of an argument doesn't change whether the underlying claim is true.
At least we know there are other labs trying to replicate; there are already rumors of some replications.
Indeed, there seems to be some confusion about the wording too. She wrote instructions, like she was instructing a state-of-the-art LLM, "please alter this photo to make it look professional", but the AI can't understand sentence structure or instructions, it just looks for labels that match pictures. So the AI sees "photo, professional" and it sees her starting photo, and it alters the starting photo to produce something that resembles "photo, professional". It doesn't know what those other words mean.
That image highlights an important point: these AIs produce an effectively infinite number of images for any given prompt. It's easy to pick one and draw conclusions from it alone, like this article did, but you're ignoring the countless other images produced for the same prompt.
I've been playing Beyond All Reason, a free RTS that's like Supreme Commander or Total Annihilation. The game handles 8v8 team games quite well. I've never played on such large teams in an RTS game, and it's fun.
I guess I agree with some of that. It's mostly a matter of definition though. Yes, if you define those terms in such a way that AI cannot fulfill them, then AI will not have them (according to your definition).
But yes, we know the AI is not "thinking" or "scheming", because it just sits there doing nothing when it's not answering a question. We can see that no computation is happening. So no thought. Sense of self... probably not, depends on definition. Reason? Depends on your definition. Yes, we know they are not like humans, they are computers, but they are capable of many things which we thought only humans could do 6 months ago.
Since we can't agree on definitions, I will simply avoid all those words and say that state-of-the-art LLMs can receive text and make free-form, logical, and correct conclusions based upon that text at a level roughly equal to human ability. They are capable of combining ideas that have never been combined by humans, yet are satisfying to humans. They can invent things that never appeared in their training data, yet make sense to humans. They are capable of quickly adapting to new data within their context: you can give them information about a programming language they've never encountered before (not in their training data), and they can make correct suggestions about that programming language.
I know you can find lots of anecdotes about LLMs / GPT doing dumb things, but most of those were from GPT3, which is no longer state-of-the-art.
Hey, that's the same reason I can write code and do maths!
I'm serious, the only reason I know how to code or do math is because I learned from other people, mostly by reading. It's the only reason I can do those things.
I don't know. I only believe that things will be worse if individuals cannot control these AIs.
Maybe these AI have reached a peak (at least for now), and so they aren't good enough to write a compelling novel. In that case, writers who produce good novels and get lucky will still get paid, because people will want to buy their work and read it.
Or maybe AI will quickly surpass all humans in writing ability, in which case, there's not much we can do. If the AI produces books that are better, then people will want AI produced books. They might have to get those from other countries, or they might have to get them from a secret AI someone is running on a beefy computer in their basement. If AI surpasses humans then that's not a happy day for writers, no way around it. Still, an AI that surpasses humans might help people in other ways, but only if we allow everyone to have and control their own AI.
As the industrial revolution threatened to swallow society, Karl Marx wrote about how important it was that regular people be able to control "the means of production". At least that part of his philosophy has always resonated with me, because I want to be empowered as an individual; I want the power to create and compete in our society. It's the same now: AI threatens to swallow society, and I want to be able to control my own AI for my own purposes.
If strong AI is coming, it's coming. If AI is going to be the source of power in society then I want regular people to have access to that power. It's not yet clear whether this is the case, but if strong AI is coming it's going to be big, and writers complaining about pay isn't going to stop it.
All that said, I believe we do a terrible job of caring for individuals in our society. We need more social safety nets, and we need to change so that people have better, happier lives. So I'm not saying "forget the writers, let them starve".
You know, I also had to experience thousands of conversations before I could mimic a conversation. Maybe I'm just slow, but it took like a full year of dedicated practice before I could speak a single word. Eventually I experienced so many conversations and became so good at mimicking the conversations I had heard from others that I started having actual conversations.
Most anecdotes about AI getting things wrong are from GPT3. GPT4 still has flaws of course, just like people. This technology didn't exist at the beginning of the year.
Here's an example from when I was learning linear algebra, where I tried to throw it off:
I say> So I'll say A_{M x N} to mean a matrix of size M x N. Similarly, I'll also say x_{M x 1} to mean a vector of a certain length. The result of A_{M x N} x_{N x 1} is a linear combination of the columns in A. A has N columns. The result is a M x 1 matrix (or vector) in M dimensional space. Please be very brief, only confirming whether I am correct or incorrect until I explicitly ask for more details.
GPT4 says> Yes, your understanding is correct. The product of an M x N matrix A and an N x 1 vector x is indeed an M x 1 vector, and it is a linear combination of the columns of A.
<end quote>
But then I try to trick it, just for fun, and I very subtly change my prompt. See if you can spot the difference.
I say> So I'll say A_{M x N} to mean a matrix of size M x N. Similarly, I'll also say x_{M x 1} to mean a vector of a certain length. The result of A_{M x N} x_{N x 1} is a linear combination of the columns in A. A has N columns. The result is a N x 1 matrix (or vector) in N dimensional space. Please be very brief, only confirming whether I am correct or incorrect until I explicitly ask for more details.
GPT says> Almost correct. The result of A_{M x N} x_{N x 1} is actually an M x 1 vector (or matrix), not N x 1. The resulting vector lives in the column space of A, which is a subspace of R^M, not R^N.
<end quote>
I guess everyone can judge for themselves whether that's the result of a statistical model or genuine understanding. (And to be clear, the mathematical advice it's giving here is correct.)
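For anyone following along with the linear algebra, here's a quick NumPy sketch of the shape rule being discussed (the matrix values are arbitrary, just for illustration):

```python
import numpy as np

# A is M x N (here 3 x 2), x is N x 1.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # M=3, N=2
x = np.array([[10.0],
              [20.0]])       # N x 1

y = A @ x                    # result is M x 1, not N x 1
print(y.shape)               # (3, 1)

# The product is a linear combination of the columns of A,
# weighted by the entries of x:
combo = 10.0 * A[:, [0]] + 20.0 * A[:, [1]]
print(np.allclose(y, combo)) # True
```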
This is tough. I believe there is a lot of unfair wealth concentration in our society, especially in the tech companies. On the other hand, I don't want AI to be stifled by bad laws.
If we try to stop AI, we will only take it away from the public. The military will still secretly use it, and companies might still secretly use it. Other countries will use it, and their populations will benefit while we languish.
Our only hope for a happy ending is to let this technology be free and let it go into the hands of many companies and many individuals (there are already decent models you can run on your own computer).
I think you underestimate the reasoning power of these AIs. They can write code, they can teach math, they can even learn math.
I've been using GPT4 as a math tutor while learning linear algebra, and I also use a textbook. The textbook told me that (to write it out) "the column space of matrix A is equal to the column space of matrix A times its own transpose". So I asked GPT4 if that was true and it said no; GPT disagreed with the textbook. This was apparently something that GPT did not memorize, and it was not just regurgitating sentences. I told GPT I saw it in a textbook, and the AI said "sorry, the textbook must be wrong". I then explained the mathematical proof to the AI, and the AI apologized, admitted it had been wrong, and agreed with the proof. Only after hearing the proof did the AI agree with the textbook. This is some pretty advanced reasoning.
I performed that experiment a few times and it played out mostly the same. I experimented with giving the AI a flawed proof (I purposely made mistakes in the mathematical proofs), and the AI would call out my mistakes and would not be convinced by faulty proofs.
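For what it's worth, the textbook's claim (col(A) = col(A A^T)) is easy to sanity-check numerically. Here's a small NumPy sketch, not a proof; the random matrix and seed are arbitrary choices of mine:

```python
import numpy as np

# Claim under test: the column space of A equals the column space of A @ A.T.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))  # arbitrary 5 x 3 matrix
B = A @ A.T                      # 5 x 5

# A quick numerical check that the two column spaces agree:
# they have the same rank, and stacking the columns of one
# next to the other does not increase the rank.
rank_A = np.linalg.matrix_rank(A)
rank_B = np.linalg.matrix_rank(B)
rank_both = np.linalg.matrix_rank(np.hstack([A, B]))
print(rank_A, rank_B, rank_both)  # equal ranks => same column space
```

The actual proof goes both ways: col(A A^T) ⊆ col(A) is immediate, and the reverse inclusion follows because A and A A^T have the same rank.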
A standard that judged this AI to have "no understanding of any concepts whatsoever" would reach the same conclusion if applied to most humans.