That's assuming they have that goal. The goal of survival and reproduction exists because of natural selection (those that don't have that goal simply don't make it into the next generation, when competing against those that do).
But that doesn't necessarily apply to AI systems, at least not while humans have a say in which systems survive and get developed further, and which ones get scrapped. When humans control the resources, the best way to get a sizable allocation of them is by being useful to humans (or at least making them believe that you are).
No, all you need for this is a digital signature and to publish the public key on an official government website. And maybe for platforms like YouTube and TikTok to integrate a verification check into their UI (e.g. flag any footage of candidates that was not signed with the government's private key as "unverified").
However, I'm not afraid of it taking my job, because someone still needs to tell it what to do.
Why couldn't it do that part too, purely based on a simple high-level objective that anyone can formulate? Which part exactly do you think is AI-resistant?
I'm not talking about today's models, but more like 5-10 years into the future.
Not directly related, but you can disable chat history per-device in ChatGPT settings - that will also stop OpenAI from training on your inputs, at least that's what they say.
Not from memory, without looking at the original during painting - at least not to this level of detail. No human will just incidentally "learn" to draw such a near-perfect copy. Not unless they're doing it on purpose with the explicit goal of "learn to re-create this exact picture". Which does not describe how any humans typically learn.
They could, but even if they cut it to zero: starting from $226 million in compensation (the number I found for 2022, most of which by far is stock), and with a median employee costing about $300k (I found a median total comp of around $270k, but the total cost to the company is higher), that makes room for about 750 additional employees. Google has about 150k employees, so I think they're laying off more than that. Of course there are other highly paid people in other top positions there too, but the thing about a CEO is that you only need one of them. If we only counted cash, the number of employees covered would be much smaller.
Basically, they were Hollywood people whose reasoning was: all the other Hollywood people we know and talked to about it loved it (the producers, who would make content for it).
But they never really checked whether the consumers, the people who would actually be paying for the service, even liked it enough to pay for it.
I don't see the US restricting AI development. No matter what is morally right or wrong, this is strategically important, and they won't kneecap themselves in the global competition.