We don't even know how they arrive at the output they arrive at, and it takes lengthy research just to find out how, say, an LLM picks the next word in an arbitrarily chosen sentence fragment. And that's for the simpler models, like GPT-2!
That's pretty crazy when you think about it.
So, I don't think it's fair to suggest they're just "a new type of app". I'm not sure what "revolutionary" really means, but the technology behind generative AI is certainly going to be applied elsewhere.
It's anecdotal but I have found that the people who are "skeptical" (to use your word) about generative AI often turn out to be financially dependent on something that generative AI can do.
That is to say, they're worried it will replace them at their job, and so they very much want it to fail.
This is probably right. LLMs can be used as a replacement for people (well, almost), or they can be used as a tool for people. Where that line falls will be crucial.
I also don't think it's the same kind of """AI""" as the kind that would be used to recreate a person's likeness. That's almost certainly going to be covered under copyright. (I bring this up because the article mentions it).
And even if there somehow is no line and any script written even partially by an AI cannot be copyrighted (unlikely I think) then the resulting film is still eligible for copyright protections.
This conspiracy theory doesn't make much sense to me. If they wanted to, they could have just given him probation and sent him on his way, right? Why go through this convoluted path of charging him and convicting him, only to let him go?
Am I misunderstanding you? It feels like I'm misunderstanding you.
I'm not sure your second point is as strong as you believe it to be. Do you have a specific example in mind? I think most vehicle problems that would require an emergency responder will have easy access to a tow service to deal with the car, with or without a human involved. A human being present doesn't make the problem any easier to solve. For minor-to-moderate accidents that just require a police report, things might get messy, but that's an issue with the law, not something inherently wrong with the concept of self-driving vehicles.
Also, your first point is on shaky ground, I think. I don't know why the metric is accidents with fatalities, but since that's what you used, what do you think having fewer humans involved does to the chance of killing a human?
I'm all for numbers being crunched, and to be clear (as you were, I think), the numbers are the real deciding metrics here, not thought experiments.
And I think it's 100% true that autonomous transportation doesn't have to be perfect, just better than humans. Not that you disagree with this, but it is probably what people are thinking when they say "humans do this too".
And speaking of this weak ass defense you've got going, let's take it one step higher. Do you think developing countries should use America as the shining example of what to be? Surely there are better countries in the world to strive to emulate than America.
Isn't this the point of technology?