That was only my first point. In my second and third points I explained why education is not going to solve this problem. That's like poisoning their candy and then educating them about it.
I'll add that these AI applications only work because people trust their output. If everyone saw them for the cheap party tricks that they are, they wouldn't be used in the first place.
The fact that they lack sentience or intentions doesn't change the fact that the output is false and deceptive. When I'm being defrauded, I don't care if the perpetrator hides behind an LLM or not.
It's rather difficult to find people who are willing to lie and commit fraud for you. And even if you do, it leaves evidence.
As this article shows, AIs are the ideal mob henchmen because they will do the most heinous stuff while creating plausible deniability for their tech bro boss. So no, AI is not "just like most people".
Ok, so your point is that people who interact with these AI systems will know they can't be trusted, and that this will alleviate the negative consequences of their misinformation.
The problems with that argument are many:
The vast majority of people are not AI experts and do in fact have a lot of trust in such systems
Even people who do know often have no other choice. You don't get to talk to a human, it's this chatbot or nothing. And that's assuming the AI slop is even labelled as such.
Even knowing that the information can be misleading does not help much. If you sell me a bowl of candy and tell me that 10% of them are poisoned, I'm still going to demand non-poisoned candy. That people can no longer rely on information being accurate is unacceptable.
Congratulations, you are technically correct. But does this have any relevance to the point of the article? It clearly shows that LLMs will provide false and misleading information when that brings them closer to their goal.
Sure. On the point that many jurisdictions outside of the US also consider freedom of speech and other human rights to apply between private parties: this is called "horizontal effect" and is covered extensively in case law by e.g. the European Court of Human Rights. See also this chapter for an international comparison and this paper for a European perspective.
As for the specific rules in the EU for platforms: Article 17 of the Digital Services Act requires that users who are banned or shadowbanned from any platform be given specific information about which rule they broke, which they can then appeal internally or in court. Articles 34 and 35 require very large platforms (such as X) to take broad measures to protect, inter alia, users' freedom of speech.
More to the point, one person who was shadowbanned by X in a similar way used the DSA and won in court.
The EU recognizes that human rights such as freedom of speech should also be protected against private parties. Platforms can't ban or restrict you for arbitrary reasons here.
I'm of the opinion that having a lot of money shouldn't, in fact, let you do whatever you want. No person should have this power of mass censorship, not least because manipulating online discourse means manipulating a fundamental aspect of democracy.
Musk specifically is meddling in elections, both in the EU and the US, e.g. by bribing voters. Turning the dials of the algorithm lets him do this even more effectively.
NASA still foots the bill either way. In this arrangement, the cost of development is simply included in the price of the product, plus a fixed profit margin. Such 'cost-plus' contracts are criticized because they eliminate competition on efficiency and incentivize contractors to make their solutions as complicated and expensive as possible.
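The incentive problem can be sketched in a few lines. This is purely illustrative: the numbers are made up, and for simplicity the fee is modeled as a percentage of cost (real contract structures, including NASA's, vary and may use a fixed fee instead):

```python
# Illustrative sketch (not real contract terms or NASA figures): why a
# cost-plus contractor benefits from overruns, while a fixed-price one eats them.

def cost_plus_profit(actual_cost: float, fee_rate: float = 0.10) -> float:
    # Profit is a percentage of whatever the project ends up costing,
    # so every extra dollar of cost adds to the contractor's profit.
    return actual_cost * fee_rate

def fixed_price_profit(actual_cost: float, agreed_price: float) -> float:
    # Profit is whatever remains of the agreed price after costs,
    # so overruns come straight out of the contractor's margin.
    return agreed_price - actual_cost

lean, bloated = 1_000_000.0, 3_000_000.0

# Cost-plus: the bloated solution is three times more profitable.
print(cost_plus_profit(lean), cost_plus_profit(bloated))

# Fixed-price: the bloated solution turns the margin into a loss.
print(fixed_price_profit(lean, 1_200_000.0), fixed_price_profit(bloated, 1_200_000.0))
```

Under the cost-plus rule the contractor's profit grows with cost; under fixed-price it shrinks, which is the efficiency incentive the comment is pointing at.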
if we take it as true that light speed is the same in every direction
This is the crucial assumption, and to my knowledge it has neither been proven nor disproven. The alternative, that light travels faster in one particular direction and slower in the other, is also perfectly consistent with every experiment. And if you're moving atomic clocks around, correcting for time dilation requires you to make assumptions about the one-way speed of light (which we only know from measuring round-trip times).
That's just shifting the problem. There is no known way to reliably sync remote clocks except by sending packets and assuming the round-trip time is symmetrical. This is a known problem in physics: https://en.wikipedia.org/wiki/One-way_speed_of_light
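The point can be made concrete with Reichenbach's epsilon-synchronization convention: for any choice of epsilon between 0 and 1, assume light travels outbound at c/(2ε) and back at c/(2(1−ε)). The two-way (round-trip) average is always c, so no round-trip measurement can distinguish the conventions. A minimal sketch (the distance and epsilon values are arbitrary illustrations):

```python
# Sketch: why round-trip timing can't pin down the one-way speed of light.
# Reichenbach epsilon-synchronization: outbound speed c/(2*eps), return speed
# c/(2*(1-eps)). Every 0 < eps < 1 yields the same round-trip time 2L/c.

C = 299_792_458.0  # two-way speed of light, m/s
L = 1_000.0        # one-way distance, m (arbitrary)

def round_trip_time(eps: float) -> float:
    t_out = L / (C / (2 * eps))            # outbound leg: slower if eps small
    t_back = L / (C / (2 * (1 - eps)))     # return leg compensates exactly
    return t_out + t_back                  # always 2L/C, independent of eps

# The isotropic convention (eps = 0.5) and a wildly anisotropic one (eps = 0.01)
# produce identical round-trip times, so clocks synced by round trips can't
# tell them apart.
print(round_trip_time(0.5), round_trip_time(0.01))
```

Any sync protocol built on round trips (NTP-style packet exchanges included) inherits this ambiguity, which is why moving the problem to "just sync the clocks first" doesn't escape it.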
Don't worry, DOGE will just fire the investigators before that happens.