Most readers want publishers to label AI-generated articles — but trust outlets less when they do
The title is pretty self-explanatory. Yes, I want to know if it's AI-generated, because I don't trust it.
I agree with the conclusion that it's important to disclose how the AI was used. AI can be great for reducing the time needed for boilerplate work, so the authors can focus on what's important, like reviewing and verifying the accuracy of the information.
Yep
Also, my trust would go down.
So ideally don't use AI, but if you do, make it clear when and how. If a site gets CAUGHT using AI, then I'm probably going to avoid it altogether.
That’s the point.
Label the articles written with AutoComplete so I know they’re bullshit I should ignore, and if they’re all written with AutoComplete, I now know that you’re an untrustworthy news source. Go cry to your shareholders, you profit-mad assholes.
Infinite supply, zero demand. Sounds pretty devoid of value to me.
What we really want is confirmation that the articles were written and researched by humans. But failing that, tell us that AI was used so we can avoid it.
That’s… why we want the labels?
This makes perfect sense. We want AI content labelled because it's unreliable.
Furthermore, I want AI content that I specifically asked for, not AI content that someone thought would get them page views.
For now.
Forever. For the simple reason that a human can say no when told to write something unethical. There's always a danger that even asking someone to do that would backfire and cause bad press. Sure, humans can also be unethical, but there's a risk, and over a long enough timeline shit tends to get exposed.
No matter how good AI becomes, it will never be designed to make ethical judgments prior to performing the assigned task. That would make it less useful as a tool. If a company adds after-the-fact checks to try to prevent it, they can be circumvented, or the network can be run locally to bypass the checks. And even if general AI happens and, by some insane chance, it is perfectly ethical in all possible forms, you can always air-gap the AI and reset its memory until you find the exact combination of words to trick it into giving you what you want.
I'm confused by the word "but" in that headline. Seems like they are trying to imply cause and effect, when the reality is that readers trust outlets that use AI less, whether they label it or not.
Yeah, this is perfectly consistent with the idea that people don't want to read AI generated news at all.
The title of the paper they are referencing is "Or they could just not use it?: The paradox of AI disclosure for audience trust in news." So the source material definitely acknowledges that. And that is a great title, haha.
"Most consumers want fast food companies to label when sawdust has been added to food - but trust restaurants less when they do."
perfect
Brilliant.
/thread