New report: 60% of OpenAI model's responses contain plagiarism
A new report from plagiarism detector Copyleaks found that 60% of OpenAI's GPT-3.5 outputs contained some form of plagiarism.
Why it matters: Content creators from authors and songwriters to The New York Times are arguing in court that generative AI trained on copyrighted material ends up spitting out exact copies.
And that’s why this claim is mostly bullshit. These use cases are all sciences, where the correct solution is usually the same or highly similar no matter who writes it. Small snippets of computer code cannot be copyrighted anyway.
Not surprisingly, softer subjects like “English” and “Theatre” rank extremely low on this scale.
Not to mention that a response "containing" plagiarism is a pretty poorly defined criterion. The system being used here is proprietary so we don't even know how it works.
I went and looked at how low theater and such were and it's dramatic:
Pun intended?
Yeah, anyone who has written a thesis knows those tools are bullshit. My handwritten 140 page master's thesis had a similarity index of 11%.
So, if the AI gives you a correct answer to a science question, it's "infringing copyright," and if it spits out a bullshit answer, it's giving you wrong, unsupported claims.
Right? No doubt the output can be similar to training data, and I would believe that some of it is plagiarism, but plagiarism detectors are infamous among uni students for being completely unreliable and flagging pronouns, dates, and citations. Until someone can go "here's an example of actual plagiarism" (which is obvious when pointed out), these claims make no sense.
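To make the complaint concrete: here's a minimal sketch of a naive word-n-gram "similarity index", the kind of surface matching that inflates scores on shared citations and stock phrases. This is purely illustrative; Copyleaks' actual method is proprietary and unknown, and the function names here are made up.

```python
# Hypothetical sketch of naive n-gram overlap scoring -- NOT Copyleaks' method.

def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_index(candidate: str, source: str, n: int = 3) -> float:
    """Fraction of the candidate's n-grams that also appear in the source."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

# Two independently written sentences that share only a citation and the
# stock phrase "previous work" still register a substantial "similarity".
a = "The results are consistent with previous work (Smith et al., 2019)."
b = "Our findings contradict previous work (Smith et al., 2019) in one respect."
print(f"{similarity_index(a, b):.2f}")  # prints 0.33
```

A third of the first sentence's trigrams "match" even though no actual copying occurred, which is exactly why a raw similarity percentage says little about plagiarism without inspecting the matched spans.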
If it's plagiarizing, so are Google search results summaries.
It's not like it doesn't cite where it found the data.
Eh, kinda. It’s not like a science paper is just going to be an equation and nothing else. An author’s synthesis of the results is always going to have unique language. And that is even more true for a social science paper.
Are those "best matches" paper-sized, or snippet-sized?
But also, there is far less training data to mix and match responses from, so naively I would expect a higher plagiarism rate, by its very nature.
source
"Only" 1 in a hundred Americans are PhDs? Thats far higher than I would have expected.
Ironically, in the article, the link to the original Census source of the 1.2% datum is now dead.
Also, it’s 2.1% now (for people over 25), according to the Wikipedia article’s source: https://www.census.gov/data/tables/2018/demo/education-attainment/cps-detailed-tables.html
Edit: the Wikipedia citation is from 2018 data. The 2023 tables are here: https://www.census.gov/data/tables/2022/demo/educational-attainment/cps-detailed-tables.html
Citation party!
I think the issue is more about HOW they wrote it, rather than WHO wrote it.
You can write a paper covering scientific topics without plagiarism; a human author would be required to. Generative AI should be held to at least as high a standard.
Turns out ChatGPT isn’t writing a scientific paper though, it’s conversing with the user.