My AI Skeptic Friends Are All Nuts

My AI-enjoying friends don't exist.
The stolen training data issue alone is enough to make the use of AI in business settings unethical. And until there's an LLM trained on 100% authorized data, selling a product developed with AI is outright theft.
Of course there's also the energy use issue. Yeah, congrats, you used as much energy as a plane ride to generate something you could have written with your own brain for a fraction of the energy.
From a technical or legal perspective, copyright infringement is not theft. The relationship a copyright holder has with a work is of a completely different character than actual ownership. See Dowling v. United States (1985).
Whether or not "AI" training constitutes copyright infringement is, as far as I know, still up in the air. And while I believe most of us can agree that actual theft is unethical, the ethics of copyright infringement are also very debatable.
Disclaimer - not an uncritical supporter of "AI."
The energy use argument hasn't been true for a while now. https://wccftech.com/m3-ultra-chip-handles-deepseek-r1-model-with-671-billion-parameters/
Meanwhile, corps clearly don't care about IP here and will keep developing this tech regardless of how ethical it is. Seems to me that it's better if there are open models available and developed by the community than there being only closed models developed by corps who decide how they work and who can use them.
[Linked article] M3 Ultra Runs DeepSeek R1 With 671 Billion Parameters Using 448GB Of Unified Memory, Delivering High Bandwidth Performance At Under 200W Power Consumption, With No Need For A Multi-GPU Setup
Running the AI is not where the power demand comes from; training the AI is. If you trained a model only once it wouldn't be so bad, but obviously every AI vendor will keep training continuously to ensure their model stays competitive. That's when you get into a tragedy-of-the-commons situation where the collective power consumption goes out of control for tiny improvements in the model.
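For a rough sense of the scale difference, here's a back-of-envelope sketch. Every constant in it is an illustrative assumption (model size, token counts, hardware efficiency), not a measurement; the two FLOP heuristics (~6 FLOPs per parameter per training token, ~2 per parameter per generated token) are just the commonly cited rules of thumb.

    # Back-of-envelope: training energy vs. one inference response.
    # Every constant below is an illustrative assumption, not a measurement.
    PARAMS = 70e9        # assumed model size: 70B parameters
    TRAIN_TOKENS = 2e12  # assumed training corpus: 2T tokens
    GEN_TOKENS = 1e3     # assumed chat response: ~1,000 generated tokens

    # Common rules of thumb: ~6 FLOPs/param/token to train,
    # ~2 FLOPs/param/token for an inference forward pass.
    train_flops = 6 * PARAMS * TRAIN_TOKENS
    infer_flops = 2 * PARAMS * GEN_TOKENS

    # Assumed effective efficiency: 1e12 useful FLOPs per joule
    # (order of magnitude for modern accelerators at real utilization).
    FLOPS_PER_JOULE = 1e12
    JOULES_PER_KWH = 3.6e6

    train_kwh = train_flops / FLOPS_PER_JOULE / JOULES_PER_KWH
    infer_kwh = infer_flops / FLOPS_PER_JOULE / JOULES_PER_KWH

    print(f"training run: ~{train_kwh:,.0f} kWh")       # ~230,000 kWh
    print(f"one response: ~{infer_kwh * 1000:.3f} Wh")  # ~0.039 Wh
    print(f"ratio: ~{train_kwh / infer_kwh:,.0f}x")

Under these assumptions a single training run dwarfs any one response by many orders of magnitude, which is the point: the per-query cost of a local model says little about the training treadmill behind it.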
Meanwhile, corps clearly don’t care about IP here and will keep developing this tech regardless of how ethical it is.
"It will happen anyway" is not an excuse to not try to stop it. That's like saying drug dealers will sell drugs regardless of how ethical it is so there's no point in trying to criminalize drug distribution.
Seems to me that it’s better if there are open models available and developed by the community than there being only closed models developed by corps who decide how they work and who can use them.
Except there are no truly open AI models, because they all use stolen training data. Even the "open source" models like Mistral and DeepSeek say nothing about where they get their data from. The only way for there to be an open source AI model is if there were a reputable pool of training data where all the original authors consented to their work being used to train AI.
Even if the model itself is open source and free to run, if there are no restrictions against using the generated output commercially, it's still complicit in the theft of human-made works.
A lot of people will probably disagree with me, but I don't think there's anything inherently wrong with using AI-generated content as long as it's not for commercial purposes. But if it is, you're by definition making money off content that you didn't create, which to me is what makes it unethical. You could have hired that hypothetical person whose work was used in the AI, but instead you used their work to generate value for yourself while giving them nothing in return.
Yeah I mean all that is basically true -- for code. The tools work, if you know how to work them.
Of course, is this going to put programmers out of work? Yes. Is it profitable for the same companies that are causing unemployment in other fields? Yes. So, like, it's not as though there isn't blood on your hands.
TBH "AI" is going to create more jobs for devs. I tried using "AI" once for coding. It took me more time to debug the code than to google an alternative solution. It might be good for boilerplates, summarizing stack exchange, etc. But in reality you can't code anything original or worthwhile with statistics.
If you use LLMs, you should use them primarily in ways where it's easier to confirm the output is valid than it is to create the output yourself. For instance: "what API call in this poorly-documented system performs <some task>?"
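One way to make that concrete is to treat the model's suggestion as an untrusted guess and keep the cheap verification step for yourself. Here's a minimal sketch of that workflow; llm_suggest() is a hypothetical stand-in for whatever model API you actually call, and only the check is load-bearing.

    # Sketch of the "easier to verify than to produce" workflow.
    # llm_suggest() is a hypothetical placeholder for any LLM call;
    # the property check below is the part a human actually trusts.

    def llm_suggest(prompt: str) -> str:
        """Placeholder: in practice this would hit a model endpoint."""
        return "sorted(xs, key=len)"  # pretend the model proposed this

    def passes_check(expr: str) -> bool:
        """Cheap test: does the suggested expression really sort
        strings by length? Checking this takes seconds even when
        finding the right incantation in bad docs takes an hour."""
        xs = ["ccc", "a", "bb"]
        result = eval(expr, {"sorted": sorted, "len": len, "xs": xs})
        return result == ["a", "bb", "ccc"]

    suggestion = llm_suggest("One-liner to sort a list of strings by length?")
    print(f"accepted: {suggestion}" if passes_check(suggestion)
          else "rejected: suggestion failed the check")

(eval is fine for a throwaway sketch; in real code you'd run the suggestion inside an actual test, not eval it.)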
?"There is no consistent definition of AI so you might as well drop the quotation marks, lest you be prescriptivist.