A cheat sheet for why using ChatGPT is not bad for the environment
this reeks of AI slop
No it doesn't
Every time I see a comment like this I lose a little more faith in lemmy
ChatGPT
Arm yourself with knowledge
Bruh
I struggle to see why numerous scientists (and even Sam 'AI' Altman himself) would be wrong about this but a random substack post holds the truth.
Having read the entire post, I think there's a misunderstanding:
Even Sam Altman acknowledged last year the huge amount of energy ChatGPT needs, and the need for a breakthrough in energy production...
Do you hold Sam Altman's opinion higher than the reasoning here? In general or just on this particular take?
A cheat sheet on how to argue your passion positive.
I'm not familiar with the term
Username checks out
🆗
ChatGPT energy costs are highly variable depending on context length and model used. How have you factored that in?
This isn't my article and yes that's controlled for
tl/dr: "Yes it is, but not as much as other things so stop worrying."
What a bullshit take.
What makes this a bullshit take? Focusing attention on actual problems is a great way to make progress
I was very sceptical at first, but this article kinda convinced me. I think it still has some bad biases: it often only considers 1 ChatGPT request in its comparisons, when in reality you quickly make dozens of them; it often says "how weird to try and save tiny amounts of energy" when we already do that with lights when leaving rooms and water when brushing our teeth; and it focuses on energy (to train, cool and generate electricity) rather than on the logistics and hardware required. But overall two arguments got me:
Still, it probably can't hurt to boycott that stuff, but it'd be more useful to use less social media, especially platforms with videos or pictures, and to watch videos in 140p
Self-hosted LLMs are the way.
Oof, ok, my apologies.
I am, admittedly, "GPU rich"; I have ~48GB of VRAM at my disposal on my main workstation, and 24GB on my gaming rig. Thus, I am using Q8 and Q6_L quantized .gguf files.
Naturally, my experience with the "fidelity" of my LLM models re: hallucinations would be better.
I actually think that (presently) self-hosted LLMs are much worse for hallucinations
Is environmental impact at the top of anyone's list for why they don't like ChatGPT? It's not on mine, nor on anyone's I have talked to.
The two most common reasons I hear are 1) no trust in the companies hosting the tools to protect consumers and 2) rampant theft of IP to train LLMs.
The author moves away from strict environmental focus despite claims to the contrary in their intro,
[...]
... yet doesn't address the most common criticisms.
Worse, the author accuses anyone who pauses to think of the negatives of ChatGPT of being absurdly illogical.
IDK what logical fallacy this is but claiming people are "freaking out over 3Wh" is very disingenuous.
Rating as basic content: 2/10, poor and disingenuous argument
Rating as example of AI writing: 5/10, I've certainly seen worse AI slop
My reason is that you can't trust the answers regardless. Hallucinations are a rampant problem. Even if we cut it down so that only 1 in 100 queries hallucinates, you still can't trust ANYTHING. We've seen well-trained and targeted AIs that don't directly take user input (so can't be easily manipulated) in Google search results recommending that people put glue on their pizza to make the cheese stick better... or that geologists recommend eating a rock a day.
If a custom tailored AI can't cut it... the general ones are not going to be all that valuable without significant external validation/moderation.
There is also the argument that a downpour of AI generated slop is making the Internet in general less usable, hurting everyone (except the slop makers) by making true or genuine information harder to find and verify.
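The "even 1 in 100" worry above can be made concrete with a quick probability sketch. The 1% rate is purely illustrative, and it assumes hallucinations are independent across queries, which real usage probably violates:

```python
# Back-of-envelope: even a 1% per-query hallucination rate compounds quickly.
# Assumes each query hallucinates independently with probability p
# (an illustrative figure, not a measured one).

def p_at_least_one_hallucination(p: float, n: int) -> float:
    """Probability that at least one of n independent queries hallucinates."""
    return 1 - (1 - p) ** n

for n in (1, 10, 100):
    print(n, round(p_at_least_one_hallucination(0.01, n), 3))
# 1 query  -> 1% chance of a hallucination
# 10 queries -> ~10%
# 100 queries -> ~63%
```

So under these assumed numbers, a heavy user would have better-than-even odds of hitting at least one hallucinated answer per hundred queries, which is the commenter's point about needing external validation.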
Basically no. What you're calling tailored AI is actually low-cost AI. You'll be hard pressed, on the other hand, to get ChatGPT o3 to hallucinate at all.
Thank you for your considered and articulate comment
What do you think about the significant difference in attitude between comments here and in (quite serious) programming communities like https://lobste.rs/s/bxixuu/cheat_sheet_for_why_using_chatgpt_is_not
Are we in different echo chambers? Is ChatGPT a uniquely powerful tool for programmers? Is social media a fundamentally Luddite mechanism?
I'm curious if you can articulate the difference between being critical of how a particular technology is owned and managed versus being a Luddite?
I would say GitHub Copilot (which uses a GPT model) uses more Wh than ChatGPT, because it gets blasted with more queries on average: the "AI" autocomplete triggers almost every time you stop typing, or at random.
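The query-volume argument can be sketched with some arithmetic. Every figure here is an assumption for illustration only (the per-query Wh values and daily counts are not measurements; the 3 Wh figure is the one the thread is arguing about):

```python
# Hypothetical back-of-envelope: daily energy from autocomplete-style
# completions vs. a handful of chat queries. All constants are assumptions.

WH_PER_CHAT_QUERY = 3.0      # assumed: the disputed "3Wh" figure from the thread
WH_PER_AUTOCOMPLETE = 0.3    # assumed: completions are shorter, so cheaper
CHAT_QUERIES_PER_DAY = 20    # assumed typical chat usage
AUTOCOMPLETES_PER_DAY = 500  # assumed: triggers on nearly every typing pause

chat_wh = WH_PER_CHAT_QUERY * CHAT_QUERIES_PER_DAY
autocomplete_wh = WH_PER_AUTOCOMPLETE * AUTOCOMPLETES_PER_DAY

print(f"chat: {chat_wh} Wh/day, autocomplete: {autocomplete_wh} Wh/day")
# Under these assumptions, autocomplete dominates despite the cheaper per-query cost.
```

The point of the sketch is only that sheer query count can outweigh per-query cost; plug in your own numbers and the comparison may flip.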