I haven’t lost anything, I’m just not agreeing with you.
I think that a person suffering from mental illness can find justification for their delusions with or without AI. While AI does provide immediate access to information they may interpret unhealthily, it's not unlike participating in a social media echo chamber, which I would argue does more damage.
I will give you one thing though… I think publicly available AI models (ChatGPT and the like) need to cut off certain topics at some point and just refuse to go any further without forcefully inserting warnings about getting professional help. But we could say the same thing about social media, haha.
I think a heat map might disagree with that on a red vs blue state level, but if that map was specifically by income, I agree it would be higher in poor areas.
How to word this… poor people are killed more than rich people, but poor people are killed more often in red states than in blue. Something like that.
I mean, your life is basically dedicated to an obsession with the USA. It's really unhealthy. Everything you do is focused on this schadenfreude about America, meanwhile ignoring the fascism growing in Europe, because you aren't really about being anti-fascist, you're about dopamine.
So today I had it do a bunch of fractional math on dimensional lumber at the hardware store. While it was doing this math for me, it asked if this was for the guitar project I was working on in another chat, where I was mostly asking about magnetic polarity and various electronics, and yes it was. So then it made a different suggestion, which had a big impact on what I bought. I know that's vague, but it was a long conversation.
Then, when I got home, my neighbor had left a half-dead plant on my stoop, because I'm the neighborhood green thumb apparently. I had never seen this plant before. Took a photo, sent it to AI, and it told me what it was (yes, with sources).
Then, while I was 3D modeling some shelf brackets, it helped my design by pointing out a possible weight distribution issue. While correcting that, I was able to reduce material usage by about 30%.
I don’t see any of that as “delusional.”
But to the topic at hand: I think the conversations groups and pairs of humans have, both online and in real life, will always be more damaging than what a single person can trick a computer into saying.
And by tricking it… you are abusing a tool designed for a different purpose. So, kitchen knives. They're not meant to be murder weapons, but they certainly can be used for that purpose. Should we blame the knife?
All the rich have to do is keep raising rent, keep home prices high, and continue lobbying to take our rights away… so far, they have been wildly successful with zero evidence of it slowing.
The price I’m paying now for a 2 bedroom apartment in the suburbs in a “pretty good” complex is $2200/m. It’s not fancy, just average.
10 years ago, in the same area, I rented a three story, three bedroom, two car garage townhome for $1500.
I am making about the same amount of money now as I was then. I’ll take it, because more isn’t possible to find right now.
Oh, well, that explains everything: you are using it wrong.
A lot of people think you’re supposed to argue with it and talk about things that are subjective or opinion-based. But that’s ridiculous, because it’s a computer program.
ChatGPT and others like it are calculators. They help you brainstorm problems. Ultimately, you are responsible for the outcome.
There’s a phrase I like to use at work when junior developers complain that the outcome isn’t what they wanted: shit in, shit out.
So next time you use AI, perhaps consider: are you feeding it shit? Because if you’re getting shit out, you are.
Do you mean without setting foot in Florida? Probably nothing. And even then, it would take a Rambo-style frontal assault.
Or, join the government and have a meeting about a meeting to decide if you’re going to have a meeting to decide if this is a situation that requires an action meeting.
I talked to a user yesterday who thought it would be better if no photos or videos got released because then people would be more upset at the unknown.