I think the author was quite honest about the weak points in his thesis, drawing comparisons with cars and even with writing. Cars come at great cost to the environment, to social contact, and to the health of those who rely on them. And maybe writing came at great cost to our mental capabilities, though we've largely stopped counting that cost by now. But both of these things have enabled human beings to do more, individually and collectively. What we lost was outweighed by what we gained.

If AI enables us to achieve more, is it fair to say it's making us stupid? Or are we just shifting our mental capabilities, neglecting some faculties while building others, to make the best use of the new tool? It's early days for AI, but historically, cognitive offloading has enhanced human potential enormously.
"Shoot for the moon, and if you miss you'll end up drifting aimlessly until you die" doesn't sound as good, but it probably works just as well as an analogy.
Why so pessimistic? With any luck, brainchips will mean the end of annoying adverts once and for all. You'll just feel an unexpected desire to acquire certain products, and maybe crippling headaches or a nauseating feeling of unease if you ignore these urges.
My 5-year-old likes to say this when I've finally imposed some consequences for doing something she shouldn't be doing, after she's ignored my repeated requests to stop. She usually says it while still doing the thing that caused the issue in the first place.
Why would you not want to visit a fascist hell-hole where human rights don't exist, the rule of law has broken down, and you can be abducted, imprisoned without charge, or deported at any moment just for the crime of being foreign?
One development we may see imminently is the infiltration of any areas of the internet not yet dominated by AI slop. Once AI systems can reliably mimic real users, the next step would be to flood anything like Lemmy with fake users whose purpose is mainly to overwhelm the system while avoiding detection. At the same time, they could deploy more obvious AI bots. Any crowdsourced attempt at identifying AI may find that many of its contributors are themselves infiltration bots, which gain trust by identifying and removing the obvious bots. In this way, any attempt at creating a space not dominated by AI and controlled disinformation can be undermined.
Or kill just half of them and then, as a compromise, only kill half of the ones who are left (repeat until the number of remaining Palestinians is less than 1).
Oh great, thanks for that suggestion. Six months from now, when airlines bring out the "no seat, just a rope" economy option, we'll know who to blame.