I'm glad someone is fighting the good fight. It's becoming more and more obvious that the prevalence of these tools in academic circles may cause more harm than good.
It's a great dream, leaving. I'm not rich enough to make that happen, though. Maybe once things get worse, I can become a refugee, but it's more likely that I'll cling to life here until it's impossible to continue.
There's a conversation that could be had about how there are no truly public platforms on the web. Ultimately, everywhere you can speak is owned by someone, and any community you build exists at their mercy. This can exert a lot of pressure on a community's standards and beliefs, and when I started using the internet, abusing this was a major faux pas.
However, that conversation requires a lot of nuance and patience. You are kind of transparently posting this in response to a moderator in another community removing your posts. If you'd like to complain about that, there's actually a community specifically for that.
By the by, free speech complaints have become strongly associated with certain political movements as dog whistles. You might want to look into that and make sure you want to present that image.
I understand. I grew up a fundamentalist Pentecostal. It's taken a lot of time and growth to move past that, and I've been an ass and had to make up for it.
My problem is mostly that celebrities have a lot of influence and power that they don't treat with the proper level of respect. If you have an audience of millions, you should consider the example you set. It's part of the price of choosing to be a celebrity as your job.
This guy is responsible for contributing to a lot of cultural miasma, and making up for that takes more effort than apologizing. It requires actual growth and an effort to make amends. You have to not just change, but try to fix the things you broke and help the people you hurt.
A lot of celebrities will apologize performatively but then do nothing, and that's really annoying.
Do you have any evidence that he's made an effort to make up for his behavior that goes beyond words? This is an honest question; by sheer happenstance I never followed him, so I don't know.
Fancy savings account for retirement that's stored in stocks, so it can blow up at any point. Basic prerequisite to ever retire in the US. Many people don't have one.
Ideally or practically? Those are very different conversations.
Practically, there's not a lot that can be done. In the US, there's not a good way for someone like that to continue living.
I also will note that the phrasing of your last two sentences is kind of unpleasant. I'm not sure if that's your intent, but it creates the implication that your value as a worker is the major contributor to your value to society. I don't think that's the case; I think it's possible for someone to not work and still contribute a lot to the happiness and well-being of a local community. Also, part of what makes humans special is that even if someone doesn't contribute to the overall needs of society, we will still take care of them out of love. That we love other people is a sufficient foundation for their existence.
We have so little time on this beautiful earth and the writer of this article chose to focus on this instead of how beautiful butterflies are, or something lovely and true.
I'm not sure why so many people begin this argument on solid ground and then hurl themselves off into a void of semantics and assertions without any way of verification.
Saying, "Oh, it's not intelligent because it doesn't have senses," shifts your argument to proving that sensing is a prerequisite for intelligence.
The problem is that an LLM isn't made to do cognition. It's not made for analysis. It's made to generate coherent human speech. It's an incredible tool for doing that! Simply astounding, and an excellent example of how well a trained model can adapt to a task.
It's ridiculous that we managed to get a probabilistic software tool which generates natural language responses so well that we find it difficult to distinguish them from real human ones.
...but it's also an illusion with regards to consciousness and comprehension. An LLM can't understand things for the same reason your toaster can't heat up your can of soup. It's not for that, but it presents an excellent illusion of doing so. Companies that are making these tools benefit from the fact that we anthropomorphize things, allowing them to straight up lie about what their programs can do because it takes real work to prove they can't.
Average customers will engage with an LLM as if it were doing a Google search, reading the various articles, and then summarizing them, even though it's actually just completing the prompt you provided. The proper way to respond to a question is with an answer, so it always will, unless a hard-coded limit overrides that. There will never be a way to make an LLM that won't create fictitious answers to questions, because it can't tell the difference between truth and fantasy. It's all just part of its training data on how to respond to people.
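To make "completing the prompt" concrete, here's a minimal sketch using a toy bigram table. The table, words, and `complete` function are all invented for illustration; a real LLM is vastly larger and more sophisticated, but it runs on the same pick-a-likely-next-token principle, with no separate check for factual truth:

```python
import random

# Toy "next word" table, invented for illustration. Each token maps to
# words that plausibly follow it. Note there is no truth value anywhere:
# the model only knows what an answer tends to LOOK like.
TRANSITIONS = {
    "<question>": ["The"],                 # questions get answer-shaped replies
    "The":        ["author", "book"],
    "author":     ["was"],
    "book":       ["was"],
    "was":        ["Smith.", "Jones."],    # fluent, confident, possibly fictitious
}

def complete(prompt_token, max_words=5, seed=0):
    """Generate a continuation by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out, tok = [], prompt_token
    for _ in range(max_words):
        choices = TRANSITIONS.get(tok)
        if not choices:                    # no continuation known: stop
            break
        tok = rng.choice(choices)          # pick a plausible next word
        out.append(tok)
    return " ".join(out)

print(complete("<question>"))
```

Whatever you feed it, this sampler emits a grammatical, answer-shaped sentence, because that's the only thing the table encodes. Whether "Smith" actually wrote anything never enters the computation, which is the same reason a scaled-up version happily invents citations.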
I've gotten LLMs to invent books, authors, and citations when asking them to discuss historical topics with me. That's not a sign of awareness; it's proof that the model is doing what it's intended to do, which is the problem, because it's being marketed as something that could replace search engines and online research.
I'm starting to set up the groundwork to do freelance work. Not sure how well I'll do, but it's a strong step towards a happier me.