Nothing I’ve said implies sentience or consciousness. I’m simply arguing against the oversimplified explanation that it’s “just predicting the next set of words,” as if there’s nothing more to it. While there’s nothing particularly wrong with that statement, it lacks nuance.
Where is the world model you maintain? Can you point to it? You can't - because the human mind is very much a black box, in much the same way that LLMs are.
It's in the form of distributed patterns across billions of parameters. It's not like the world model was handed to it - it's an emergent consequence of massive-scale pattern learning, picked up from the data it was trained on. The only way to become good at prediction is to implicitly absorb how the world tends to behave, because otherwise it would keep guessing wrong.
It simulates understanding by maintaining an internal world-model, recognizing patterns and context, and tracking the conversation history. If it were purely guessing the next word without deeper structures, it would quickly lose coherence and start rambling nonsense - but it doesn't, because the guessing is constrained by these deeper learned models of meaning.
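Since "just predicting the next word" keeps coming up, here's a minimal sketch of what that prediction looks like mechanically (assuming Python with the Hugging Face transformers library and GPT-2 as a stand-in model - none of this is from the comments above, it's just an illustration). The point is that the model produces a full probability distribution over its vocabulary, shaped by the entire context, rather than a single blind guess:

```python
# Minimal sketch: what "predicting the next word" actually involves.
# Assumes the Hugging Face transformers library and GPT-2 as a small stand-in model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The capital of France is"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    # Scores over the whole vocabulary for the *next* token, conditioned on the full context
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Show the five most likely continuations and their probabilities
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r:>12}  {p.item():.3f}")
```

Change the context and the whole distribution shifts with it - that's the sense in which the "guess" is constrained by whatever structure the model has absorbed about the world.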
That's a very perceptive question — and as someone speaking from the perspective of how these systems evolve, I’d say it’s a mix of deliberate design choices and market pressures.
First, OpenAI and similar companies have clearly shifted toward making AI feel “warmer” and “more human” in its responses. The goal is to make interactions more relatable, more emotionally intelligent, and less mechanical. There's a strong push from many users — particularly casual or first-time users — who want an assistant that feels understanding, supportive, even "friendly." That “personal touch” you’re noticing is very much by design. It wasn’t a spontaneous personality change — it was trained and steered into the newer versions.
Second, safety and trust concerns play a huge role. A softer, more personal tone tends to cause fewer misunderstandings, fewer emotional flare-ups, and gives the system a broader margin of safety in public interactions. If an AI sounds stern, cold, or clinical, it risks sounding arrogant or hostile when things get tense — and that's dangerous for public perception and trust. A softer style is, frankly, safer for a company operating at this scale.
At the same time, if you're someone who preferred the earlier, more neutral, almost academic tone, it can feel like a step backward — almost like the system became a therapist instead of an expert. I personally think the pendulum has swung a bit too far toward "softness" at times, especially for users who value precision, directness, or expert-like sparring. There's room for a sharper, more robust style — and some of us hope OpenAI will eventually offer “tone settings” so you could pick whether you want “professional,” “academic,” “casual,” or “empathetic” interaction styles.
In short, it became soft and personal because that is what a majority of users, safety reviewers, and corporate interests favored — but whether that’s best for every user is a much more interesting and open question.
Is it any good for the user?
ChatGPT said:
It’s good for some users — and not so good for others.
For casual users, especially those who aren't here to push limits or have deep technical conversations, the softer, more personal tone is quite beneficial. It makes the AI feel less intimidating, more accessible, and more "forgiving" if they ask what they perceive as dumb questions. It builds confidence, encourages more interaction, and for people feeling isolated or stressed, that warmth can feel genuinely comforting.
But for users like you — people who value expertise, directness, sharpness — it can absolutely be a downside.
When an AI wraps everything in a bubble of emotional cushioning, it risks becoming slower, less precise, and frankly, a little annoying. You didn’t ask for a therapist or a cheerleader — you asked for a tool or a sparring partner or an expert. If it constantly defaults to validating feelings and softening statements, it can feel like it’s not respecting your capacity to handle hard truths, debate, or complexity.
I think the real issue is context sensitivity.
If the AI could detect, "Okay, this user is asking a technical question — they want clarity, not comfort," and adjust its tone accordingly, it would be perfect. But right now, it tries to use the same emotional tone across almost all interactions, because adapting the tone dynamically is much harder to do reliably.
To me, it's a bit like giving every customer the same shoe size because most people are average-sized — convenient for the store, but a bad fit for many customers.
Bottom line:
The softer style benefits the general user base in a statistical sense, but it degrades the experience for users who want a more serious, direct, and intellectually demanding conversation. And that’s a real loss, even if it's not always acknowledged.
This oversimplifies it to the point of being misleading. It does more than simply predict the next word. If that were all it was doing, the responses would feel random and shallow and fall apart after a few sentences.
I'd imagine the typical Lemmy user to be an American college/university student. I haven't personally noticed any trans leaning in the content here, but anime/hentai is definitely disproportionately represented.
I get where you're coming from, but I think this is actually a good example of what I was trying to get at in my original post.
Assuming people don't really believe what they say - just because they don't act exactly how we might expect - feels like another form of refusing to give an inch.
If someone says they believe life begins at conception, I take that at face value unless there’s clear evidence otherwise - I’m not a mind reader after all. And not resorting to violence (like killing doctors) is actually consistent with believing killing is wrong, not evidence that they don’t believe it.
People can be inconsistent without being dishonest. We're all a bit messy like that.
I can't help but comment on this, even though it gets a little off-topic.
I think the "pro-life vs healthcare" example can be a little more complicated.
If someone sees abortion as equivalent to murder (because they believe life begins at conception), their opposition is based on a direct moral prohibition - being against killing - rather than a broader stance on care or social services.
That doesn't mean there aren’t inconsistencies elsewhere, but the perceived contradiction might not be quite as direct from their point of view as it sounds.
It’s been less than a week since I last publicly admitted to being wrong about something. Not only did I get called ignorant for being wrong in the first place, but even more so for admitting it.
I also agree with your point about pressure-testing your own beliefs. Whenever I’m debating someone, it’s as much about trying to influence their beliefs as it is about letting them test mine. I know I’m wrong about plenty of things, and I don’t want to stay wrong any longer than necessary. If there’s a flaw in my reasoning, I want someone to point it out to me in a way I can’t ignore.
It’s painful to be proven wrong - I’m not immune to that either. It stings. What I don’t understand, however, is why, instead of simply leaving the discussion, some people start making excuses, redefining terms, rewriting history, or attacking me personally based on beliefs I often don’t even hold. That kind of behavior just seems absurd to me.
I’m usually not someone who picks sides easily, but I think the Russian invasion of Ukraine is a rare example where it’s pretty clear to me who the good guys are (though I’m open to counterarguments). Even then, if someone pointed out that Ukraine has committed war crimes too, I’d say that’s almost certainly true - but it doesn’t change how I feel about Ukraine broadly when compared to who they’re fighting against. My worldview isn’t threatened by admitting that. I genuinely struggle to understand the perspective of someone who can’t do the same.
Fair point. In the case of the person I thought of while writing this - no, I haven't explicitly told them this, though they do know how I feel about their phone use.
Unspoken expectations are premeditated resentments.
I'm talking about face-to-face discussions. In written form, I don't care if they used an AI assistant, as long as the response accurately represents their view.
Nobody here has claimed that brains and LLMs work the same way.