People who treat LLMs' trouble with these questions as evidence one way or another about how good or bad LLMs are just don't understand tokenization. This is not a symptom of some big-picture, deep problem with LLMs; it's a curious artifact, like compression artifacts in a JPEG image, and doesn't really matter for the vast majority of applications.
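If you want to see what I mean, here's a minimal sketch (assuming OpenAI's tiktoken library and its cl100k_base encoding, purely as an illustration -- other tokenizers behave similarly) of what a model actually receives instead of letters:

```python
# Minimal illustration: models see integer token IDs, not characters.
# Assumes tiktoken is installed (pip install tiktoken); cl100k_base is
# just one example encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("strawberry")
print(token_ids)  # a short list of integers, not ten separate characters

# Each ID corresponds to a multi-character chunk of bytes.
for tid in token_ids:
    print(tid, enc.decode_single_token_bytes(tid))
```

So asking a model to count letters is asking it about sub-token structure it never directly observes.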
You may hate AI but that doesn't excuse being ignorant about how it works.
Is it plausible this might cause the helicopter to crash somehow? I wouldn't want to risk that.
Edit: can't believe people be downvoting this. Y'all have no idea how catastrophic a helicopter crash could be. But if you're downvoting because you think the answer is obvious and I'm stupid, I respect that.
If you believe LLMs are not good at anything, then there should be relatively little to worry about in the long term, but I am more concerned.
It's not obvious to me that it will backfire for them, because I believe LLMs are good at some things (that is, when they are used correctly, for the correct tasks). Currently they're being applied to far more use cases than they are likely to be good at -- either because they're overhyped or because our corporate lords and masters are just experimenting to find out what they're good at and what they're not. Some of these cases will be like chess, but others will be like code.
(Not saying LLMs are good at code in general, but for some coding applications I believe they are vastly more efficient than humans, even if a human expert can currently write higher-quality, less-buggy code.)
Eh, I dislike this, because outlets make claims about the future all the time based on what they think is most probable. I don't think that's lying in general. I'd be more swayed that this is lying if they had actually thought it plausible that Trump would do this and denied it anyway.
You could have let this be a top-level comment, but you chose to make it a direct reply to mine about how I'm interested in hearing about people who didn't always know. Why did you do that?
Kay but that's honestly a pretty awesome getup. It's only cringe in retrospect knowing what we know now. If I saw someone like this on the street I'd think, "slay, dude."
Who's Rachel Greene? But we all know Harvard and have an idea of its respectability. The researcher's name, if they're not well-known, should go in the body instead.
Using an LLM as a chess engine is like using a power tool as a table leg. Pretty funny honestly, but it's obviously not going to be good at it, at least not without scaffolding.
"Lied" means intentionally deceived. I don't know anything about this outlet, but what reason do we have to believe they lied? Perhaps they were themselves deceived.
Edit: people be downvoting me just because I'm ignorant and asking in good faith. We have to fix this culture on Lemmy where all questions are assumed to be "just-asking-questions"-style trolling.
The fact that you don't take other people's opinions into account when forming your desires may suggest you lack Theory of Mind. You should get that looked into. Other people's dispositions toward something are one of the best heuristics available for quickly learning about that thing.
If you mean "should" from a moral point of view, I don't think it should. What's moral doesn't change based on your physical location. Or at least, not in my moral framework.
I was expecting you to draw a much deeper connection than just "it's the same tactics." Like, yeah, obviously if you learn something in one place you can apply it to another, similar situation.
Oh -- sorry, I understand the protests are about immigration, which is not exactly a domestic issue. I'm talking about the deployment of the National Guard.