The problem with the current situation in the US is that these protests were already baked into the Project 2025 plan from the start.
They're not going to change their minds on anything as a result of the protests because they already knew there'd be mass protests before Trump signed a single order.
I did a 'download all your data' on Facebook a while back and there wasn't anything about my tracked browser history. Does this mean they've also violated the "users should be able to see the data you have on them" article of the GDPR (the right of access, Article 15)?
I'm guessing they're trying to hide behind some weasel shit about the IDs being anonymized or something, as though it weren't trivially easy for them to deanonymize them...
The fact that there are no buckets means you can't usefully draw any further conclusions about the ratio of buckets to things. In your first two examples we can take the results and use them to work out further things, like how much the buckets might weigh, or what happens if we add more buckets or more things, etc.
In the divide-by-zero answer, we know nothing about the buckets, and the number of things becomes meaningless. But worst of all, it's easy to hide this from the unwary, which is why you occasionally see "proofs" online that 1=2, which rely on hiding a divide-by-zero operation behind some sneaky algebra.
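The usual version goes something like this (start with any nonzero a and b):

    a = b
    a^2 = ab
    a^2 - b^2 = ab - b^2
    (a + b)(a - b) = b(a - b)
    a + b = b            <- both sides divided by (a - b), which is zero
    2b = b
    2 = 1

Every step is legitimate algebra except the marked one: since a = b, dividing by (a - b) is dividing by zero, and that's where the nonsense sneaks in.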
When we say we "can't" divide by zero, we mean OK, you can divide by zero, but you'll get a useless answer that leaves you at a mathematical dead end. Infinity isn't reversible, or even strictly equal to itself.
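You can see the dead end concretely in floating point. IEEE 754 defines x/0.0 as infinity for nonzero x (plain Python actually refuses and raises ZeroDivisionError, but math.inf behaves the same way once you have it), and none of the usual operations will undo it. A quick sketch:

    import math

    inf = math.inf       # what IEEE 754 gives you for, say, 5.0 / 0.0
    print(inf * 0)       # nan: multiplying back by zero doesn't recover the 5
    print(inf - inf)     # nan
    print(inf / inf)     # nan

Once you hit infinity, the information about what you divided is gone for good.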
I'm amazed that, after 50 years, over 100,000 top-tier software engineers, and $3,500,000,000,000, Microsoft are still so bad at making operating systems.
It's almost as if capitalist rhetoric about innovation is bullshit.
In the case of Air Canada, the thing the chatbot promised was actually pretty reasonable on its own terms, which is both why the customer believed it and why the judge said they had to honour it. I don't think it would have gone the same way if the bot offered to sell them a Boeing 777 for $10.
Yeah, I always found it weird how chatbots were basically a less efficient and less reliable way to access data that's already on the website, yet all the companies were racing to get one. People kept telling me that I'm in the minority in being able to find information on a webpage, but I suspect the sort of people who are too dumb to do that aren't going to have much better luck dealing with the quirks and eccentricities of a chatbot either.
You in turn are overestimating how much effort is required for an established bot farm to add a platform to their system.
I used to see that shit decades ago in the phpBB days: you'd get accounts signing up to a board with 20 active users just to post climate change denialist articles, even though the website itself had nothing to do with climate change. (Looking back on it now, the oil lobby was probably the first big user of internet forum astroturfing, but somehow nothing ever came of it...)
It can't, but that didn't stop a bunch of gushing articles a while back about how it had an Elo of 2400 and other such nonsense. It turns out you could get it to an Elo of 2400 under a very, very specific set of circumstances, which included correcting it every time it hallucinated pieces or attempted an illegal move.
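For what it's worth, that "correcting illegal moves" guard rail is trivial to bolt on, which is what makes the headline number so misleading. A sketch of what such a harness presumably does, using the python-chess library (the helper name and the retry behaviour are my assumptions, not anything from the articles):

    import chess  # pip install python-chess

    board = chess.Board()

    def apply_model_move(san: str) -> bool:
        # Hypothetical helper: accept a model-suggested move in SAN (e.g. "Nf3")
        # only if it's legal in the current position. A benchmark harness would
        # re-prompt the model on failure instead of scoring it as a loss.
        try:
            board.push_san(san)  # raises ValueError on illegal/unparseable moves
            return True
        except ValueError:
            return False

The chess engine it's playing against never gets that kind of do-over, which is exactly the asymmetry being hidden.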
The only Southern Baptists I know were really into demonstrative shows of family unity at church, but would frequently get into massive screaming matches where they'd kick their teenage kids out of the house. They're divorced now.
I hate this analogy. As a throwaway whimsical quip it'd be fine, but it's specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it's lowered my tolerance for it as a topic even if you did intend it flippantly.
What's hilarious/sad is the response to this article over on Reddit's "singularity" sub, in which all the top comments are people who've obviously never got all the way through a research paper in their lives, all trashing Apple and claiming its researchers don't understand AI or "reasoning". It's a weird cult.
What's the associated system instruction set to? If you're using the API it won't give you the standard Google Gemini Assistant system instructions, and LLMs are prone to going off the rails very quickly if not given proper instructions up front, since at heart they're essentially just "predict the next word" functions.
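Setting one yourself is cheap insurance. With the google-generativeai Python SDK it looks roughly like this (a sketch: the model name, API key placeholder, and instruction text are just examples, and SDK details may have shifted since):

    import google.generativeai as genai  # pip install google-generativeai

    genai.configure(api_key="...")  # your key here

    model = genai.GenerativeModel(
        "gemini-1.5-flash",
        system_instruction="You are a helpful, concise assistant. "
                           "Say you don't know rather than guessing.",
    )
    print(model.generate_content("Why is the sky blue?").text)

Without something in that slot, the raw model has far less to anchor on than the consumer product does.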
What's frustrating to me is that there are a lot of people who fervently believe their favourite model can think and reason like a sentient being, and whenever something like this comes up it just gets handwaved away with "wrong model", "bad prompting", "just wait for the next version", "poisoned data", etc., etc...
Yeah, it was never not going to happen. Shareholders demand unending year-on-year growth at all costs, forever, until everything is shit.