
  • Hm, good point. I generally go by feeling, from an "English as an Nth language" point of view... and my subjective feeling is that "snuck" has more of a "participle" meaning, while "sneaked" has more of a "past tense" meaning.

    According to Google's AI Overview, there might also be some EN-US vs EN-GB variation at play:

    "Snuck" is an irregular past tense: It's an alternative form that has gained widespread acceptance, especially in North American English.

    "Snuck" is sometimes considered nonstandard in British English: While it's increasingly common in British English, it's still often seen as nonstandard in formal writing.

    That would match the Wiktionary entry: https://en.m.wiktionary.org/wiki/sneaked

  • Well, technically... we have an example in modern Spain of an (almost) peaceful and willing transition without abdication:

    • Franco was a dictator
    • He appointed the future King to follow in his footsteps
    • Right after Franco died, the King did a 180 and facilitated a democratic constitutional referendum
    • The majority approved a democratic constitution, leaving the executive power split in two: the King remains the leader of the military (in times of war, and mostly in name otherwise), while an elected President leads the rest.

    Other than a failed coup attempt by a faction of the military that wanted to go back to the previous system, it was a reasonably peaceful transition from full dictatorship to a "parliamentary monarchy".

    It can be done, if people are willing.

    (PS: an abdication did come much later, over some not-fully-transparent money deals and tax evasion schemes, leaving his son as the new King.)

  • It reads like it was written by AI: some standard keywords, stock phrases, an overall sentiment, and a few out-of-style words that sneaked in.

  • I doubt it's been fed text about "bergro", "parava", and "rortx"; this looks like basic reasoning to me:

    For the sake of completeness, this is qwen3:1.7b running on Ollama on a smartphone. Its reasoning is more convoluted (and slow), yet the conclusion is the same:

    If all bergro are rortx, and all parava are rortx, are all rortx parava?

    Answer: No, not all rortx are parava. The premises do not establish a relationship between bergro and parava, so rortx could include elements from both groups.
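
    For anyone who wants to check the logic rather than the model: here is a minimal sketch of the same syllogism as set membership. The element names are made up, purely for illustration.

    ```python
    # Toy model of the syllogism: membership in the made-up categories as Python sets.
    bergro = {"b1", "b2"}
    parava = {"p1", "p2"}
    rortx = bergro | parava  # "all bergro are rortx" and "all parava are rortx"

    # "Are all rortx parava?" is a subset question: is rortx contained in parava?
    print(rortx <= parava)  # False -- b1 and b2 are rortx but not parava
    ```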

  • "AI" has been a buzzword basically forever, it's a moving target of "simulates some human behavior". Every time it does that, we call it an "algorithm" and move the goalpost for "true AI".

    I don't know if we'll ever get AGI, or even want to, or be able to tell if we end up in a post-AGI world. Right now, "AI" stands for something between LLMs and agents with an LLM core. Agents benefit from MCP, so that's good for AI Agents.

    We can offload some basic reasoning tasks to an LLM agent; MCP connectors allow them to interact with other services, even other agents. A lot of knowledge is locked in the deep web and in corporate knowledge bases. The way to access it safely will be through agents deciding which knowledge to reveal. MCP is aiming to become the new web protocol for "AI"s, no more, no less.
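
    To give an idea of how small the moving parts are: a minimal tool-server sketch, assuming the official `mcp` Python SDK and its FastMCP helper. The "knowledge base" tool is a made-up example, not a real API.

    ```python
    # Minimal MCP tool server sketch -- assumes the official `mcp` Python SDK.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("internal-kb")

    @mcp.tool()
    def lookup(term: str) -> str:
        """Return the internal knowledge-base entry for a term (made-up data)."""
        kb = {"mcp": "Model Context Protocol: a standard way to expose tools to LLM agents."}
        return kb.get(term.lower(), "no entry")

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default; a connected agent decides when to call lookup()
    ```

    An agent connected to this server sees `lookup` as a callable tool and decides on its own when to invoke it, which is exactly where the "deciding which knowledge to reveal" part has to live.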

    Some careless people will get burned, the rest will be fine.

  • I feel like a better solution is to get an AI SO. Shape them into whatever you like, don't forget it's still an AI, and get whatever comfort you need in the moment.

    You can even have several at once.

  • I want an eye replacement with a few extra opsins, and some EM sense like what birds have. Eternal life could be nice too, would make a lot of people reconsider their long term plans.

  • Popper's paradox

    The only way for tolerance to exist is to not tolerate intolerance.

  • Necessary reminder: [image]

  • The connectors are still optional.

    Haphazard code is not a new thing. Some statistics claim that almost 50% of "vibe coded" websites have security flaws. It's not much different from the old "12345" password, or the "qwerty" one (not naming names, but I've known people using it on government infrastructure), or the "who'd want to hack us?" attitude.

    MCP is the right step forward; there's nothing wrong with it in itself.

    People disregarding basic security practices... will suffer, as always... and I don't really see anything wrong with that either. Too bad for those forced to rely on them, but that's a legislative and regulatory issue; vote accordingly.

    I would still be extremely hesitant to enable any MCP connector on non-local model instances. People need to push harder for local and on-prem AI; it's the only sane way forward.

  • One of the worst possible examples ever: Klarna is a payment processor; people don't call their bank to get the same answer the system is already giving them, they call to negotiate something about their money. AIs are at a troubleshooting level, at best capable of some very basic negotiation, nowhere near dealing with people actually concerned about their money... much less in 2023.

    Seems like Klarna fell hook, line, and sinker for the hype. Tough luck; you need to know the limits.

  • Randomly obfuscated database: you don't get exactly the same data, and most of the data is lost, but you can sometimes get something similar to the original, if you manage to stumble upon the right prompt.

  • It's going to be funnier: imagine throwing tons of data at an LLM; most of it will get abstracted and grouped, much will be extractable indirectly, some verbatim... and any piece of it might be a hallucination, no guarantees! 😅
    Courts will have a field day with that.

  • That's why AI companies have been giving out generic chatbots for free but charging for training domain-specific ones. People paying to use the generic ones is just the tip of the iceberg.

    The future is going to be local or on-prem LLMs, fine-tuned on domain knowledge, most likely multiple ones per business or user. It is estimated that businesses hold orders of magnitude more knowledge than what has been available for AI training. It will also be interesting to see what kind of exfiltration becomes possible when one of those internal LLMs gets leaked.

  • All of them. The moment they summarize results, they automatically filter out all the chaff. That doesn't mean what's left is necessarily true, just like publishing a paper doesn't mean it wasn't p-hacked, but all the boilerplate used for generating content and SEO is gone.

    From Google's AI Overview all the way to chatbots in "research" mode or AI agents, they return the original "bullet points" the content was generated from.

  • Can you elaborate? It does match my personal experience, and I've been on both ends of the trash flinging.

  • A lot of people have been working tedious and repetitive "filler" jobs.

    • Computers replaced a lot of typists, drafters, copyists, calculators, filers, clerks, etc.
    • LLMs are replacing receptionists, secretaries, call center workers, translators, slop "artists", etc.
    • AI Agents are in the process of replacing aides, intermediate administrative personnel, interns, assistants, analysts, spammers, salespeople, basic customer support, HR personnel, etc.

    In the near future, AI-controlled robots are going to start replacing low-skilled labor, then intermediate-skilled labor.

    "AI" has the meaning of machines replacing what used to require humans to perform. It's a moving goalpost: once one is achieved, we call it an "algorithm" and move to the next one, and again, and again.

    Right now, LLMs are at the core of most AI, but the label has already moved past that to "AI Agents", which is a fancy way of saying "a loop of an LLM and some other tools". There is already talk of moving past that too, to the next goalpost.
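
    To make "a loop of an LLM and some other tools" concrete, here is a minimal sketch; the model is mocked with a canned decision function and the single tool is a toy calculator, so every name here is made up rather than taken from any real agent framework.

    ```python
    # Minimal "agent loop" sketch: the model proposes either a tool call or a
    # final answer; the loop dispatches tools until an answer comes back.

    def calculator(expression: str) -> str:  # one toy "tool"
        # eval() on untrusted input is unsafe; fine for this fixed demo expression.
        return str(eval(expression, {"__builtins__": {}}))

    TOOLS = {"calculator": calculator}

    def mock_llm(history: list[str]) -> dict:
        # Stand-in for a real model call: use the tool once, then answer.
        if not any(h.startswith("tool result") for h in history):
            return {"action": "tool", "tool": "calculator", "input": "6 * 7"}
        return {"action": "final", "answer": f"The answer is {history[-1].split()[-1]}."}

    def agent(question: str) -> str:
        history = [question]
        while True:  # the "loop" part
            step = mock_llm(history)
            if step["action"] == "final":
                return step["answer"]
            result = TOOLS[step["tool"]](step["input"])
            history.append(f"tool result {result}")

    print(agent("What is 6 * 7?"))  # -> The answer is 42.
    ```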

  • Whose LLMs?

    Content farms and SEO experts have been polluting search results for decades. Search LLMs have leveled the playing field: any trash a content farm LLM can spit out, a search LLM can filter out.

    Basically, this: [image]

  • Ollama has the best GDPR compliance: my hardware, my data.
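
    For the curious: with a default local Ollama install, "my hardware, my data" is one HTTP call to localhost. A minimal sketch, assuming Ollama's standard REST endpoint on port 11434 and an already-pulled model (the model name is just an example):

    ```python
    # Query a local Ollama instance -- nothing leaves the machine.
    import json
    import urllib.request

    payload = json.dumps({
        "model": "qwen3:1.7b",  # example; any locally pulled model works
        "prompt": "If all bergro are rortx, and all parava are rortx, are all rortx parava?",
        "stream": False,
    }).encode()

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["response"])
    ```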