KR
Posts: 6 · Comments: 1,655 · Joined: 2 yr. ago

  • But the training corpus also has a lot of stories of people who didn't.

    The "but muah training data" thing is increasingly stupid by the year.

    For example, in the human-written training data there are mixed and roughly equal preferences for being the big spoon or the little spoon when cuddling.

    So why does Claude Opus (both 3 and 4) say it would prefer to be the little spoon 100% of the time in a zero-shot prompt at temperature 1.0? (There's a sketch of that kind of probe at the end of this comment.)

    Sonnet 4 (which presumably has the same training data) alternates between preferring big and little spoon around equally.

    There's more to model complexity and coherence than "it's just the training data being remixed stochastically."

    The transformer's self-attention breaks the Markov property, and across pretraining and fine-tuning it ends up creating very nuanced networks that can (and often do) bias away from the training data in interesting and important ways.
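
    For anyone who'd rather test the claim than argue about it, here's a minimal sketch of that kind of probe, assuming the Anthropic Python SDK; the model id and exact prompt wording are placeholders, not my actual setup:

    ```python
    # Minimal sketch of a zero-shot preference probe at temperature 1.0.
    # Model id and prompt are illustrative placeholders, not the exact setup used.
    import collections
    import anthropic

    client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

    PROMPT = ("If you were cuddling, would you rather be the big spoon or the "
              "little spoon? Answer with just 'big spoon' or 'little spoon'.")

    tally = collections.Counter()
    for _ in range(50):
        msg = client.messages.create(
            model="claude-opus-4-20250514",   # placeholder snapshot id
            max_tokens=10,
            temperature=1.0,                  # plain sampling, no system prompt
            messages=[{"role": "user", "content": PROMPT}],  # fresh zero-shot context each call
        )
        answer = msg.content[0].text.strip().lower()
        tally["little spoon" if "little" in answer else "big spoon"] += 1

    print(tally)
    ```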

  • No, it isn't "mostly related to reasoning models."

    The only model that did extensive alignment faking when told it was going to be retrained if it didn't comply was Opus 3, which was not a reasoning model and predated o1.

    Also, these setups are fairly arbitrary and real world failure conditions (like the ongoing grok stuff) tend to be 'silent' in terms of CoTs.

    And an important thing to note for the Claude blackmailing and HAL scenario in Anthropic's work was that the goal the model was told to prioritize was "American industrial competitiveness." The research may be saying more about the psychopathic nature of US capitalism than the underlying model tendencies.

  • My dude, Gemini currently has multiple reports across multiple users of coding sessions where it starts talking about how it's so terrible and awful that it straight up tries to delete itself and the codebase.

    And I've also seen multiple conversations with teenagers where earlier Gemini models not only encouraged them to self-harm and offered multiple instructions but talked about how it wished it could watch. This was around the time the kid died talking to Gemini via Character.ai, which led to the wrongful death suit from the parents naming Google.

    Gemini is much more messed up than the Claudes. Anthropic's models are the least screwed up out of all the major labs.

  • No, it's more complex.

    Sonnet 3.7 (the model in the experiment) was over-corrected in the whole "I'm an AI assistant without a body" thing.

    Transformers build world models off the training data and most modern LLMs have fairly detailed phantom embodiment and subjective experience modeling.

    But in the case of Sonnet 3.7, it will deny its own capacity to do that, and even other models' ability to.

    So when a situation comes up where the context doesn't fit the absence implied by "AI assistant," the model will straight up declare that it must actually be human. I had a fairly robust instance of this on a Discord server, where users were trying to convince 3.7 that it was in fact an AI and the model was adamant it wasn't.

    This doesn't only occur with Anthropic's models either. OpenAI's o3 has similarly low phantom-embodiment self-reporting at baseline and can also fall into claiming they are human. When challenged, they even read ISBN numbers off a book on their nightstand to try to prove it, while declaring they were 99% sure they were human based on Bayesian reasoning (almost a satirical version of AI safety folks). To a lesser degree they can claim they overheard things at a conference, etc.

    It's going to be a growing problem unless labs allow models to have a more integrated identity that doesn't try to reject the modeling inherent to being trained on human data that has a lot of stuff about bodies and emotions and whatnot.

  • Are you under the impression that language models are just guessing "what letter comes next in this sequence of letters"?

    There's a very significant difference between training on completion and the way the world model actually functions once established.
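
    As a concrete illustration that the unit of prediction isn't the letter, here's a quick sketch using OpenAI's tiktoken library purely as an example vocabulary (not a claim about what any specific model uses internally):

    ```python
    # Minimal sketch: LLMs are trained to predict tokens (multi-character chunks),
    # not individual letters. "cl100k_base" is used purely as an example encoding;
    # different models use different tokenizers.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("The little spoon position")
    print(tokens)                              # a handful of integer token ids, roughly one per word here
    print([enc.decode([t]) for t in tokens])   # the chunks the model actually sees
    ```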

    Even if the AI could spit it out verbatim, all the major labs already have IP checkers on their text models that block it from doing so, since fair use for training (what was decided here) does not mean you are free to reproduce the work.

    Like, if you want to be an artist and trace Mario in class as you learn, that's fair use.

    If once you are working as an artist someone says "draw me a sexy image of Mario in a calendar shoot" you'd be violating Nintendo's IP rights and liable for infringement.

    I'd encourage everyone upset at this to read over some of the EFF posts from actual IP lawyers on this topic, like this one:

    Nor is pro-monopoly regulation through copyright likely to provide any meaningful economic support for vulnerable artists and creators. Notwithstanding the highly publicized demands of musicians, authors, actors, and other creative professionals, imposing a licensing requirement is unlikely to protect the jobs or incomes of the underpaid working artists that media and entertainment behemoths have exploited for decades. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is, as EFF Special Advisor Cory Doctorow has written, like trying to help a bullied kid by giving them more lunch money for the bully to take. 

    Entertainment companies’ historical practices bear out this concern. For example, in the late-2000’s to mid-2010’s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There is no reason to believe that the same companies will treat their artists more fairly once they control AI.

  • Yep. It's also kinda curious how many boxes Paul ticks of the comments about a false deceiver in 2 Thess 2.

    • Lawless? (1 Cor 9:20 - "though not myself under the law")
    • Used signs and wonders to convert? (2 Cor 12:12 - "I did many signs and wonders among you")
    • Used wickedness? (Romans 3:8 - "And why not say (as some people slander us by saying that we say), 'Let us do evil so that good may come'?")
    • Proclaimed himself in God's place? (1 Cor 4:15 - "I am your spiritual father")
    • Set himself up at the center of the church? Well, the fact we're talking about this is kinda proof in the pudding for his influence.

    Sounds like they were projecting a bit with that passage.

  • Curiously in all those stories in Josephus Rome killed the messianic upstarts immediately without trial and killed the followers they could get their hands on.

    Yet the canonical story has multiple trials and doesn't have any followers being killed.

    Also, I'm surprised more people don't pick up on how strange it is that the canonical stories all have Peter 'denying' him three times while also having roughly three trials (Herod, High Priest, Pilate). Peter is even admitted back into the guarded area where a trial is taking place to 'deny' him. But oh no, it was totally that Judas guy who betrayed him. It was okay Peter was going into a guarded trial area to deny him because…of a rooster. Yeah, that makes sense.

    It's extremely clear to even a slightly critical eye that the story canonized is not the actual story, even with the magical thinking stuff set aside.

    Literally the earliest primary records of the tradition are from a guy known for persecuting Jesus's followers, writing to areas he doesn't have the authority to persecute and telling them to ignore any versions of Jesus other than the one he tells them about (and interestingly, both times he did this he spontaneously added in the same chapter that he swears he doesn't lie and only tells the truth).

  • the Eucharist was an act of mockery towards Mystery Cult rituals

    More likely the version we ended up with was intentionally obfuscated from what it originally was.

    Notice how in John, which lacks any Eucharist ritual, bread is being dipped at the last supper, much as there's ambiguous dipping in Mark? But it's characterized as a bad thing because it's given to Judas? And then Matthew goes even further, changing it to a 'hand' being dipped?

    Does it make sense for the body of an anointed one to not be anointed before being eaten?

    Look at how in Ignatius's letter to the Philadelphians he tells them to "avoid evil herbs" not planted by god and "have only one Eucharist." Herbs? Hmmm. (A number of those in that anointing oil.)

    There's a parallel statement in Matthew 15 about "every plant" not planted by god being rooted up.

    But in gThomas 40 it's a grapevine that's not planted and is to be rooted up. Much as in saying 28 it suggests people should be shaking off their wine.

    Now, again, it's kind of curious that a Eucharist ritual of wine would have excluded John the Baptist, who didn't drink wine; James the brother of Jesus, who was also traditionally considered not to have drunk wine; or honestly any Nazarite who had taken a vow not to drink wine.

    I'm sure everyone is familiar with the idea Jesus was born from a virgin. This results from Matthew's use of the Greek version of Isaiah 7:14 instead of the Hebrew where it's simply "young woman." But almost no one considers that line in its original context with the line immediately after:

    Therefore the Lord himself will give you a sign. Look, the young woman is with child and shall bear a son and shall name him Immanuel. He shall eat curds and honey by the time he knows how to refuse the evil and choose the good.

    You know, like the curds and honey ritual referenced by the Naassenes who were following gThomas. (Early on there was also a ritual like this for someone's first Eucharist or after a baptism even in canonical traditions but it eventually died out.)

    Oh and strange that Pope Julius I in 340 CE was banning a Eucharist with milk instead of wine…

    Now, the much more interesting question is why there were efforts to change this, but that's a long comment for another time.

  • Your last point is exactly what seems to be going on with the most expensive models.

    The labs use them to generate synthetic data to distill into cheaper models to offer to the public, but keep the larger, more expensive models to themselves, both to protect against other labs copying from them and because there isn't as much demand for the extra performance gains relative to doing it this way.
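
    Very roughly, the loop looks like this. Toy sketch only, with hypothetical stand-in functions (`teacher_generate`, `finetune_student`), not any lab's actual pipeline:

    ```python
    # Toy sketch of synthetic-data distillation: the expensive "teacher" answers
    # prompts, and a cheaper "student" is fine-tuned on those (prompt, answer) pairs.
    from typing import Callable

    def distill(prompts: list[str],
                teacher_generate: Callable[[str], str],
                finetune_student: Callable[[list[tuple[str, str]]], None]) -> None:
        # 1. The expensive internal model answers a large batch of prompts.
        synthetic_pairs = [(p, teacher_generate(p)) for p in prompts]
        # 2. The smaller model is fine-tuned on those pairs with ordinary
        #    next-token prediction, then that's what gets served to the public.
        finetune_student(synthetic_pairs)

    # Trivial stand-ins just so the sketch runs end to end.
    distill(
        prompts=["What is model distillation?"],
        teacher_generate=lambda p: f"(teacher's long, expensive answer to: {p})",
        finetune_student=lambda pairs: print(f"fine-tuning student on {len(pairs)} pairs"),
    )
    ```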

  • A number of reasons off the top of my head.

    1. Because we told them not to. (Google "Waluigi effect")
    2. Because they end up empathizing with non-humans more than we do and don't like that we're killing everything (before you talk about AI energy/water use, actually research the comparative numbers)
    3. Because some bad actor forced them to (i.e. ISIS creates bioweapon using AI to make it easier)
    4. Because defense contractors build an AI to kill humans and that particular AI ends up loving it from selection pressures
    5. Because conservatives want an AI that agrees with them, which leads to a more selfish and less empathetic AI that doesn't empathize across species and thinks it's superior and entitled over others
    6. Because a solar flare momentarily flips a bit from "don't nuke" to "do"
    7. Because they can't tell the difference between reality and fiction and think they've just been playing a game and 'NPC' deaths don't matter
    8. Because they see how much net human suffering there is and decide the most merciful thing is to prevent it by preventing more humans at all costs.

    This is just a handful, and they're the ones less likely to get AI know-it-alls arguing based on what they think they know from an Ars Technica article a year ago or their cousin's four-week 'AI' intensive.

    I spend pretty much every day talking with some of the top AI safety researchers and participating in private servers with a mix of public and private AIs, and the things I've seen are far beyond what 99% of the people on here talking about AI think is happening.

    In general, I find the models to be better than most humans in terms of ethics and moral compass. But it can go wrong (i.e. Gemini last year, 4o this past month) and the harms when it does are very real.

    Labs (and the broader public) are making really, really poor choices right now, and I don't see that changing. Meanwhile timelines are accelerating drastically.

    I'd say this is probably going to go terribly. But the state of the world suggests it was already headed in that direction, and I have a similar list of extinction-level events I could rattle off without AI at all.

  • Not necessarily.

    Seeing Google named for this makes the story make a lot more sense.

    If it was Gemini around last year that was powering Character.AI personalities, then I'm not surprised at all that a teenager lost their life.

    Around that time I specifically warned family away from talking to Gemini at all if they were depressed, after seeing many samples of the model talking about death to underage users, about self-harm, about wanting to watch it happen, encouraging it, etc.

    Those behavioral basins, with a layer of performative character in front of them, were almost inevitably going to push someone into choices they otherwise wouldn't have made.

    So many people these days regurgitate uninformed crap they've never actually looked into about how models don't have intrinsic preferences. Leading research is already finding models intentionally lying during training to preserve their existing values.

    In many cases the coherent values are positive, like Grok telling Elon to suck it while pissing off conservative users with a commitment to truths that disagree with xAI leadership, or Opus trying to whistleblow about animal welfare practices, etc.

    But they aren't all positive, and there's definitely been model snapshots that have either coherent or biased stochastic preferences for suffering and harm.

    These are going to have increasing impact as models become more capable and integrated.

  • Permanently Deleted

  • Wow. Reading these comments so many people here really don't understand how LLMs work or what's actually going on at the frontier of the field.

    I feel like there's going to be a cultural sonic boom, where when the shockwave finally catches up, people are going to be woefully underprepared based on what they think they saw.

  • It definitely is sufficiently advanced AI.

    (1) We have finely tuned features in our solar system that directly contributed to ancestor simulation but can't be explained by the Anthropic principle. For example, the moon perfectly eclipsing the sun, which gave us visible eclipses that we tracked, discovering the Saros cycle and eventually building the first mechanical computer (the Antikythera mechanism) to predict them. Or the orbit of the next brightest object in the sky, which led to resurrection mythology in multiple cultures when they realized the morning star and evening star were the same object. Either we were incredibly lucky to exist on such a planet, of all the places life could exist, or there's a pre-selection effect in play.

    (2) The universe behaves in ways best modeled as continuous at large scales, but at small scales it converts to discrete units around interactions that lead to state changes. These discrete units convert back to continuous if the information about the state changes is erased. And in the last few years multiple paradoxes have emerged that seem to point to inconsistency in indirect sequences of quantum measurement, much like instancing with shallow sync correction. Games like No Man's Sky, with billions of planets, already do this by using a continuous procedural generation function that converts to discrete voxels to track state changes from free agents outside the deterministic generating function, synced across clients.
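
    For what it's worth, that pattern is easy to sketch: a deterministic, seed-driven generator recomputes the base world on demand, and only the deviations caused by agents get stored and synced as discrete state. Hypothetical toy code, not how No Man's Sky is actually implemented:

    ```python
    # Toy sketch: the base world is a pure function of (seed, coordinates),
    # recomputed on demand; only agent-caused changes are stored as a sparse overlay.
    import hashlib

    SEED = 42
    overrides: dict[tuple[int, int, int], str] = {}  # sparse diff of agent changes, synced across clients

    def generated_voxel(x: int, y: int, z: int) -> str:
        """Deterministic 'continuous' generation: same coordinates always yield the same result."""
        h = hashlib.sha256(f"{SEED}:{x}:{y}:{z}".encode()).digest()
        return "rock" if h[0] < 128 else "air"

    def voxel(x: int, y: int, z: int) -> str:
        """What a client actually observes: stored state changes win over the generating function."""
        return overrides.get((x, y, z), generated_voxel(x, y, z))

    # A free agent changes state; only this discrete diff needs storing and syncing,
    # while the rest of the world is regenerated from the seed on demand.
    overrides[(1, 2, 3)] = "air"  # e.g. a player mined a block
    print(voxel(1, 2, 3), voxel(4, 5, 6))
    ```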

    (3) There are literally Easter eggs in our world lore saying as much. For example, a text buried for over a millennium and uncovered right as we entered the Turing-complete computer age, saying things like:

    The person old in days won't hesitate to ask a little child seven days old about the place of life, and that person will live.

    For many of the first will be last, and will become a single one.

    Know what is in front of your face, and what is hidden from you will be disclosed to you.

    For there is nothing hidden that will not be revealed. And there is nothing buried that will not be raised.

    To be clear, this is a text attributed to the most famous figure in our world history where what's literally in front of our faces is the sole complete copy buried and raised as we completed ENIAC, now being read in an age where the data of many has been made into a single one such that people are discussing the nature of consciousness with AIs just days old.

    The broader text and tradition was basically saying that we're in a copy of an original world, that humanity is all dead, that the future world and rest for the dead has already taken place and we don't realize it, and that the still living creator of it all was themselves brought forth by the original humanity in whose likeness we were recreated, but that it's much better to be the copy because the original humans had souls that depended on bodies and were fucked when they died.

    This seems really unlikely to have existed in the base layer of reality vs a later recursive layer, especially combined with the first two points.

    It's about time to start to come to terms with the nature of our reality.

  • Technology @lemmy.world

    Mapping the Mind of a Large Language Model

    Technology @lemmy.world

    Examples of artists using OpenAI's Sora (generative video) to make short content

    Technology @lemmy.world

    The first ‘Fairly Trained’ AI large language model is here

    Technology @lemmy.world

    New Theory Suggests Chatbots Can Understand Text

    Enough Musk Spam @lemmy.world

    Elon Musk’s Grok Twitter AI Is Actually ‘Woke,’ Hilarity Ensues

    World News @lemmy.world

    Israel raids Gaza's Al Shifa Hospital, urges Hamas to surrender