  • The majority of people right now are fairly out of touch with the actual capabilities of modern models.

    There's a combination of the tech learning curve on the human side, plus an amplification of stories about the 0.5% most extreme failure cases by a press corps desperate to showcase how shitty the technology they're terrified will take their jobs is.

    There's some wild stuff most people just haven't seen.

  • With the advances in AI, this is no longer as reliable an option, and it will quickly become less so.

    But absolutely, letting someone know where you're going, along with as many identifying details as possible about who you're meeting, is wise. As is having them check in on you by a certain time afterward.

  • Before video games we were blaming rock music and Marilyn Manson for violence.

    Marilyn Manson's first song was released in 1992.

    Video games were already being blamed for violence by then, and there was even a congressional hearing on the topic of video games and violence in 1993-94.

  • You're kind of missing the point. The problem doesn't seem to be fundamental to AI alone.

    Much like how people were so sure that failing the transparent-box variations of theory-of-mind problems was an 'AI' problem, until researchers finally gave those same problems to humans and half of them got them wrong too.

    We saw something similar with vision models years ago: once the models finally got representative enough, they were able to successfully model and predict previously unknown optical illusions in humans too.

    One of the issues with AI is regression to the mean of the training data, and the limited effectiveness of fine-tuning at biasing away from it. So whenever you see a behavior in AI that's also present in the training set, it becomes murky just how much of the problem is inherent to the network architecture and how much is poor isolation from the samples in the training data that exhibit those issues.

    There's an entire subreddit dedicated to people who 'ate The Onion,' for example. A model trained on social media data is going to see plenty of examples of people treating The Onion as an authoritative source and reacting to it. So when Gemini cites The Onion in a search summary, is the network architecture doing something uniquely 'AI', or is the model extending behaviors already present in the training data?

    While there are mechanical reasons confabulations occur, there are also data-side reasons that arise from human deficiencies (a toy sketch of the data side follows).
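
    To make that concrete, here's a toy sketch (my own illustration, not from any of the research above; the 15% flip rate and cluster positions are made up) showing that a model trained on labels containing systematic human errors will reproduce roughly those errors on clean inputs:

    ```python
    # Toy demo: train on labels where 15% of one class has been flipped
    # (think: satire treated as real news in a web corpus) and watch the
    # trained model reproduce the error on genuinely clean points.
    import numpy as np

    rng = np.random.default_rng(0)

    # Two well-separated 1-D clusters: class 0 near -2, class 1 near +2.
    X = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
    y_train = np.concatenate([np.zeros(500), np.ones(500)])

    # Corrupt the training labels: flip 15% of class 0 to class 1.
    y_train[:500][rng.random(500) < 0.15] = 1

    # Plain logistic regression fit by gradient descent.
    w, b = 0.0, 0.0
    for _ in range(5000):
        p = 1 / (1 + np.exp(-(w * X + b)))
        w -= 0.1 * np.mean((p - y_train) * X)
        b -= 0.1 * np.mean(p - y_train)

    # The decision boundary shifts toward the corrupted labels, so the
    # model now calls a chunk of genuinely class-0 points class 1.
    p0 = 1 / (1 + np.exp(-(w * X[:500] + b)))
    print(f"true class-0 points predicted as class 1: {(p0 > 0.5).mean():.1%}")
    ```

    Nothing about the model architecture changes between a clean run and a corrupted run; only the data does, and the mistake rate follows the data.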

  • It's beginning to look like Anthropic's recent interpretability research didn't just uncover a "Golden Gate feature" in their production model, but some kind of "sensations related to the Golden Gate" feature.

    I'm excited to see what further generative exploration of the model, with that feature vector maximized, ends up showing.

    I have a suspicion it's the kind of thing that's going to blow minds as it becomes clearer (rough sketch of the mechanics below).
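
    To give a flavor of what "that feature vector maximized" means mechanically, here's a minimal sketch of activation steering, with stand-ins for everything Anthropic-specific: GPT-2 instead of Claude, layer 8 picked arbitrarily, and a random unit vector in place of the actual sparse-autoencoder feature direction. It illustrates the idea, not their pipeline:

    ```python
    # Minimal activation-steering sketch: add a feature direction to the
    # residual stream at one layer during generation and watch the output shift.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    # Stand-in for an SAE feature direction (assumption: random unit vector).
    direction = torch.randn(model.config.n_embd)
    direction = direction / direction.norm()
    alpha = 10.0  # steering strength; "maximizing" the feature pushes this up

    def steer(module, inputs, output):
        # GPT-2 blocks return a tuple whose first element is the hidden states;
        # add the scaled direction to every position in the residual stream.
        return (output[0] + alpha * direction,) + output[1:]

    handle = model.transformer.h[8].register_forward_hook(steer)

    ids = tok("The best place to visit is", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**ids, max_new_tokens=30, do_sample=True)
    print(tok.decode(out[0], skip_special_tokens=True))

    handle.remove()  # detach the hook to get the unsteered model back
    ```

    With a real feature direction, cranking up alpha is what produced "Golden Gate Claude"; with the random stand-in here you'd just expect the completions to drift increasingly off-distribution.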

    Nope, but there's a whole thread of people talking about how LLMs can't tell what's true or not, while themselves believing it is, which is deliciously ironic.

    It seems like figuring out what's bullshit on the Internet is an everyone problem.

  • This was one of the few things that Lucretius was very wrong about in De Rerum Natura.

    He nailed survival of the fittest, quantized light, and different-mass objects falling at the same rate in a vacuum.

    But the Epicurean cosmology was pretty bad: he suggested the moon and the sun were both roughly the size they appear in the sky.

    Can't get them all right.

  • If you want an unsettling thing to think about, look into Calhoun's rats.

    Social media has essentially collapsed the functional distance between us, overcrowding everyone.

    You could be on a ranch with no one around for miles, and yet have hundreds if not thousands of other people directly interacting with you, giving and competing for dopamine hits.

    And just like the rats we now have people not leaving their domiciles, being apathetic, hedonistic, etc. We're mentally falling apart because we're just too overcrowded.

    "Tankie!" "Fascist!" "Heathen!" "Religious nut!" "Zionist!" "Antisemite!"

    (Eventually the rats end up eating each other.)

    Yet here we both are. That dopamine drip sure is nice...

    Don't forget to like and subscribe!

  • A lot of victim blaming in this thread.

    I don't agree with their theological views, and I don't love that indoctrination is so often tied to humanitarian efforts.

    But these are people who were trying to help children in the middle of the second-worst humanitarian crisis in the world right now, one that very, very few people are paying much attention to as the other dominates news cycles and most of the Western world has just written off Africa as a whole (oops). (Also, I think this may be the only story about Haiti on Lemmy in recent history.)

    They didn't 'deserve' to be brutally murdered for sticking around and not abandoning the children's schools and homes they had spent the past decades cultivating.

    They did a lot more to help people than I ever have, even if a key factor in their doing so was what I might consider delusional thinking.

    And so even if I'm not a fan of some aspects of their lives, I respect what they did do, and think it's a bit fucked up to be making light of their deaths.

  • The level of detail in Helldivers 2 is insane for the type of game and company size.

    Deformable terrain and buildings, enemy animations when you shoot off different limbs and they keep coming toward you, your cape burning off more and more as you use your jetpack, etc.

    Call of Duty has 3,000 devs working on their titles.

    Arrowhead has around 100 employees total.

    I very much believe this game took that long to make with a team that size, and it shows; that's a large part of why it's been so successful.

  • "We could be using AI to predict energy usage and reduce our dependence on fossil fuels"

    That's happening.

    "or help in discovering new protein folds"

    That too.

    There have always been barnacles on the ship of progress. That doesn't mean it's only barnacles.