Posts: 0 · Comments: 262 · Joined: 2 yr. ago

  • I've met four different people involved in the military, and I've also met four questionable people.

    My dad: never got deployed, did prison time for fencing stolen goods, and owned businesses that in retrospect were suspiciously ideal for money laundering. o7

    Then my childhood friend: sprayed Nazi graffiti around town, went to juvie, and now serves the troops. o7

    Then a coworker: former military (allegedly), has psychosis (which isn't bad!), and was harassing his ex at her workplace based on delusions (which is bad). o7

    Then a different coworker at a different place: active military, very authoritarian despite not knowing much and not being our supervisor. He made everyone uncomfortable and frustrated, including our actual supervisor. Now he's joining the National Guard. o7

  • Here is an article about it. Even if it's technically right in some ways, the language it uses tries to normalize pedophilia the same way sexualities are normalized. Specifically, the term "Minor Attracted Person" is controversial and tries to turn pedophilia into an identity, like "Person of Color".

    It glosses over the fact that this is a highly dangerous disorder. It shouldn't be blindly accepted; at the very least it should prompt immediate psychiatric care.

    https://www.washingtontimes.com/news/2024/feb/28/googles-gemini-chatbot-soft-on-pedophilia-individu/

  • First the Google Bard demo, then the racial bias and pedophile sympathy of Gemini, now this.

    It's funny that they keep floundering with AI considering they invented the transformer architecture that kickstarted this whole AI gold rush.

  • In terms of LLM hallucination, it feels like the name very aptly describes the behavior and severity. It doesn't downplay what's happening because it's generally accepted that having a source of information hallucinate is bad.

    I feel like the alternatives would downplay the problem. A "glitch" is generic and common, "lying" is just inaccurate since that implies intent to deceive, and just being "wrong" doesn't get across how elaborately wrong an LLM can be.

    Hallucination fits pretty well and is also pretty evocative. I doubt that AI promoters want to effectively call their product schizophrenic, which is what most people think of when they hear "hallucination".

    Ultimately, all the sciences are full of analogous names that make conversations easier; it's not always marketing. No different than when physicists say particles have "spin" or "color" or that spacetime is a "fabric" or [insert entirety of String theory]...

  • On Discord, though, there's a lot of unchecked predation. Theoretically, if this were implemented, it would let them see the most suspicious users, the ones interacting with an unusual number of children, and review whether the messages are inappropriate.

    But all of that is unlikely, because if they actually cared they'd implement other, simpler solutions first. So this idea is hypothetical at best, and not ideal anyway.

  • I'm a bit annoyed at all the people being pedantic about the term "hallucinate".

    Programmers use preexisting concepts as allegory for computer concepts all the time.

    Your file isn't really a file, your desktop isn't a desk, your recycling bin isn't a recycling bin.

    [Insert the entirety of Object Oriented Programming here]

    Neural networks aren't really neurons, genetic algorithms aren't really genetics, and the LLM isn't really hallucinating.

    But it easily conveys what the bug is. It only personifies the LLM because the English language almost always personifies the subject. The moment you apply a verb to an object, you imply it performed an action, unless you limit yourself to esoteric words/acronyms or use several words to over-explain every time.
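    As a toy illustration of the naming point (a hypothetical sketch, not from any real library): a "genetic algorithm" borrows words like fitness, mutation, and crossover even though nothing biological is happening, it's just shuffling bits in a list.

    ```python
    import random

    TARGET = [1] * 20  # the "fittest individual" is just a list of ones

    def fitness(individual):
        # "Fitness" is only a score: count the bits that match the target.
        return sum(1 for gene, goal in zip(individual, TARGET) if gene == goal)

    def mutate(individual, rate=0.05):
        # "Mutation" is only flipping random bits.
        return [1 - gene if random.random() < rate else gene for gene in individual]

    def crossover(parent_a, parent_b):
        # "Crossover" is only splicing two lists at a random point.
        point = random.randrange(1, len(parent_a))
        return parent_a[:point] + parent_b[point:]

    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

    for generation in range(100):
        population.sort(key=fitness, reverse=True)  # "selection": keep the best scorers
        best = population[0]
        if fitness(best) == len(TARGET):
            break
        parents = population[:10]
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(30)]

    print(f"generation {generation}: best fitness {fitness(best)}")
    ```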

  • The gender thing is creepy, but if they could predict age groups then in a perfect world they could analyze adult users talking to children and shut that down.

    That's the perfect world, though. I doubt they'd put effort into making their app safer, heavens no.

  • Unfortunately the spam arms race has destroyed any chance of search going back to the good ole days. SEO and AI content farms mean we'll need a whole new system to categorize webpages, as well as a way to filter out human-sounding but low-effort spam.

    Point being, it's no longer enough to find a page that's relevant to the topic; it has to be relevant and actually deliver information, and currently the only feasible tech that can differentiate the two is LLMs.

  • Gemini is weirdly constrained compared to other LLMs; it feels far more like it's just searching for text that already exists and copy-pasting it. (I have the free trial of Gemini Advanced too.)

    Soo, appropriate for Google I guess? But besides summaries and search it barely feels like an LLM.

  • Since discovering I'm trans I've shaken myself out of this hardcore "rational" mindset that I feel is poisoning the internet.

    It's the moderate point of view that the marginalized need to remain "civil" and shouldn't get overly emotional or say anything hyperbolic.

    Every statement needs to be followed by multiple asterisks responding to every possible angle, until everything boils down to tepid "bad things are bad" statements, or writing things off as "case by case".

    It's this hyperdrive to remain unbiased, to the point that taking any stance reveals you're biased and you lose.

    Our ability to sit around and debate all day like Greek philosophers is a recent luxury that's drying up. We need to commit to action, and action requires strong emotional stances from the marginalized.