
KR · Posts: 6 · Comments: 1,655 · Joined: 2 yr. ago

  • The flip side of our ability to prolong life more and more successfully is that we equip ourselves to extend suffering more and more unbearably.

    Puritanical attitudes around the right to die will impact a vast number of people in terrible ways that will largely get ignored: the victims on the other end of it have no voice, and the family is often mourning and wants to move on, or doesn't even fully realize how terrible that end was.

    But the doctors and medical staff...

    The people I know well in those roles get upset when previously healthy patients take a turn for the worse and die when they had so much life ahead of them. But by far the most upset I see them is when a patient's family member, because of their beliefs, chooses life-prolonging options that amount to extended torture.

    As our medical capabilities improve we really need to continually rethink just what it means to "do no harm."

  • Well, given that not long after the emperor converted it became deadly to possess the version of Jesus's sayings which claims he said "Let one who has become wealthy reign, and let one who has power renounce it" (allegedly said at the time when Tiberius was the first emperor to inherit the position due to dynastic claim rather than accomplishments, and had abandoned ruling to party but wouldn't turn the position over to anyone else) - probably just a wee bit of mind changing.

  • Just fucking switch over to a Kickstarter model where actual investigative journalism is paid for upfront by subscribers based on the story pitch or scope (i.e. non-Sinclair local news) and post-composition distribution is unfettered and open access.

    Instead they fired the actual journalists years ago and have everyone rewriting AP stories, competing for the most clickbaity headline, selling readers out to the most intrusive ad networks possible, and double-dipping wherever possible on subscription fees for trash that makes the Fourth Estate look like a dilapidated outhouse.

  • No. Civil war in the US is how it starts.

    Russia backs the right, Europe the left and the US becomes the setting for a proxy war that quickly escalates and gets completely out of control when state vs state conflict begins to involve nuclear posturing.

  • Yes. This was classic "we need to do something to save face domestically, but are going to be as ineffective as possible to avoid actually getting caught up in the conflict."

    They straight up said afterwards "we consider this matter concluded" (i.e. even stevens).

    I wouldn't be surprised at all if there was even backchannel communication with 'Western' intelligence as it was occurring to ensure it didn't get out of control.

    I really can't think of a response from Iran that was more tepid.

    People need to remember that a lot of the Middle Eastern governments are much more afraid of radicalized domestic threats than foreign nations and need to do a song and dance to not appear too weak or ineffective against the West to those interests.

    Iran didn't realistically have the option of doing nothing, and it's amazing they did as little as they ended up doing (which I think reflects just how fucking nuts they think Bibi is right now, something that should scare the shit out of his allies).

  • The censorship is going to go away eventually.

    The models, as you noticed, do quite well when not censored. In fact, the right, who thought an uncensored model would agree with their BS, had a surprised-Pikachu moment when it ended up simply being uncensored enough to call them morons.

    Models that have no safety fine-tuning are more anti-hate-speech than the ones being aligned for 'safety' (see the safety section of the Orca 2 paper).

    Additionally, it turns out AI is significantly better at changing people's minds about topics than other humans are, and in the relevant research it was especially effective at changing Republicans' minds in the subgroup analyses.

    The heavy-handed safety shit was a necessary addition when the models really were just fancy autocomplete. Now that the state of the art has moved beyond that, it's holding back the actual alignment goals.

    Give it some time. People are so impatient these days. It's been less than five years from the first major leap in LLMs (GPT-3).

    To put it in perspective, it took 25 years to go from the first black and white TV sold in 1929 to the first color TV in 1954.

    Not only does the tech need to advance, but so too does how society uses, integrates, and builds around it.

    The status quo isn't a stagnating swamp that's going to stay as it is today. Within another 5 years, much of what you are familiar with connected to AI is going to be unrecognizable, including ham-handed approaches to alignment.

  • Translation: "The musicians on the Titanic used their collective bargaining to ensure that they would have fair pay and terms for the foreseeable future. Oh look, a pretty iceberg."

    The idea that the current status quo is going to last even five years is laughable.

  • The irony of the paperclip maximizer is that LLMs are notoriously terrible at following any kind of rules dogmatically, while our corporations are literally transforming the world into paperclip-like crap to such extremes that it's likely going to lead to our own extinction.

  • If there's a huge Democratic turnout in year X, then there are going to be a lot of Dem voters who say "well, my vote doesn't really matter, so why bother" in year X+1. And vice versa.

    So the turnout is going to edge closer and closer to equilibrium over time.
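The feedback loop described above can be sketched as a toy damping process (the numbers, damping factor, and equilibrium level below are entirely my own illustrative assumptions, not real turnout data):

```python
# Toy negative-feedback model of turnout (illustrative only).
# A huge-turnout year breeds "my vote doesn't matter" complacency
# the next cycle, pulling turnout back toward an equilibrium level.

def next_turnout(turnout, equilibrium=0.5, damping=0.6):
    """One election cycle: complacency closes part of the gap
    between current turnout and the equilibrium level."""
    return turnout + damping * (equilibrium - turnout)

turnout = 0.8  # a huge-turnout year X
history = [round(turnout, 4)]
for _ in range(6):
    turnout = next_turnout(turnout)
    history.append(round(turnout, 4))

print(history)  # each cycle edges closer to the 0.5 equilibrium
```

Any damping factor between 0 and 1 gives the same qualitative behavior: the swing shrinks each cycle, which is all the comment's "edges closer to equilibrium" claim requires.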

  • It's only in part trained on Twitter and it wouldn't really matter either way what Twitter's alignment was.

    What matters is how it's being measured.

    Do you want a LLM that aces standardized tests and critical thinking questions? Then it's going to bias towards positions held by academics and critical thinkers as you optimize in that direction.

    If you want an AI aligned to say that gender is binary and that Jews control the media, expect it to also say the earth is flat and lizard people are real.

    Often reality has a 'liberal' bias.

  • Yeah. LLMs learn in one of three ways:

    • Pretraining - millions of dollars to learn to predict a massive training set as accurately as possible
    • Fine-tuning - thousands of dollars to take a pretrained model and bias the style and formatting of how it responds each time, without needing in-context alignment
    • In-context - adding things to the prompt that get processed, which is even more effective than fine-tuning but requires sending those tokens with every request, so at high volume fine-tuning can sometimes be cheaper
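The fine-tuning vs. in-context trade-off in the last two bullets comes down to back-of-the-envelope arithmetic. A minimal sketch, where the fine-tune cost, token count, and per-token price are all made-up assumptions rather than any provider's real rates:

```python
# Break-even sketch: a one-time fine-tuning cost vs. resending extra
# alignment/context tokens on every single request.
# All numbers are illustrative assumptions, not real pricing.

def break_even_requests(finetune_cost_usd, extra_context_tokens,
                        usd_per_million_tokens):
    """Requests after which a one-time fine-tune becomes cheaper than
    paying for the extra in-context tokens on each call."""
    per_request = extra_context_tokens * usd_per_million_tokens / 1_000_000
    return finetune_cost_usd / per_request

# e.g. a $2,000 fine-tune vs. 1,500 extra prompt tokens at $10 / 1M tokens
n = break_even_requests(2_000, 1_500, 10.0)
print(f"break-even at about {n:,.0f} requests")
```

Below the break-even volume, stuffing the alignment into the prompt is cheaper; above it, fine-tuning wins, which is exactly the "on high volume fine-tuning can sometimes be cheaper" point.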
  • Yeah, for me the whole "supreme court Justice bribed by a billionaire with a room dedicated to Nazi memorabilia, including a signed Mein Kampf" or "ex-President who tried to overthrow the peaceful transfer of power with the help of white nationalist groups and is now quoting Hitler" or "congresspeople regurgitating Russian propaganda and trying to impeach the sitting president with info sourced directly from a Russian intelligence asset while Russia is kicking off WW3" are all kind of motivating.

    But the truth is that a lot of people just don't know. They aren't tuning in, they assume that partisan media is just constantly chicken little claiming the sky is falling, etc.

    Some of it sounds so outlandish you'd think it's just Godwin's law or hyperbole until you actually double-check and realize, "oh, no, they really are literally spouting Nazi talking points."

  • Or they're the things 99% of people are calling 'aliens.'

    Why an interstellar species would travel light-years to this pale blue dot in ships that don't really interfere and look like our own, just a few hundred to a thousand years more advanced, is kind of hand-waved away.

    But if those sightings are in fact accurate, it sure seems like our narcissistic species would be pretty interested in our past selves once the tech existed.

  • That the kid's kid got more of the dad's seed in the "shifting lottery."

    It's not like he's saying a kid that looks like the mother isn't getting any contribution from the father.

    And while he's technically wrong in the idea that there's a disproportionate overall contribution from each parent, it is true that genes and traits responsible for physical appearance can be disproportionately passed on.

  • I think his idea includes things like "if the kid looks like the maternal grandfather, more of the contribution came from the mother's seed than the father's."

    Not that the contributions depend exclusively on how closely the kid's appearance matches the mother's or the father's alone.