Posts: 6 · Comments: 1,655 · Joined: 2 yr. ago

  • "How can we promote our bottom of the barrel marketing agency?"

    "I know, let's put a random link to our dot com era website on Lemmy with no context. I hear they love advertising there. We can even secure our own username - look at that branding!! This will be great."

    "Hey intern, get the bags ready. The cash is about to start flowing in, and you better not drop a single bill or we'll get the whip again!"

  • So the paper that found that particular bit in Othello was this one: https://arxiv.org/abs/2310.07582

    Which was building off this earlier paper: https://arxiv.org/abs/2210.13382

    And then this was the work replicating it in Chess: https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation

    It's not by chance - there are literally interventions where flipping a weight or vector produces the opposite behavior (like acting as if a piece is in a different place, or switching from playing well to playing badly regardless of the previous moves). There's a rough sketch of the idea at the end of this comment.

    But it's more that it seems unlikely there's any actual 'felt' or conscious sentience in there beyond the model knowing what the abstracted pattern means in relation to the inputs and outputs. It probably is simulating some form of ego and self, but not actively experiencing it, if that makes sense.
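
    To make the intervention concrete, here's a minimal, hypothetical PyTorch sketch of the linear-probe idea from those papers. Everything in it (the shapes, the random `probe` tensor, the `flip_square` helper) is made up for illustration - the real work trains probes on actual Othello-GPT / Chess-GPT activations and patches the model's residual stream mid-forward pass rather than a standalone vector.

    ```python
    # Toy sketch of a linear-probe intervention (illustrative only).
    # Idea: a linear probe decodes "board state" from a hidden activation;
    # nudging the activation along a probe direction changes what the model
    # "believes" about a square, which is what drives the opposite behavior.
    import torch

    torch.manual_seed(0)

    d_model, n_squares, n_states = 128, 64, 3  # hidden size, 8x8 board, {empty, mine, theirs}

    # Pretend this is a residual-stream activation captured at one game position.
    h = torch.randn(d_model)

    # A "trained" probe: one (d_model -> n_states) linear classifier per square.
    # In the actual papers this is fit on activations with known board states.
    probe = torch.randn(n_squares, n_states, d_model)

    def read_board(h):
        """Decode the probe's best-guess state for every square from activation h."""
        logits = torch.einsum("scd,d->sc", probe, h)  # (n_squares, n_states)
        return logits.argmax(dim=-1)

    def flip_square(h, square, target_state, margin=20.0):
        """Push h along the probe direction for (square, target_state) just far
        enough that target_state ends up with the largest logit at that square."""
        w = probe[square, target_state]
        logits = probe[square] @ h
        needed = (logits.max() + margin - logits[target_state]) / (w @ w)
        return h + needed * w

    before = read_board(h)[12].item()
    h_edited = flip_square(h, square=12, target_state=(before + 1) % n_states)
    after = read_board(h_edited)[12].item()
    print(f"decoded state of square 12: {before} -> {after}")
    ```

    In the actual experiments the edited activation is fed back into the rest of the network, and the model's subsequent move predictions shift to be consistent with the edited board - that's the part that makes it hard to argue the internal board state is just a coincidence.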

  • Empathize with bullies.

    Ask if everything is OK at home, and let them know that if they ever need to talk about things, you're there.

    "You seem really angry at things. Are things ok?"

    "I'm sorry life isn't going the best for you right now, but things will get better."

    This is the ultimate mind fuck.

    At first it won't seem like it's working, since they need to save face, but within two or three encounters they'll drop you from their target list - because even though they won't show it, having the truth of what's really going on reflected back at them cuts deep.

    I remember, years after HS, ending up friends with one of my old bullies, who was much more torn up about the whole thing than I ever was, and meeting his absolute psychopath of an older brother and thinking, "well, this makes sense." His dad was dying of cancer around that time, he was being held back a grade, and his older brother was for sure torturing him at home.

    I know that, had I had the awareness back then that I do now, the poor kid would have folded like a house of cards at the slightest indication that I actually saw through his charade.

    The problem was that I was a fairly clueless emotional moron at the time and assumed he really did have a beef with me, rather than realizing he had a massive issue with himself that he was displacing onto me. This was the same period when a girl who was driving me home parked at the spot where the area kids went to do drugs and hook up, and I proceeded to cluelessly chat for 30 minutes before she was like "whelp, I guess I'll drive you home." It was years later when that one clicked, too.

  • So there are two different things in what you're asking.

    (1) They don't know what (i.e. semantically) they are talking about.

    This is probably not the case: there's very good evidence over the past year, in research papers and replicated projects, that transformer models do pick up world models from the training data, such that they represent and integrate things at a more conceptual level.

    For example, even a small toy GPT model trained only on chess moves builds an internal representation of the whole board and tracks "my pieces" and "opponent pieces."

    (2) Why do they say dumb shit that's clearly wrong and not know it?

    They aren't knowledge memorizers. They are very advanced pattern extenders.

    Where the answer to a question is part of a pattern they can successfully extend, they get the answer correct. But where it isn't, they confabulate an answer, much like stroke patients who don't know that they don't know the answer and make it up as they go along. And as with those patients, you can even detect when this is happening with a similar approach: ask ten times and see how consistent the answer is, or whether it changes each time (there's a rough sketch of that check at the end of this comment).

    They aren't memorizing the information like a database. They are building ways to extend inputs into outputs that match as much of the information they're fed as possible. In this, they are beyond exceptional. But they've been kind of shoehorned into the initial tech-demo use case of "knowledgeable chatbot," which is a less than ideal fit. The fact that they were even good at information recall was a surprise to most researchers.
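
    Since the "ask it ten times" check is easy to mock up, here's a rough, hypothetical Python sketch. The `ask_model` function is a stand-in for whatever chat API you'd actually call (sampled at temperature > 0 so repeated answers can differ); it's stubbed with canned responses purely so the example runs.

    ```python
    # Self-consistency check: sample the same question several times and measure
    # how much the answers agree. Stable answers suggest recall; answers that
    # change every time suggest the model is confabulating.
    import random
    from collections import Counter

    def ask_model(question: str) -> str:
        # Placeholder for a real sampled completion call (assumed temperature > 0).
        canned = {
            "What is the capital of Australia?": ["Canberra"] * 9 + ["Sydney"],
            "What did I have for breakfast?": ["Toast", "Oatmeal", "Eggs", "Cereal", "A bagel"],
        }
        return random.choice(canned[question])

    def consistency(question: str, n: int = 10) -> float:
        """Fraction of n samples that agree with the most common (normalized) answer."""
        answers = [ask_model(question).strip().lower() for _ in range(n)]
        top_count = Counter(answers).most_common(1)[0][1]
        return top_count / n

    for q in ["What is the capital of Australia?", "What did I have for breakfast?"]:
        print(f"{q!r}: consistency = {consistency(q):.1f}")
    ```

    In practice you'd also want a looser notion of "the same answer" (comparing extracted facts rather than exact strings), but the signal is the same: the more the answer drifts across samples, the less likely the model actually "knows" it.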

  • That's a fun variation. The one I usually test models with is a vegetarian wolf and a carnivorous goat, but the variation with no other objects is an interesting one too.

    By the way, here's Claude 3 Opus's answer:

    The solution is quite simple:

    1. The man gets into the boat and rows himself and the goat across the river to the other side.
    2. Once they reach the other side, both the man and the goat get out of the boat.

    And that's it! Since there are no additional constraints or complications mentioned in the problem, the man and the goat can directly cross the river together using the boat.

  • It was most of the Greeks. We credit Democritus with atomism even though the Greeks said it came from an earlier Phoenician, Mochus of Sidon. Even Democritus's teacher doesn't get credit.

    Democritus wrote it down in a way that survived.

    That's it.

  • Definitely not.

    If anything, them making this version available for free to everyone indicates that there is a big jump coming sooner rather than later.

    Also, whatever is going on behind the performance boost with Claude 3 and now GPT-4o on the leaderboards, in parallel with the persona work, should not be underestimated.

    Edit: After having a chance to look more into the details - holy shit, we are unprepared for what's around the corner. What this approach even means for things like recent trends in synthetic data is mind-blowing.

    They are making this free because they desperately need the new data formats. This is so cool.

  • He's trying to say that the people coming into the country are dangerous criminals, but he's done the talking points so often by now that neither he nor his audience need the connective tissue between the ideas.

    "Oh, now he's doing the Hannibal Lecter bit? Yeah, screw illegals or whatever."

    They have their own coded language at this point, where, as Trump slips more and more into dementia, they still understand what their adoptive hate-spewing neo-Nazi grandpa dictator is talking about.

    "And then the blargabaghehhhh...."

    "Exactly. Fuck the blargabaghehhhh...."

    Edit: I don't think I'll ever stop laughing when I see that clip, btw.

  • That's not what happened. The system was invisibly modifying the prompts behind the scenes to add requests for diversity.

    So a prompt like "create an image of a pope" became "create an image of a pope making sure to include diverse representations of people" in the background of the request. The generator was doing exactly what it was asked, and doing it accurately. The accuracy issue was in the middleware being too broad in its application (there's a rough sketch of that kind of middleware at the end of this comment).

    I just explained a bit of the background on why this was needed here.
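
    To make the plumbing concrete, here's a tiny hypothetical sketch of that kind of middleware. The function names and the exact rewrite string are assumptions for illustration, not the actual production code - the point is just that the image generator only ever sees the rewritten prompt.

    ```python
    # Hypothetical prompt-rewriting middleware sitting between the user and the image model.

    def rewrite_prompt(user_prompt: str) -> str:
        # Blanket rewrite applied to every request, whether or not it fits the subject.
        return f"{user_prompt}, making sure to include diverse representations of people"

    def generate_image(prompt: str) -> str:
        # Stand-in for the real image model; it just draws whatever it is asked to draw.
        return f"[image matching: {prompt}]"

    def handle_request(user_prompt: str) -> str:
        # The user never sees the rewritten prompt, only the resulting image.
        return generate_image(rewrite_prompt(user_prompt))

    print(handle_request("create an image of a pope"))
    ```

    The accuracy fix lives in the rewrite step (only apply it where it's appropriate), not in the generator, which was doing exactly what it was told.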