Posts: 0 · Comments: 574 · Joined: 2 yr. ago

  • Not everyone has a career where dedicating time to learn this stuff is helpful or worthwhile. There's a ton of "useful" skills that all of us don't bother to learn because our time is better spent elsewhere.

  • Lol of course not, so I'll repeat myself and say it's funny how this never comes up in the "death to America" and "such-and-such is the West's fault" threads you comment in on hexbear. I know you're just being a contrarian teenager, but that's the kind of stuff that makes hexbear posters look dumb.

  • Yeah, Markov chains are truly the worst thing our species has produced!

    Glad you have your priorities straight. It's been fun talking to a chatbot instead of having a discussion like normal people. You can respond to this comment with whatever responses you want, just know that according to the things I've pretended you said, I've won the argument.

  • Are you in high school? You're making up things I never said and putting a sexual element into your responses for no reason.

    Stay in school and learn how to have discussions before arguing about language and technology lol

    Goodnight.

  • What on Earth is this in response to?? Did I say it was a hard riddle?

    I concede. AI has a superintelligent brain and I'm just so jealous.

    Point to any part of my comment that implied any of this.

    I only gave more info on how LLMs work, since what you were describing were Markov chains. I wasn't saying the thrust of your comment was wrong, just the details of how they work. If LLMs were only as effective as Markov chains we wouldn't be having these discussions; their effectiveness is exactly why they can be misused.

    Feel free to discuss the actual words I'm using instead of this LLM word salad.

  • I think you’re seeing coherence where there is none.

    Ask it to solve the riddle about the fox the chicken and the grains.

    I think its getting tripped up on riddles that people often fail, or not getting factual things correct, isn't as important for "believability," which is probably a word closer to what I meant than "coherence."

    No one was worried about misinformation coming from r/SubredditSimulator, for example, because Markov chains have much, much less believability. "Just guessing words" is a bit of an over-simplification for neural nets, which are a powerful technology even if the utility of turning them towards language is debatable.

    And if LLMs weren't so believable we wouldn't be having so many discussions about the misinformation or misuse they could cause. I don't think we're disagreeing; I'm just trying to add more detail to your "each word is generated independently" quote, which is patently wrong and detracts from your overall point.

  • I don't disagree, I was just pointing out that "each word is generated independently of each other" isn't strictly accurate for LLMs.

    It's part of the reason they are so convincing to some people: they are able to hold threads semi-coherently throughout entire essay-length paragraphs without obvious internal lapses of logic.
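    To illustrate the point about words not being independent: LLM decoding is autoregressive, meaning each new token is sampled from a distribution conditioned on everything generated so far. The sketch below uses a hypothetical toy function in place of a real neural network (the names `next_token_distribution` and `generate` are made up for illustration); the key is the function signature, which takes the entire context, not just the last word.

    ```python
    import random

    # Toy stand-in for an LLM's next-token distribution. A real model is a
    # neural network over a huge vocabulary; here we only fake context
    # sensitivity to show that the WHOLE context is consulted each step.
    def next_token_distribution(context):
        if "fox" in context:
            return {"chicken": 0.7, "grain": 0.3}
        return {"fox": 0.9, "the": 0.1}

    def generate(prompt, n_tokens, seed=0):
        random.seed(seed)
        tokens = prompt.split()
        for _ in range(n_tokens):
            # Full context so far is passed in every iteration: tokens are
            # anything but independent of each other.
            dist = next_token_distribution(" ".join(tokens))
            words, weights = zip(*dist.items())
            tokens.append(random.choices(words, weights=weights)[0])
        return " ".join(tokens)

    print(generate("the", 3))
    ```

    This is why a generation can stay on-topic across a long passage: every sampled token can depend on all the tokens before it.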

  • Generative AI and LLMs start by predicting the next word in a sequence. The words are generated independently of each other

    Is this true? I know that's how Markov chains work, but I thought neural nets worked differently with larger tokens.
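    For contrast, here is a minimal sketch of the order-1 Markov chain behavior being discussed, where the next word really does depend only on the current word and all earlier context is forgotten. The tiny corpus and function names are made up purely for illustration.

    ```python
    import random
    from collections import defaultdict

    # Hypothetical toy corpus, just for illustration.
    corpus = "the fox ate the chicken and the chicken ate the grain".split()

    # Order-1 Markov chain: map each word to the words that followed it.
    transitions = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        transitions[current].append(nxt)

    def generate(start, length, seed=0):
        random.seed(seed)
        word, out = start, [start]
        for _ in range(length - 1):
            choices = transitions.get(word)
            if not choices:
                break
            # Only `word` matters here; everything before it is forgotten.
            word = random.choice(choices)
            out.append(word)
        return " ".join(out)

    print(generate("the", 8))
    ```

    Because the chain conditions on a single preceding word, its output drifts incoherently in a way r/SubredditSimulator made famous, which is the "believability" gap the thread above is describing.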