
Posts: 1 · Comments: 2,790 · Joined: 2 yr. ago

  • Phantasy Star 2 has a named NPC die, and the game gives a weird excuse for why you can't revive her at the clone labs, even though that worked just fine up to that point.

    Also, when everyone else was like "omg no character has died in an RPG before", young me was sitting there like "is Phantasy Star a joke to you?"

  • There was a thread about cheating the other day and someone posted that they think cheating is... how did they put it... binary? Like there are social groups where everyone cheats and it's normal, and then there are non-overlapping groups where no one cheats.

    Ah, I found it: https://lemm.ee/comment/20529741

    I don't think I know anyone who cheats in relationships.

  • This reminds me of the new vector for malware that targets "vibe coders". LLMs tend to hallucinate libraries that don't exist. Like, it'll tell you to add, install, and use `jjj_image_proc` or whatever. The vibe coder will then get an error like "that library doesn't exist" or "can't call `jjj_image_proc.process()`".

    But you, a malicious user, could go and create a library named `jjj_image_proc` and give it a function named `process`. Vibe coders will then pull down and run your arbitrary code, and that's kind of game over for them.

    You'd just need to find some commonly hallucinated library names.
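    A minimal defensive sketch of the idea (all package names and the allowlist here are hypothetical): before installing anything an LLM suggests, check the name against a human-reviewed allowlist instead of piping it straight into `pip install`.

```python
# Guard against "slopsquatting": only allow installation of dependencies a
# human has actually reviewed. Every package name below is illustrative.
REVIEWED_PACKAGES = {"requests", "numpy", "pillow"}

def safe_to_install(package: str) -> bool:
    """Return True only if the package name is on the reviewed allowlist."""
    return package.strip().lower() in REVIEWED_PACKAGES

print(safe_to_install("requests"))        # a real, reviewed package
print(safe_to_install("jjj_image_proc"))  # a hallucinated name gets rejected
```

    It's crude, but it turns "trust whatever the LLM printed" into "trust only what someone has vetted", which is the whole defense against squatted hallucinations.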

  • As the title says, moderation is key. If the game is just "whatever is the most convincing right now" then I'm going to be annoyed that I sat down to play D&D/fate/gurps/whatever, and we're mostly playing improv. It's important to set expectations in or before session 0.

    If I were looking to join a game, and the GM was like "We're all about the rule of cool", I'd probably ask for some examples. If it's like "we let the [D&D 5e] wizard cast as many spells as he wants" then I'm not joining, because that's going to fuck up the game balance. On the other hand, if it's like "we don't really care about carry weight unless it's extreme", that's fine.

    Stuff in the middle, like "one time we let them use Create Water in the bad guy's lungs to drown him!", can go either way, but I'm usually not a fan. Mostly, if I ask myself "if this works, why doesn't the whole setting revolve around it?" and don't have a good answer, I won't enjoy it. Like, if everyone could do lethal damage with a cantrip, or if the "peasant railgun" worked like the joke, or "we let the real-life chemical engineer make napalm and mustard gas as a 1st-level rogue for massive damage", then that probably isn't for me.

  • If there is a civil war, I'm sure the enemies of the US would rejoice. It's like that Onion article that's like "al-Qaeda decides to sit back and watch US destroy itself".

    But aside from that, I hope the conservatives lose. And I hope after they lose, we learn from history. Don't just let them come crawling back into power like after the first civil war. The ultra rich and their lackeys need to be removed from power, and kept out.

    Then again, the 14th Amendment should disqualify Trump and a bunch of the Republicans, and that doesn't seem to matter.

  • The conservative mindset seems to be "What's good for me right now?" The law is good when it hurts their enemies, and it's unfair when it hurts them. A policy is good when it benefits them, and bad when it benefits someone they don't like. They are essentially toddlers. We should treat their ideas as seriously as we'd treat a two-year-old's ideas. Yes, dear, that's a really interesting idea to replace all the toilets in the building with monster trucks, but we're not going to do that.

  • Many people have found that using LLMs for coding is a net negative. You end up with sloppy, vulnerable, code that you don't understand. I'm not sure if there have been any rigorous studies about it yet, but it seems very plausible. LLMs are prone to hallucinating, so you're going to get it telling you to import libraries that don't exist, or use parts of the standard library that don't exist.

    It also opens up a whole new security threat vector of squatting. If LLMs routinely try to install a library from pypi that doesn't exist, you can create that library and have it do whatever you want. Vibe coders will then run it, and that's game over for them.

    So yeah, you could "rigorously check" it, but (a) all of us are lazy and aren't going to do that routinely (like, have you used snapshot tests?), (b) it's going to anchor you around whatever it produced, making it harder to think about other approaches, and (c) it's often slower overall than just doing a good job from the start.

    I imagine there are similar problems with analyzing large amounts of text. It doesn't really understand anything. To verify it's correct, you would have to read the whole thing yourself anyway.

    There are probably specialized use cases that are good- I'm told AI is useful for like protein folding and cancer detection- but that still has experts (I hope) looking at the results.

    To your point, I think people are trying to use these LLMs for things with definite answers, too. Like, if I go to Google and type in "largest state in the US", it uses AI. This is not a good use case.

  • That's really not the same thing at all.

    For one, no one knows what the weather will be like tomorrow. We have sophisticated models that do their best. We know the capital of New Jersey. We don't need a guessing machine to tell us that.

  • You shouldn't trust anything the LLM tells you though, because it's a guessing machine. It is not credible. Maybe if you're just using it for translation into your native language? I'm not sure if it's good at that.

    If you have access to the internet, there are many resources available that are more credible. Many of them free.

  • Other people have some really good responses in here.

    I'm going to echo that AI is highlighting the problems of capitalism. The ownership class wants to fire a bunch of people and replace them with AI, and keep all that profit for themselves. Not good.

  • I really don't think creating for-real artificial intelligence is a good idea. I mean, that's peak "don't invent the Torment Nexus".

    Are you going to give it equal rights? How is voting going to work when the AI can create an arbitrary number of itself and vote as a bloc?

    Creating an intelligent being to be your slave is fucked up, too.

    Just... We don't need that right now. We have other, more pressing problems with fewer ethical land mines.