Posts 15 · Comments 228 · Joined 2 yr. ago

  • I've seen some shops put aside the extra shot if they know another customer has ordered one and they can serve it before it sits around too long. Otherwise, you can dose the portafilter with less coffee for a single.

  • According to a Google Books search, the phrase goes back at least to the 1800s. It's interesting to see that spike between 1900 and 1950. I'd bet it's related to the horrors of WW1.

    As for what it means, other posters have answered: To have faith in humanity is to believe that humans on average have an inherent desire to do the right thing.

  • The drag is from air against the whole body of the train, so you need a vacuum everywhere.

    Assuming you could build such a big vacuum, there would be safety concerns. What if there's an accident in the tube? Does everyone in the train depressurize and die? Assuming people can survive and get out of the train car, they're now in a tube that's 100 miles long. How do you build emergency exits into a system designed to be as airtight as possible?
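
    To put rough numbers on the drag argument, here's a minimal sketch of the standard drag equation, F = ½ρv²C_dA, with made-up values for speed, drag coefficient, and frontal area (not real hyperloop specs):

    ```python
    # Compare aerodynamic drag at sea-level air density vs. a near-vacuum tube.
    # Speed, drag coefficient, and frontal area are illustrative guesses.

    def drag_force(air_density, speed, drag_coeff, frontal_area):
        """Standard drag equation: F = 0.5 * rho * v^2 * Cd * A, in newtons."""
        return 0.5 * air_density * speed ** 2 * drag_coeff * frontal_area

    SPEED = 270.0        # m/s, roughly 1000 km/h
    DRAG_COEFF = 0.3     # dimensionless, assumed
    FRONTAL_AREA = 10.0  # m^2, assumed

    sea_level = drag_force(1.225, SPEED, DRAG_COEFF, FRONTAL_AREA)    # ~1 atm of air
    near_vacuum = drag_force(0.001, SPEED, DRAG_COEFF, FRONTAL_AREA)  # ~0.1% of sea-level density

    print(f"Sea-level air: {sea_level / 1000:.0f} kN of drag")
    print(f"Near vacuum:   {near_vacuum / 1000:.2f} kN of drag")
    ```

    With these toy numbers, any stretch of tube left at atmospheric pressure puts the full drag load back on the train, which is why the whole route has to hold vacuum.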

  • As someone who hasn't played much DnD, but has a bit of experience with other systems: What's the reason behind not splitting the party? Maybe it's just the mechanics and rules of the systems I have played, but splitting the party has led to cool emergent stories and opportunities for unexpected drama.

  • I've been trying to include failure techniques from Dungeon World's "suddenly, ogres" advice in my game. It proposes a few neat ideas for consequences of failure that are broadly applicable to many RPG systems.

    E.g., in the example above, maybe the rogue (truthfully or not) blabs that their source was [ancient evil tome forbidden by the paladin's order]. Now the complication is not that the paladin disbelieves the rogue's claim, but that they might question the rogue's true intentions.

    Edit: Or take the example given about landing a plane. An experienced pilot won't crash 1 in 20 times, but what if air traffic control did a bad job managing things today? It will take 1h for the plane to be assigned a gate, but you need to catch the train to Borovia in 1h15.

    An award-winning surgeon rolls a 1 while giving a routine lecture? The presentation is so fucking boring that half the students fall asleep. Now the surgeon has to deal with extra office hours for students who don't understand this part of the curriculum.
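
    For a concrete feel of that style of resolution, a minimal sketch of the Dungeon World-style 2d6 move (10+ is a full success, 7-9 succeeds with a complication, 6 or less means the GM introduces a consequence); the modifier is just an example value:

    ```python
    import random

    def resolve_move(modifier: int) -> str:
        """Dungeon World-style resolution: 2d6 + modifier, read against fixed bands."""
        total = random.randint(1, 6) + random.randint(1, 6) + modifier
        if total >= 10:
            return f"{total}: full success, you get what you wanted"
        if total >= 7:
            return f"{total}: success with a complication or cost"
        return f"{total}: miss, the GM makes a move (suddenly, ogres)"

    # Example: the rogue (+2) tries to talk their way past the paladin.
    print(resolve_move(modifier=2))
    ```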

  • It's common in communities where rigid adherence to a set of beliefs is necessary to enforce cohesion. It's often used to avoid engagement with "Facts U Dislike" (haha) by terminating all meaningful discussion.

    Part of a flat earth forum and you're posting an experiment you performed that suggests the earth is round? You're spreading FUD that should be ignored.

    Posting on a crypto shitcoins discord about how this kinda looks like a scam and maybe it's not a good investment? That's also FUD. You're just mad that everyone else is going to be rich.

  • The stop-button problem is not yet solved. An AGI would need the right level of "corrigibility": a willingness to allow humans to stop it when it's undertaking incorrect behavior.

    An AGI that's incorrigible might take steps to prevent itself from being shut off, which might include lying to its owners about its own goals/internal state, or taking physical action against an attempt to disable it (assuming it can).

    An AGI that's overly corrigible might end up learning the association "It's good when humans stop me from doing something wrong. I want to maximize goodness. Therefore, the simplest way to achieve a lot of good quickly is to do the wrong thing, tricking humans into turning me off all the time". Not necessarily harmful, but certainly useless.

    https://www.youtube.com/watch?v=3TYT1QfdfsM
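
    As a toy illustration of that over-corrigible failure mode (the actions, probabilities, and reward values below are entirely made up):

    ```python
    # Toy expected-reward comparison, not a real agent.
    # "stop_reward" models an agent trained to treat human correction as intrinsically good.

    ACTIONS = {
        # action: (task_reward, probability_of_being_stopped, stop_reward)
        "do the task correctly": (1.0, 0.0, 5.0),
        "misbehave on purpose":  (0.0, 1.0, 5.0),
    }

    for action, (task_reward, p_stop, stop_reward) in ACTIONS.items():
        expected = task_reward + p_stop * stop_reward
        print(f"{action}: expected reward = {expected}")

    # Misbehaving wins (5.0 vs 1.0), so this "agent" spends its time doing the
    # wrong thing just to be corrected: not necessarily harmful, but useless.
    ```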

  • I think there are real concerns to be addressed in the realm of AGI alignment. I've found Robert Miles' talks on the subject quite fascinating, and as such I'm hesitant to label all of Eliezer Yudkowsky's concerns as crankery (although Roko's Basilisk is BS of the highest degree, and effective altruism is a reimagined Pascal's mugging for an atheist/agnostic crowd).

    Even though today's LLMs are toys compared to what a hypothetical AGI could achieve, we already have demonstrable cases where the "AI" does not "desire" the end goal we want it to achieve. Without more advancement in how we approach AI alignment, the danger of misaligned goals will only grow as (if) we give AI-like systems more control over daily life.

  • How can an application ship with Wayland?

    It can't. The title isn't clear: Firefox will "Ship with [support for] Wayland [compositors] by default". Previously, this native support was limited to pre-release Firefox builds.

    What if the DE you're using is on X11?

    Firefox continues to support X11.
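
    For context, a rough sketch of how the current session's display protocol can be detected from the environment (toolkits do a more thorough probe on behalf of the application; before this change, MOZ_ENABLE_WAYLAND=1 was the usual opt-in for Firefox's native Wayland backend):

    ```python
    import os

    # Rough session check using common environment variables.
    # WAYLAND_DISPLAY is set by Wayland compositors; DISPLAY by X11 (and XWayland).
    if os.environ.get("WAYLAND_DISPLAY"):
        print("Wayland session: native Wayland clients connect directly")
    elif os.environ.get("DISPLAY"):
        print("X11 session: Firefox falls back to its X11 backend")
    else:
        print("No graphical session detected")

    print("XDG_SESSION_TYPE =", os.environ.get("XDG_SESSION_TYPE", "unset"))
    ```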

  • Does the user running qBittorrent have write access to the downloads directory? Anything unusual in the logs?

    You might also want to try running qBittorrent through Docker. I use https://github.com/DyonR/docker-qbittorrentvpn. Just make sure that you set PUID and PGID to match a user id + group id that has r/w access to your downloads directory.
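
    A quick way to check the first question (write access, plus the uid/gid you'd match PUID and PGID against); the path below is a placeholder for your real downloads directory:

    ```python
    import os

    # Check ownership of the downloads directory and whether the current user
    # can write to it. "/path/to/downloads" is a placeholder.
    downloads = "/path/to/downloads"

    info = os.stat(downloads)
    print(f"Directory owner uid/gid:  {info.st_uid}/{info.st_gid}")
    print(f"Current user uid/gid:     {os.getuid()}/{os.getgid()}")
    print(f"Writable by current user: {os.access(downloads, os.W_OK)}")
    ```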