
  • Like, each user is individually kicked off the PDS in reaction to some bad thing they did? Or is labeling reactive, in that it labels bad stuff that's already been posted, and each user has to pick for themselves which labelers to listen to?

    I'm not sure if Bluesky's front-end defaults to using some particular labelers. I know there's some moderation going on for you as soon as you log in, done by someone.

    But yes, each user has to choose whose moderation decisions they want to use, and they can't rely on everyone they can see also seeing exactly the same space they themselves are seeing. But I'm not sure it's possible or even desirable to get rid of the requirement/ability to choose your mods. I should be able to be in a community that has mods I trust, and a community chatting among itself, deciding that so-and-so is a great mod we should all listen to, and then all listening to them, sounds like a good idea to me.

    Being able to see and talk to people who aren't in the same space I'm in might not be as good?

  • Smartphones are great. Apps are user-hostile malware. Online spaces are, in the majority, traps. If every time you drove downtown you ended up in a corporate police state designed to play you and your friends off each other and make you all miserable so you look at more advertisements for shampoo, you would conclude that getting in the car is bad for you.

  • No?

    An anthropomorphic model of the software, wherein you can articulate things like "the software is making up packages", or "the software mistakenly thinks these packages ought to exist", is the right level of abstraction for usefully reasoning about software like this. Using that model, you can make predictions about what will happen when you run the software, and you can take actions that will lead to the outcomes you want occurring more often when you run the software.

    If you try to explain what is going on without these concepts, you're left saying something like "the wrong token is being sampled because the probability of the right one is too low because of several thousand neural network weights being slightly off of where they would have to be to make the right one come out consistently". Which is true, but not useful.

    The anthropomorphic approach suggests stuff like "yell at the software in all caps to only use Python packages that really exist", and that sort of approach has been found to be effective in practice; a rough sketch of what that looks like is below.
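
    For example, here's a minimal sketch of that kind of prompt, assuming an OpenAI-style chat completions client in Python. The model name, prompt wording, and task are made-up placeholders, not anything in particular:

        # Minimal sketch: put the "only real packages" constraint in the
        # system prompt and reason about the model anthropomorphically.
        # Model name, wording, and task are placeholders, not recommendations.
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {
                    "role": "system",
                    "content": (
                        "ONLY USE PYTHON PACKAGES THAT ACTUALLY EXIST ON PyPI. "
                        "If you are not sure a package exists, say so instead of guessing."
                    ),
                },
                {
                    "role": "user",
                    "content": "Write a script that converts a CSV file to JSON.",
                },
            ],
        )

        print(response.choices[0].message.content)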

  • Echo chambers aren't that bad. I don't surround myself with people and things I like because the ones I don't like are going to hurt me, I do it because I don't like them and my life is too short to waste with their nonsense.

  • You can kick bigots off a Bluesky PDS.

    But letting everyone label accounts and posts and run feeds of moderation advice is a lot quicker at booting someone from the virtual space than waiting around for someone to come and decide that yes, so-and-so really has broken BigPDSHost policy and shall be deleted. It's also a great way to find who you want to boot.

  • I thought "where the hell does Twitch keep coming up with these absurd sex-related things to ban?" and it turns out it's just this one lady and inventing them is her shtick and she's single-handedly keeping like five journalists employed.

  • Zuckerberg Did Nothing Wrong

    I'm concerned that the narrative that what Facebook was trying to achieve here was wrong or bad is itself user-hostile, and pushes in favor of the non-fiduciary model of software.

    Facebook paid people to let them have access to those people's communications with Snap, Inc., via Snapchat's app. This is so that Facebook could do their analytics magic and try and work out how often Snapchat users tend to do X, Y, or Z. Did they pay enough? Who knows. Would you take the deal? Maybe not. Was this a totally free choice without any influence from the creeping specter of capitalist immiseration? Of course not. But it's not some unusually nefarious plot when a person decides to let a company watch them do stuff! Privacy isn't about never being allowed to reveal what you are up to. Some people like to fill out those little surveys they get in the mail.

    Now, framing this as Facebook snooping on Snapchat's data concedes that a person's communications from their Snapchat app to Snapchat HQ are Snapchat's data. Not that person's data, to do with as they please. If the user interferes with the normal operation of one app at the suggestion of someone who runs a different app, this framing would see that as two apps having a fight, with user agency nowhere to be found. I think it is important to see this as a user making a choice about what their system is going to do. Snapchat on your phone is entirely your domain; none of it belongs to Snap, Inc. If you want to convince it to send all your Snapchat messages to the TV in Zuckerberg's seventh bathroom in exchange for his toenail clippings, that's your $DEITY-given right.

    User agency is under threat already, and we should not write it away just to try and make Facebook look bad.

  • I don't think that's true. For one thing, it's easy to buy a car from a random person without granting any car company permission to download stuff from your car and sell it. If a car company were to access your car without permission, you could sue for damages (see OP).

  • The Uplift series isn't really that third picture but I can kind of see why this person did the cover art.

    The last one actually reminds me of Saturn's Children where the cover is like a worse CGI version of that, but the book actually thinks it through in a way that makes this person a compelling point of view character.

  • I read them all. I think I liked the first book fine; it's more of a self-contained mystery, which might be better. The aliens are probably most prominent in the second trilogy; there's loads of them and I quite like the Commons of Jijo.

    I feel like the series is sort of missing pieces? Like, across the five books it appears in, WTF was going on with Streaker's discovery is never really explained, the whole the-galactics-aren't-being-honest-with-us thread is never satisfyingly resolved, and at several points in the chronology it feels like there could have been a whole book about the stuff that happened since the last book.

    The whole series is An Aesop on how science is good. Which is fine, doing science is good and you can spend a series reminding people of that if you would like. But it's strange to find that as the point of a series that otherwise seems to have all these frankly conservative ideas about colonizing space planets and about some people being just inherently more or less "uplifted" than others. Uplift seems to stand in for a person's moral value without what I would consider sufficient critique. Like, paternalism is bad when the galactics do it, but when humans have full power over a dolphin person's entire life, that's fine somehow, because you need it to do Uplift, the thing the books are about. The whole Uplift concept has unavoidable parallels to European notions of "civilizing" people by using military force to make them act more like Europeans, which I don't think are fully examined.

    I also remember them as having weird 1980s gender ideas in them, like the men are normal, the women are viewed through some weird filter, and humans of any other gender are entirely absent.

    I think there are more interesting books to read about the structure of minds and the diversity of subjective experience. For example, Diaspora came out only a year after Heaven's Reach and also has all sorts of weird aliens, but it additionally has defensible gender politics and a much more cogent thesis on autonomy and on what the powers of science may or must be used to do. Or, A Half-Built Garden is all about what happens when galactic society arrives to save the humans, and the humans maybe finally don't need saving.

  • Plastics at the microscopic level, if they aren't doing anything chemically interesting, really ought to function about like "rock, but light". Most organisms don't run into trouble because there are tiny bits of rock in the world, so I would expect tiny bits of plastic not to be a huge problem. Which is sort of backed up by how we have noticed microplastics everywhere and haven't seen huge problems resulting from them (most people are still alive, most children still develop to adulthood, etc.).

    But it's entirely possible that some of these plastics are not chemically inert, and that they emit chemicals that do exciting and unwanted things in people's bodies. If we can't keep our plastics from becoming microplastics, we probably need to discontinue the manufacture of anything that isn't implant-grade plastic, since all of it will end up in someone's body at some point.

    And it's also possible that the microplastics physically do do something interestingly bad. I think there was a recent study to this effect on heart disease. But at this point, that's the question we need to be asking. How many or what kind of microplastics does it take to give a ferret epilepsy? Not "are there microplastics in all brands of peanut butter?"