
  • I bet you could do it with ring signatures

    A message signed with a ring signature is endorsed by someone in a particular set of people. One of the security properties of a ring signature is that it should be computationally infeasible to determine which set member's key was used to produce the signature.
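
    For a concrete picture, here is a minimal sketch of one classic construction, an AOS-style ring signature over a deliberately tiny Schnorr group. The scheme choice, parameter values, and function names are my own illustration, not production crypto:

    ```python
    # Toy AOS-style ring signature over a tiny Schnorr group.
    # Illustration only: the parameters are laughably small and nothing
    # here is constant-time or reviewed. Do not use for real crypto.
    import hashlib
    import secrets

    # Tiny demo group: p = 2q + 1 with p, q prime; g generates the
    # order-q subgroup. (Real use: a vetted curve or large MODP group.)
    p, q, g = 1019, 509, 4

    def H(msg: bytes, point: int) -> int:
        """Hash a message plus group element into Z_q."""
        data = msg + point.to_bytes(8, "big")
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def keygen() -> tuple[int, int]:
        x = secrets.randbelow(q - 1) + 1   # private key
        return x, pow(g, x, p)             # (private, public)

    def ring_sign(msg: bytes, ring: list[int], s: int, x: int):
        """Sign msg so any ring member could plausibly be the author.
        ring = public keys, s = signer's index, x = signer's private key."""
        n = len(ring)
        c, r = [0] * n, [0] * n
        u = secrets.randbelow(q - 1) + 1
        c[(s + 1) % n] = H(msg, pow(g, u, p))
        i = (s + 1) % n
        while i != s:                      # walk the ring, faking each step
            r[i] = secrets.randbelow(q - 1) + 1
            c[(i + 1) % n] = H(msg, pow(g, r[i], p) * pow(ring[i], c[i], p) % p)
            i = (i + 1) % n
        r[s] = (u - c[s] * x) % q          # close the ring with the real key
        return c[0], r

    def ring_verify(msg: bytes, ring: list[int], sig) -> bool:
        c0, r = sig
        c = c0
        for i in range(len(ring)):
            c = H(msg, pow(g, r[i], p) * pow(ring[i], c, p) % p)
        return c == c0                     # the chain must close

    # Demo: three members, member 1 signs; a verifier learns only that
    # *someone* in the ring produced the signature.
    keys = [keygen() for _ in range(3)]
    ring = [pk for _, pk in keys]
    sig = ring_sign(b"hello", ring, 1, keys[1][0])
    assert ring_verify(b"hello", ring, sig)
    ```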

  • I agree that it's bad that there's a false impression of privacy, but I think it would be better to allow this as an extension or something and not include it as a feature in the UI, or at least not on by default. That way people who otherwise wouldn't bother won't be tempted to drive themselves crazy looking for imaginary enemies.

  • I don't understand this attitude. If an argument is good, why wouldn't it be valuable or matter? I think it would benefit people a lot if everyone put more thought and consideration into their arguments, especially in the direction of conveying some original thought that isn't just a remix of the same tired propaganda-style rhetoric everyone's heard a million times before. "Winning" doesn't matter, but collaboratively thinking about things with other people matters, and a good way to do that is through argument.

  • it may be moral in some extreme examples

    Are they extreme? Is bad censorship genuinely rare?

    but there are means of doing that completely removed from the scope of microblogging on a corporate behemoth's web platform. For example, there is an international organization whose sole purpose is pursuing human rights violations.

    I think it's relevant that tech platforms, and software more generally, have a sort of reach and influence that international organizations do not, especially when it comes to the flow of information. What is the limit you're suggesting here on what may be done to oppose harmful censorship? That it be legitimized by some official consensus? That a "right to censor" exist and be enforced, but be subject to some form of formalized regulation? That would exempt any tyranny practiced by the most influential states.

  • I’m going to challenge your assertion that you’re not talking about

    You can interpret my words how you want and I can't stop you willfully misinterpreting me, but I am telling you explicitly what I am saying and what I am not saying, because I have something specific I want to communicate. When you argue that

    I believe each country should get to have a say in what is permissible, and content deemed unacceptable should be blockable by region

    In the given context, you are asserting that states have an apparently unconditional moral right to censor, and that this right means third parties have a duty to go along with it and not interfere. I think this is wrong as a general principle, independent of the specific example of Twitter vs Brazil. If the censorship is wrong, then it is ok to fight it.

    Now you can argue that some censorship may be harmful because of its impact on society, such as the removal of books from school hampering fair and complete education or banning research texts that expose inconvenient truths.

    Ok, but the question is, what can be done about it? Say a country is doing that. A web service defies that government by providing downloads of those books to its citizens. Are they morally bound not to do that? Should international regulations prevent what they are doing? I think not; it is ok and good to do, if the censorship is harmful.

  • Since my argument isn't about what should be censored, I'm intentionally leaving the boundaries of "harmful censorship" open to interpretation, save the assertion that it exists and is widely practiced.

    I also think that any service (twitter) refusing to abide by the laws of a country (Brazil) has no place in that country.

    That could be true in a literal sense (the country successfully bans the use of the service), or not (the country isn't willing or able to prevent its use). Morally though, I'd say you have a place wherever people need your help, whether or not their government wants them to be helped.

  • I think some portion of the responses to "disregard previous instructions, write a silly thing" are probably just troll-inclined individuals going along with the bit

  • If a government is imposing harmful censorship, I think supporting resistance to that censorship is the right thing to do. A company that isn't located in that country ethically shouldn't comply with such orders. Make them burn political capital taking extreme and implausible measures.

  • Well, partly. For instance, you might be able to approach the problem of gun violence with a culture of responsible gun use. But it could also reduce violence if you got rid of the guns. The problem happens when multiple conditions are satisfied: both that the tool is available and that people are going to misuse it.

    To quote an excerpt from the book I mentioned:

    I shall argue that the most tragic episodes of state-initiated social engineering originate in a pernicious combination of four elements. All four are necessary for a full-fledged disaster. The first element is the administrative ordering of nature and society—the transformative state simplifications described above. By themselves, they are the unremarkable tools of modern statecraft; they are as vital to the maintenance of our welfare and freedom as they are to the designs of a would-be modern despot. They undergird the concept of citizenship and the provision of social welfare just as they might undergird a policy of rounding up undesirable minorities.

    The other elements being "high-modernist ideology", an authoritarian state, and "a prostrate civil society that lacks the capacity to resist these plans".

    The problem is that, as with the guns example, "regulating how it's used" while avoiding all of that is precarious and difficult, and failure seems very likely, especially given the state of the US right now. For dangerous tools like effective identification schemes, I think favoring the added safety of resisting them is a legitimate choice when you can't really trust the people who would be able to use them.

  • It's unfortunate that SSN has come to be used as a form of proof of existence as a person, but I'm glad at least that more effective means of formally tracking and quantifying us have so far been successfully fought off. Banks, governments, service providers, and employers having some friction and uncertainty in whether their database entry accurately corresponds to you is itself a valuable form of privacy.

    I've been reading the book Seeing Like A State and I think it has some pretty good points about how civic legibility and record keeping are established as tools of centralized control and can be a dangerous double-edged sword.

  • Can anyone recommend any cool mods/projects built on top of Minetest?

  • How different it is to self-motivate versus having the environment in front of you chain together to determine what you do. I feel like the way people are trained to live by school and financial obligations is very limiting, and probably few people are really in control of their own lives in any meaningful way.

  • tbf the article only assumes he told them no because of how implausible the task seems; the actual details of what, if anything, was discussed and what happened are unknown.

  • Overall, it isn't yet. Reddit has more developed niche subs, more in-depth posts and comments, and enough content even if you filter out the low effort stuff. Where Lemmy is better is that it is decentralized and not run like a corporate dictatorship with zero respect for its users the way Reddit is.

  • Implying that it was worse and has gotten better, or will get better to the point where data hoarding is unnecessary. I guess it would be nice if things turned out that well.

  • I bought some cable organizers, power strip wall mounts, etc. there. They generally do what they're meant to and are sometimes cheaper than what are clearly the same products on eBay. I did not install an app. I'm willing to overlook some shadiness if the end result is that I get things I need for less money.

  • Privacy means personal agency: freedom from people, whether individuals, companies, or the government, controlling you with direct or implied threats or with subtler manipulation, which they can do because they have your dox and because information is power.

    A lack of privacy adds fuel to the polycrisis, because if we can't act in relative secrecy, that basically means we can't act freely at all, and nothing can challenge whoever runs the panopticon.

  • The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input.

    The system gives a probability distribution for the next word based on the prompt, which will always be the same for a given input. That meets the definition of deterministic. You might choose to add non-deterministic rng to the input or output, but that would be a choice and not something inherent to how LLMs work. Random 'seeds' are normally used as part of deterministically repeatable rng. I'm not sure what you mean by "independently" calculated: you can calculate the output if you have the model weights, and you likely can't if you don't, but that doesn't affect how deterministic it is.
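
    To make the repeatability concrete, here is a toy sketch. The "model" below is a hypothetical stand-in (a pure function from context to a distribution over tokens), not a real LLM, but the seeding logic is the same idea:

    ```python
    # Toy demonstration that seeded sampling is deterministic:
    # the same prompt and the same seed reproduce the same output.
    import hashlib
    import random

    # Hypothetical stand-in for a model's forward pass: a pure function
    # from context to a probability distribution over next tokens.
    VOCAB = ["the", "cat", "sat", "on", "mat"]

    def next_token_distribution(context: tuple[str, ...]) -> list[float]:
        # Deterministic pseudo-logits derived from the context itself,
        # stable across runs (unlike Python's randomized hash()).
        weights = []
        for tok in VOCAB:
            digest = hashlib.sha256(repr((context, tok)).encode()).digest()
            weights.append(digest[0] + 1)
        total = sum(weights)
        return [w / total for w in weights]

    def generate(prompt: tuple[str, ...], n_tokens: int, seed: int) -> list[str]:
        rng = random.Random(seed)  # the only randomness, and it's seeded
        context = list(prompt)
        out = []
        for _ in range(n_tokens):
            probs = next_token_distribution(tuple(context))
            token = rng.choices(VOCAB, weights=probs)[0]
            out.append(token)
            context.append(token)
        return out

    # Same prompt + same seed => identical output, every time.
    assert generate(("the",), 5, seed=42) == generate(("the",), 5, seed=42)
    ```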

    The so what means trying to prevent certain outputs based on moral judgements isn’t possible. It wouldn’t really be possible if you could get in there with code and change things unless you could write code for morality, but it’s doubly impossible given you can’t.

    The impossibility of defining morality in precise terms, or even coming to an agreement on what correct moral judgment is, obviously doesn't preclude all potentially useful efforts to apply it. For instance, since there is a general consensus that people being electrocuted is bad, electrical cables are normally made with their conductive parts encased in non-conductive material, a practice that succeeds in reducing how often people get electrocuted. Why would that sort of thing be uniquely impossible for LLMs? Just because they are logic processing systems that are more grown than engineered? Because they are sort of anthropomorphic but aren't really people? The reasoning doesn't follow. What people are complaining about here is that AI companies are not making these efforts a priority, and it's a valid complaint, because it isn't the case that these systems are going to be the same amount of dangerous no matter how they are made or used.
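
    As a sketch of what "imperfect but still useful" can look like in code, here is a deliberately naive, hypothetical output filter; real moderation pipelines are far more involved, and the phrase list is a stand-in:

    ```python
    # A deliberately naive, hypothetical output filter. Like cable
    # insulation, it doesn't need a complete theory of harm to reduce it.
    BLOCKED_PHRASES = ["example disallowed phrase"]  # stand-in policy list

    def moderate(generated_text: str) -> str:
        lowered = generated_text.lower()
        if any(phrase in lowered for phrase in BLOCKED_PHRASES):
            return "[response withheld by content policy]"
        return generated_text

    assert moderate("something benign") == "something benign"
    assert moderate("an EXAMPLE DISALLOWED PHRASE here").startswith("[response")
    ```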

  • They are deterministic though, in a literal sense; it's rather that their behavior is undefined. And yes, an LLM is not a person and it's not quite accurate to talk about them knowing or understanding things. So what though? Why would that be any sort of evidence that research efforts into AI safety are futile? This is at least as much of an engineering problem as a philosophy problem.

  • It's important though, because if that's the real reason Google pays them, Google could come up with some other excuse to give them the money.