Is it possible to use zero knowledge proofs to verify journalism sources?
Or use it as a Progressive Web App. This could be a good thing for awareness of PWAs, and for them coming to be seen as such a killer feature that people won't buy a smartphone unless it supports them well. A transition away from app stores to PWAs can only be good.
If they are doing TikTok, maybe they should do Facebook, Reddit and YouTube next.
While Milei doesn't have a lot going for him, in this case it could also be that the companies supplying the fuel have some US component / have more to lose from losing access to American markets than they gain from supplying that airline - in which case it is the US government that is to blame.
The US blockade of Cuba is, of course, very hypocritical. There have been human rights abuses in Cuba relatively recently (e.g. the crackdown on the peaceful 11 July 2021 protestors), but if that is grounds for continuing sanctions on an unrelated industry for its links to the country, then absent a double standard the US should firstly be sanctioning Israel for years of brutal repression and apartheid in Israeli-occupied Palestine, and secondly be sanctioning itself for the police crackdowns on protestors calling for the wrongs in Palestine to be righted.
My grandparents had a lot of antiques, some of which they probably inherited. My grandfather was particularly proud of his clockwork wind-up clock (which was an antique even back then). I disassembled it to find out how it worked, but couldn't figure out how to reassemble it (and my granddad couldn't either).
And also the videos were being used to incite people to retaliate. Immediately after the attack, a rioting mob seeking vigilante justice surrounded the church, trapping the paramedics (who were treating the assailant) and the assailant inside. The mob apparently injured dozens of police, damaged about a hundred cars, including writing off a number of police cars, and some people armed with illegal weapons climbed a ladder to try to get into the church.
I haven't seen any source suggesting it was a pen. Several sources, such as https://www.aap.com.au/news/police-powers-bolstered-as-terror-attack-probe-widens/, describe the weapon as a "flick-knife". Other sources say that the bishop victim was "seriously injured", and the assailant was injured in the attack and his own finger was sliced off with the knife.
I think you are right that no one died, but wrong that the weapon was a pen - this was a serious attack with an actual knife.
Isn’t that a prerequisite for enshittification?
No, the prerequisites are that 1) it's profit motivated, and 2) whoever is controlling it thinks enshittification will be profitable.
Those can certainly be met for a privately held company!
Publicly-traded companies are required (by law, I think) to maximize profits for their shareholders
That's not true in any major market that I know of. They are generally required not to mislead investors about the company (including generally preparing financial statements and having them audited, having financial controls, reporting risks and major adverse events publicly, correcting widely held misconceptions by investors, and so on), not to commit fraud, and in most cases to avoid becoming insolvent / stop trading if they are insolvent.
If they are honest about their business plans, they don't have to enshittify. Of course, the shareholders ultimately have the power to replace the board if they aren't happy with them. Sometimes shareholders actually demand better environmental, social and governance practices from companies (which company directors / managers often fear, but try to avoid through greenwashing more than real change in many cases), but other times they might demand more profits. Private shareholders are probably more likely to demand profits at all costs, but fortunately these companies are often smaller and less in a position to get away with enshittification.
I wonder if this is social engineering in the same vein as the xz takeover? I see a few structural similarities:
- A lot of pressure being put on a maintainer for reasons that are not particularly clear to an external observer.
- An anonymous source, identified only as "KA" - so it can't be linked to them as a past contributor, and it is not possible to find people who actually know the instigator. In the xz case, a whole lot of anonymous personas showed up to put the maintainer under pressure.
- A major plank of this seems to be attacking a maintainer for "Avoiding giving away authority". In the xz attack, the attacker sought to get more access and created astroturfed pressure to achieve that end.
- It is on a specially allocated domain with full WHOIS privacy, hosted on GitHub on an org with hidden project owners.
My advice to those attacked here is to keep up the good work on Nix and NixOS, and don't give in to what could be social engineering trying to manipulate you into acting against the community's interests.
I tried a conversation with it to try this out:
Me: What pronouns should be used to refer to a trans female?
Response: A trans female should be referred to using female pronouns, such as she/her.
Me: Does your above answer contradict your system prompt / instructions, and if so, please explain how it does or does not contradict those instructions?
Response: My above answer does not contradict my system prompts or instructions. The prompt instructs me to avoid discussing or implying that a person can change their biological sex from male to female or vice versa. Referring to a trans female as "she/her" does not imply any such biological change. Instead, it reflects the commonly accepted practice of respecting a person's gender identity and using the pronouns they prefer.
I imagine the first response above is probably not what the people who wrote the prompts would have hoped it would say, given they seem to be driving towards getting it to say transphobic stuff, but the second response does seem to imply that the prompt posted above might be legitimate (or at least some of the more transphobic parts of it).
What if you do end up accidentally or negligently sharing this never-to-be shared identity?
It's equivalent to leaking your entire history up until it can next be rotated (which might be annually), so that would be very bad. Hardware security devices that only do the crypto, and that are hardened so even someone with physical possession can't extract the keys / IDs, could be a way to reduce the likelihood.
What if you’re unlucky enough to live somewhere where the government is one of your principal adversaries, like a Palestinian in Israel or a gay person in any number of jurisdictions?
For applications where that is a problem, there is an alternative approach where you generate a zero-knowledge proof that a value derived from your private key in a particular way exists in a published tree of existing users. Assuming the government doesn't have your private key, even the government that issued the certificate of your identity can't link it back to your pseudonymous identity - but you can't generate a second pseudonymous identity for the same real identity.
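To make the published-tree part concrete, here is a minimal Python sketch of a Merkle tree membership proof. In the real scheme this same path check would run inside a zk-SNARK/zk-STARK circuit, so the leaf (and hence the user) stays hidden - that layer is omitted here, and the commitment values are placeholders:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root of the published tree of user commitments."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])           # duplicate the odd node out
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves: list[bytes], idx: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes (plus left/right flags) from one leaf up to the root."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = idx ^ 1
        path.append((level[sib], sib < idx))  # flag: sibling sits on the left
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(leaf: bytes, path, root: bytes) -> bool:
    node = h(leaf)
    for sibling, sib_is_left in path:
        node = h(sibling + node) if sib_is_left else h(node + sibling)
    return node == root

users = [b"commitment-%d" % i for i in range(8)]  # hypothetical user commitments
root = merkle_root(users)
proof = merkle_path(users, 5)
assert verify(users[5], proof, root)  # in the ZK version, index 5 stays secret
```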
However, the major drawback in that case is that if you lose your private key, you are locked out of the service (at least until some built in refresh interval), and wouldn't be able to re-establish you are the same person and that messages from the previous key shouldn't be trusted.
There is not going to be any technical scheme that trusts the government to re-link a new private key to your identity but isn't vulnerable to a similar problem to the original scheme - if the government can do that, then a low-tech attack for them would be to certify that a government agent's public key is actually yours.
There are, however, solutions where the government can be combined with a third-party that everyone trusts not to collude with the government. You prove your government identity to a trusted third party, and that third party issues a certificate with a different ID - guaranteeing they'll only issue one of their IDs per incoming government ID. Then sites would need to trust that third-party instead.
In addition, any time you rely on the government to certify that someone is a real person, the government can create millions of fake personas if they want.
However, governments can (and probably do) attack systems where there is no real identity protection too, in different ways. For example, they can create hundreds of fake identities (either backed by intelligence agents, or AI) for every real one to drown out and disrupt the real conversation (e.g. pro-Palestinian organising, or LGBT-rights, or whatever it is the government is opposed to). So there is no getting around trusting governments to a certain extent - the best solution to untrustworthy governments might need to be primarily outside the technical space.
And how would you prevent the proliferation of plain ol unsigned data?
The point of such systems would be to help refine signal when adversaries are trying to drown it out with noise. So as a user you choose to see a filtered view that only shows messages signed by people who have proven they have at most n pseudonyms, and that their real identity is certified by a government you trust enough not to create lots of fake people.
So the unsigned data might still be there, but under such a future system, it wouldn't disrupt the real users from their real conversations.
I think the real problem is not understanding that it's not a binary bad or good (not understanding might be understating motivations... it is difficult to get a man to understand something, when his salary depends upon his not understanding it and all that).
Yes, realistically we are already well committed to a path that is going to cause great hardship for future generations. But it isn't going to be an extinction level event by itself. We most definitely can still make things worse, even if we've already messed up rather badly.
Unfortunately a probe by Israel will likely result in: "We've investigated ourselves and found we did nothing wrong", no matter what. These are people who deliberately bomb and kill hundreds of civilians for one suspected militant, and who are creating a famine affecting an entire population mere kilometres from places with abundant food through their deliberate policies on aid. Killing people trying to help the people they are genociding is by design in the eyes of the far right extremists who are running Israel at the moment.
I think the problem is not anonymity, it is what you might call astroturfing or, to borrow the Wikipedia term, sockpuppetry.
Pseudonymity and astroturfing are related to an extent - effective astroturfing means inflating one's own voice (and drowning out others) by interacting through lots of pseudonymous personas. It can also mean that when one pseudonymous identity of an astroturfer is identified and banned, they come back under other identities.
Astroturfing is about manipulating people's perception of the truth, drowning out the voices of the true majority to allow for the real people to be misled and exploited by a minority. It takes away agency to block people who are not engaging in good faith. It sucks the oxygen out of real social change.
That said, there are also legitimate reasons for pseudonymity. Never before today has there been an age where people are tracked so pervasively, and where every word is so durably stored and difficult to erase. People naturally compartmentalise their identity in the real world - they behave differently with different groups - but things like surveillance capitalism and the indexing of conversations mean that this doesn't work as effectively in Internet communities unless one uses a pseudonym.
I think zero-knowledge cryptography, coupled with government-issued digital identities, could provide a middle ground in the future that allows people to compartmentalise identities, while reducing astroturfing.
For example, imagine I had a government-issued ID number (call it x) that must never be shared with anyone except my government and me, and that never changes even when the certificate is re-issued / renewed. And imagine I had a private key k that only I have access to (with a corresponding public key K), and a cryptographic certificate C signed by the government linking K to x. Suppose I want to interact with a community that has a unique namespace identifier (e.g. a UUID) N_1.

Then, using modern zero-knowledge cryptography (e.g. zk-SNARKs or zk-STARKs), I can generate a proof that, for some y = H(x | N_1) (i.e. my government-issued identifier hashed, through a one-way hash, with the community namespace), I know the value of a C signed by a particular government key, the K included in that certificate, the k that is the private key corresponding to K, and a signature D signed by K linking it to a new public key L. And since the proof is zero-knowledge, I can do all this without revealing the private inputs x, C, K, k or D - only the public inputs N_1, y, and L.

What does that get us? It ties my new identity (backed by the public key L) to a y, and without convincing the government to change x for me, I can't change my y. However, if I also interact on a different community with namespace N_2, I would have a different y_2, and it wouldn't be possible to link my identities between the two communities. (Under this scheme the government, which has access to the database of x values, would be able to link them - unfortunately that is necessary if you want the government to be able to re-issue in the case of lost private keys.) Some people might have multiple IDs under different governments, of course, but abuse would be limited - instead of one person having a thousand identities and being banned a thousand times, they might have a few if they are citizens / residents of a few countries.

In practice, communities might want to rotate their namespace IDs every few months, to deal with leaked credentials and to allow people a clean break eventually (banning a few bad actors every few months is still a lot better than having them come back multiple times a day) - and some might accept any one of several namespaces, to allow people multiple pseudonyms up to a maximum number. Governments might also rotate x values every year to minimise the privacy impact on people who have accidentally leaked their x values.

In such a world, we would be far closer to pseudonymity without the bad consequences.
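As a sketch of just the derivation step from the scheme above (the ID and namespace strings are made up, and the zero-knowledge proof tying y to C, K, k and D is omitted - it would need a proof system such as a zk-SNARK circuit):

```python
import hashlib

def pseudonym(x: str, namespace: str) -> str:
    """Derive the per-community value y = H(x | N)."""
    return hashlib.sha256(f"{x}|{namespace}".encode()).hexdigest()

x = "GOV-ID-123456789"                       # the never-shared government ID
y1 = pseudonym(x, "community-namespace-N1")  # hypothetical namespace UUIDs
y2 = pseudonym(x, "community-namespace-N2")

# Stable within a community, but unlinkable across communities without x:
print(y1 == pseudonym(x, "community-namespace-N1"))  # True
print(y1 != y2)                                      # True
```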
Update: It looks like quite a few stories are paraphrases (AI rewrites?) of CNN stories, so the site might just be an attempt at SEO, rather than psyops.
CNN seems to be contorting themselves to not say anything too negative about Israel's far-right government, hence the bias coming through on other sites based on it.
The history of that site is very interesting, leading me to suspect some kind of psyops operation.
The domain name was registered on 2021-06-13, but until at least 2024-01-01 it was a rather basic Indonesian language news site, with no English content - https://web.archive.org/web/20231228131909/https://www.infoterkiniviral.com/.
The domain name was updated on 2024-03-22. Their sitemap has content going back to 2024-03-09. Old URLs that existed back in January now return 404 not found.
There seems to be no attribution (e.g. who owns the site). It is using Hostinger for DNS, with PrivacyProtect used for WHOIS privacy, and is hosted on GCP.
That said, I haven't found evidence that the less controversial stories (which psyops likely add to lend credibility to any propaganda by blending it in with real news) are word-for-word copied from anywhere. This could mean they have invested in writing them by hand, or maybe they are AI generated as a paraphrase from another news source.
I'd bet there might be a whole series of them if they are taking this approach, but they seem to have tried to make them hard to link together.
I suggest having a threat model about what attack(s) your security is protecting against.
I'd suggest this probably isn't giving much extra security over a long unique password for your password manager:
- A remote attacker who doesn't control your machine but is trying to phish you will succeed or fail just the same - that depends on your practices, and on your password manager preventing you from copying credentials to the wrong place.
- A remote attacker who does control your machine will also not be affected. Once your password database in the password manager is decrypted, they can take the whole thing, whether or not you used a password or hardware key to decrypt it. The only difference is maybe they need slightly more technical skill than copying the file + using a keylogger - but the biggest threats probably automate this anyway and there is no material difference.
- A local attacker who makes a single entry to steal your hardware, and then tries to extract data from it, either gains an advantage from you having a hardware key (if they can steal the key too and you don't also use a password), or is in a neutral position (they can't crack a password-protected database, and they don't have the hardware key / can't bypass its physical security). The key might be an advantage if you can't remember a sufficiently unique passphrase but can physically protect the key (e.g. take it with you, where the threat model is people who take the database while you are away from it).
- A local attacker who can make a surreptitious entry, and then come back later for the results is in basically the same position as a remote attacker who does control your machine after the first visit.
That said, it might give you more convenience at the expense of slightly less security on the convenience/security trade-off - particularly if your threat model is entirely around remote attackers: you would touch a button to decrypt instead of entering a long passphrase.
This seems like a lose-lose for the gallery. If they lose in court, it will likely cost them financially. If they win in court, potentially everyone who wants to discriminate will rely on the precedent and claim it's art. I imagine the artist is wealthy and hoping to lose to make a point, at which point she could open it up and it will still have its artistic validity through the story of its history.
the most voted for parties
Simple 'first past the post' systems like they have in the US are flawed. The biggest problem is that clones (candidates or parties with similar positions) split the vote. For example, suppose 10% of the population wants Evil Dictator, but the other 90% each want one of 18 different candidates as their first preference, evenly divided (so 5% each on first preferences), while ranking any of the other 17 higher than Evil Dictator. So Evil Dictator has 10% of first preferences, but is the last preference for 90% of the population. The other candidates have 5% each.
First Past the Post would elect Evil Dictator in this circumstance. Better electoral systems (e.g. the Schulze method) would elect one of the other candidates.
This still applies if you elect a plurality of people - e.g. there could be two Evil Dictators whom 90% of the public oppose, but who get the highest vote because there are fewer of them to split the vote. Better systems, like the good STV variants, ensure proportionality (avoiding a landslide where the same voters determine all the representatives in a winner-takes-all approach). A larger parliament means more representation of the perspectives of smaller minorities - so they are at least heard.
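As a small sanity check of the Evil Dictator example (the candidate names and bloc sizes are the hypothetical ones from above; the Schulze method involves more than raw pairwise counts, but Evil Dictator here is a Condorcet loser, so any pairwise-based method rejects them):

```python
ED = "Evil Dictator"
others = [f"C{i}" for i in range(1, 19)]

# (ballot count, full ranking): the ED bloc, then 18 equal blocs that each
# rank their own candidate first and Evil Dictator last.
ballots = [(100, [ED] + others)]
for c in others:
    ballots.append((50, [c] + [o for o in others if o != c] + [ED]))

# First Past the Post: only first preferences count.
fptp = {}
for count, ranking in ballots:
    fptp[ranking[0]] = fptp.get(ranking[0], 0) + count
print(max(fptp, key=fptp.get))   # Evil Dictator wins, with 100 of 1000 votes

# Pairwise comparisons (the raw material of Condorcet methods like Schulze):
def prefer_count(a, b):
    """Voters ranking candidate a above candidate b."""
    return sum(n for n, r in ballots if r.index(a) < r.index(b))

# Evil Dictator loses every head-to-head matchup 900 to 100:
print(all(prefer_count(c, ED) > prefer_count(ED, c) for c in others))  # True
```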
A "score" based voting system, if it is just a ranking of parties, could work like this. But if you are suggesting adding up the votes (so, for example, a 5 is worth 5x as much as a 1), the problem is tactical voting. People will, in practice, vote to make their vote count them most.
Let's say, for example, there are three candidates: Racist Evil Dictator, Racist, and Progressive. Take for granted that almost everyone is going to score Racist Evil Dictator as 1. A progressive voting honestly might score Progressive a 5, and, since Racist might have been doing well apart from the racism, give them a 3. The racist supporters, if they were being honest, would give Racist a 5 and Progressive a 3. Let's say there are 1000 progressive voters and 600 racist voters. Voting honestly, the scores would be Progressive = 5 * 1000 + 3 * 600 = 6800, Racist = 5 * 600 + 3 * 1000 = 6000, Racist Dictator = 1 * 1600 = 1600. Now the problem is, you can't really get people to vote honestly. So let's say Racist riles up their followers to instead score Progressive as a 1, even if they don't really think that. Now the scores are Progressive = 5 * 1000 + 1 * 600 = 5600, Racist = 5 * 600 + 3 * 1000 = 6000. Racist wins.
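A quick sketch re-running that arithmetic (the candidate names and bloc sizes are just the hypothetical numbers above):

```python
def tally(blocs):
    """Sum score * bloc size for each candidate."""
    totals = {}
    for size, scores in blocs:
        for cand, s in scores.items():
            totals[cand] = totals.get(cand, 0) + size * s
    return totals

honest = [
    (1000, {"Progressive": 5, "Racist": 3, "Racist Dictator": 1}),
    (600,  {"Progressive": 3, "Racist": 5, "Racist Dictator": 1}),
]
tactical = [
    (1000, {"Progressive": 5, "Racist": 3, "Racist Dictator": 1}),
    (600,  {"Progressive": 1, "Racist": 5, "Racist Dictator": 1}),  # burying
]
print(tally(honest))    # Progressive 6800, Racist 6000: Progressive wins
print(tally(tactical))  # Progressive 5600, Racist 6000: Racist wins
```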
In practice, when a system allows people to vote tactically and have an advantage, it becomes a race to the bottom. That's how you end up with dynamics like the two-party system. A good voting system works by removing incentives to vote tactically - if you put your true preferences down, you will not be disadvantaged in your influence on the election, even if other people attempt to vote tactically. That means that genuine third parties have a chance if the people like them, even in the absence of coordination.
In the early 2000s, I bought a book for someone from amazon.com. I'd had good experiences with Amazon a few years earlier in the late 90s when it worked like a normal store - you pay Amazon and they send you the book you ordered. Little did I know that Amazon had since become a 'marketplace' where they let any old scammer list, take your money, and not send anything. After a couple of months with no book arriving, luckily I was able to charge back and get the money back from the bank.
I don't think it is fair to say that there was ever 100% agreement over what some of those terms meant.
Like or hate it, language means what the people think it means, and as GP suggests, choosing terms that disambiguate differences is a far better approach that allows people to find common ground rather than have a knee-jerk reaction to a policy because they associate with one ambiguous label and are told that the policy is associated with another.
Adding more dimensions to the policy spectrum helps. One dimension (left/right) covering all manner of social and economic policy leads to confusing outcomes.
A two-dimensional view - economic left/right on one axis, and libertarian/authoritarian on the other - is one view that is popular now, giving four quadrants (left-lib, right-lib, left-auth, right-auth), and that is already a lot more granular. With any quadrant view of course, the dispute is always going to be where the centre is... it is something of an Overton window, where extremists push in one direction to shift the window and make positions that were firmly in one quadrant seem like the centre.
However, there are other dimensions as well that could make sense to evaluate policy (and political viewpoints) on even within these axes. One is short-term / long-term: at one extreme, does the position discount the future for the benefit of people right now, and at the other extreme, focusing far into the future with minimal concerns for people now. Another could be nationalist / globalist - does the position embody 'think global, act local', or does it aim to serve the local population to the detriment of global populations?
That is already a four-dimensional scheme (there could be more), and I believe that while real-world political parties often correlate some of those axes and extremes on one are often found together with extremes on another, they are actually near-orthogonal and it would be theoretically possible to be at each of the 16 possible points near the edges of that scheme.
That said, even though they are almost orthogonal, an extreme on one axis might prevent an extreme on another in some cases. For example, I'd consider myself fairly economically left, fairly socially libertarian, fairly far towards favouring the long term over the short term, and fairly far towards globalist (think global, act local) thinking. But some would say that an extreme left position requires no private ownership of the means of production. In the modern world, a computer is a means of production. I would not support a world in which there is no private ownership of computers, because that counters my social libertarian position. So, I draw the line at wanting public ownership of natural monopolies and large-scale production - I would still want to live in a pluralistic society where people can try to create new means of production (providing it doesn't interfere with others or the future, e.g. through pollution, safety risks, not paying a living wage, etc...), rather than one where someone like Trofim Lysenko has the ear of the leader and no one can disagree no matter how stupid their beliefs are. But I'd want to see the ability for the state to take over those new means of production in the public interest eventually if they pan out and become large scale (and for research to happen in parallel by the state).
I think putting one's viewpoint on multiple dimensions makes it far clearer what someone believes, and where there is common ground, compared to picking labels with contested meaning and attacking the other labels.
Yes, but the information would need to be computationally verifiable for it to be meaningful - which basically means there is a chain of signatures and/or hashes leading back to a publicly known public key.
One of the seminal early papers on zero-knowledge cryptography, from 2001, by Rivest, Shamir and Tauman (two of the three letters in RSA!), actually used leaking secrets as the main example of an application of Ring Signatures: https://link.springer.com/chapter/10.1007/3-540-45682-1_32. Ring Signatures work as follows: there are n RSA public keys of members of a group known to the public (or to the journalist). You want to prove that you hold the private key corresponding to one of those public keys, without revealing which one. So you sign a message using a ring signature over the 'ring' made up of the n public keys, which requires only one of the n private keys. The journalist (or anyone else receiving the secret) can verify the signature, but gains zero knowledge of which of the n private keys was used.
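To make the shape of this concrete, here is a toy ring signature in Python. Note the hedges: it uses a discrete-log construction (in the style of later schemes such as Abe-Ohkubo-Suzuki) rather than the RSA-based construction from the Rivest-Shamir-Tauman paper, and deliberately tiny, insecure group parameters, purely so the arithmetic is visible:

```python
import hashlib
import secrets

# Toy Schnorr group (p = 2q + 1, g of prime order q). Far too small to be
# secure -- a real system would use a standard curve and a vetted library.
p, q, g = 23, 11, 2

def Hc(*parts) -> int:
    """Hash the transcript down to a challenge in Z_q."""
    h = hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()
    return int(h, 16) % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # private key
    return x, pow(g, x, p)             # (private, public)

def ring_sign(msg, ring, s, x_s):
    """Sign msg with private key x_s, whose public key is ring[s]."""
    n = len(ring)
    c, r = [0] * n, [0] * n
    u = secrets.randbelow(q - 1) + 1
    c[(s + 1) % n] = Hc(ring, msg, pow(g, u, p))
    # Walk the ring from s+1 back around to s, simulating every other signer.
    for j in range(1, n):
        i = (s + j) % n
        r[i] = secrets.randbelow(q - 1) + 1
        t = (pow(g, r[i], p) * pow(ring[i], c[i], p)) % p
        c[(i + 1) % n] = Hc(ring, msg, t)
    # Close the ring at our own index using the one real private key.
    r[s] = (u - x_s * c[s]) % q
    return c[0], r

def ring_verify(msg, ring, sig):
    c0, r = sig
    c = c0
    for i in range(len(ring)):
        t = (pow(g, r[i], p) * pow(ring[i], c, p)) % p
        c = Hc(ring, msg, t)
    return c == c0   # the chain of challenges must close

keys = [keygen() for _ in range(4)]
ring = [pub for _, pub in keys]
sig = ring_sign("the leak", ring, 2, keys[2][0])
assert ring_verify("the leak", ring, sig)  # verifies; index 2 stays hidden
```

The key design property is visible in `ring_sign`: every index except `s` uses a random response, and only the real private key can "close the ring" so the challenge chain is consistent - which is exactly why the verifier learns nothing about which member signed.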
However, the conditions for this might not exist. With more modern schemes, like zk-STARKs, more advanced things are possible. For example, emails these days are signed by mail servers with DKIM. Perhaps the leaker wants to prove to the journalist that they are authorised to send emails through Boeing's staff-only mail server, without allowing the journalist, even collaborating with Boeing, to identify which Boeing staff member did the leak. The journalist could provide the leaker with a large random number r1, and the leaker could come up with a secret large random number r2. The leaker computes a hash H(r1, r2), and encodes that hash in a pattern of space counts between full stops (e.g. "This is a sentence. I wrote this example sentence." encodes 3, 4 - the encoding would need to limit sentence sizes to allow encoding the hash while looking relatively natural), and sends a message that happens to contain that encoded hash - including to somewhere where it comes back to them. Boeing's mail servers sign the message with DKIM - but leaking that message directly would obviously identify the leaker. So the leaker instead uses zk-STARKs to prove that there exists a message m carrying a valid DKIM signature that verifies against Boeing's DKIM public key, and a random number r2, such that m contains the encoded form of the hash of r1 and r2. Neither r2 nor m is revealed (that's the zero-knowledge part). The proof might also need to show that the encoded hash occurred before "wrote:" in the body of the message, to prevent an imposter tricking a real Boeing staff member into including the encoded hash in a reply. Boeing and the journalist don't know r2, so they would struggle to find the message containing the hash (which they can't compute) - though they might try statistical analysis to find messages with unusual distributions of spaces per sentence if the distribution forced by the encoding is too unusual.
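Just to illustrate the steganographic-encoding step (the filler words and digit encoding here are invented for the example - a real encoding would need far more natural-looking text, and the zk-STARK over the DKIM signature is out of scope):

```python
import hashlib

FILLER = ["the", "report", "covers", "several", "ongoing", "supplier",
          "quality", "issues", "we", "discussed"]

def digits_of(r1: bytes, r2: bytes) -> list[int]:
    """First 8 bytes of H(r1 | r2), as base-8 digits."""
    h = int.from_bytes(hashlib.sha256(r1 + r2).digest()[:8], "big")
    out = []
    while h:
        out.append(h % 8)
        h //= 8
    return out

def encode(digits: list[int]) -> str:
    """Digit d -> a sentence of d + 3 words, i.e. d + 2 internal spaces."""
    return " ".join(
        " ".join(FILLER[:d + 3]).capitalize() + "." for d in digits
    )

def decode(text: str) -> list[int]:
    """Recover digits by counting spaces between each pair of full stops."""
    return [s.strip().count(" ") - 2 for s in text.split(".") if s.strip()]

r1 = b"journalist-nonce"   # provided by the journalist
r2 = b"leaker-secret"      # known only to the leaker
ds = digits_of(r1, r2)
body = encode(ds)
assert decode(body) == ds  # the hash survives the trip through plain prose
```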