Doom NPCs with Zero-Knowledge Proofs - EZKL Blog
Isn't this still subject to the same problem, where a system can lie about its inference chain by returning a plausible chain which wasn't the actual chain used for the conclusion? (I'm thinking from the perspective of a consumer sending an API request, not the service provider directly accessing the model.)
Also:
Any time I see a highly technical post talking about AI and/or crypto, I imagine a skilled accountant living in the middle of mob territory. They may not be directly involved in any scams themselves, but they gotta know that their neighbors are crooked and a lot of their customers are gonna use their services in nefarious ways.
The model doing the inference is committed to beforehand (it's hashed), so you can't lie about which model produced the inference. That is how ezkl, the underlying library, works.
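To make the "committed to beforehand" idea concrete, here is a minimal sketch using a plain SHA-256 hash over the weights. (This is only an illustration of a commitment; ezkl's actual commitment is part of the proof circuit, not a bare file hash, and the function names here are made up.)

```python
import hashlib

def commit_to_model(weights: bytes) -> str:
    """Hash the model weights to produce a public commitment."""
    return hashlib.sha256(weights).hexdigest()

def verify_model(weights: bytes, commitment: str) -> bool:
    """Check that the weights claimed for inference match the commitment."""
    return commit_to_model(weights) == commitment

# The commitment is published before any inference runs.
weights = b"stand-in for real model weights"
commitment = commit_to_model(weights)

# Later, anyone can check a served model against the commitment;
# any change to the weights produces a different hash.
assert verify_model(weights, commitment)
assert not verify_model(weights + b"tampered", commitment)
```

The point is that the commitment binds the prover to one specific set of weights before anyone sees the outputs.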
I know a lot of people in this cryptography space, and there are definitely scammers across the general "crypto space", but in the actual cryptography space most people are driven by curiosity or ideology.
I don't understand what exactly is being verified here? Model integrity? Factors for "reasoning"?
Integrity of the model, inputs, and outputs, but with the potential to hide either the inputs or the model and maintain verifiability.
Definitely not reasoning, that's a whole can of worms.
But what is meant by "integrity of the model, inputs and outputs"?
I guess I don't understand the attack vector; what's the threat here? Someone messes with the model file, or refines a model toward a specific malicious bias, like inserting scam links where legit links would go, and passes it off as the real deal?
I'm more general cybersec than crypto so idk but isn't that what hash sums are for?
Surely if someone messed with my .ckpt or .safetensors it won't be the same file anymore?
And what does that have to do with validity of the inputs?
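A file-level checksum, as the comment suggests, does catch tampering with a .ckpt or .safetensors file; a sketch of that check is below. What it can't do is tell a remote API consumer which file the server actually ran on their input, which is the gap the zk proof is aimed at. (The helper name here is made up for illustration.)

```python
import hashlib

def checkpoint_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of a checkpoint file, read in chunks so large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Any bit flip in the file changes the digest, so a published digest detects a swapped checkpoint on disk; it says nothing about what a server executed behind an API.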
Hey can someone dumb down the dumbed down explanation for me please?
AI is a magical black box that performs a bunch of actions to produce an output. We can’t trust what a developer says the black box does inside without it being completely open source (including weights).
This is a concept for a system where the actions performed can be proven to those without visibility inside the box, so they can trust the box is doing what it says it's doing.
An AI enemy that can prove it isn’t cheating by providing proof of the actions it took. In theory.
Zero-knowledge proofs make a lot of sense for cryptography, but applied in a more abstract setting like this, they still rely on a lot of trust that the implementation generates proofs for all actions.
Whenever I see Web3, I personally lose any faith in whatever is being presented or proposed. To me, blockchain is an impressive solution to no real problem (except perhaps border control / customs).
Zk in this context allows someone to be able to thoroughly test a model and publish the results with proof that the same model was used.
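A toy version of that test-and-publish flow, assuming made-up helper names (the "proof" here is just a hashed transcript, not a real zero-knowledge proof, and the toy verifier must re-run the model itself; a real zk proof replaces that re-execution with cheap proof checking and can hide the weights):

```python
import hashlib

def model_hash(weights: bytes) -> str:
    """Public commitment to the model under test."""
    return hashlib.sha256(weights).hexdigest()

def run_model(weights: bytes, x: int) -> int:
    # Stand-in "model": any deterministic function of weights and input.
    return (len(weights) * x) % 97

def prove_inference(weights: bytes, x: int) -> dict:
    """Tester publishes a result bound to a specific model commitment."""
    return {"model": model_hash(weights), "input": x, "output": run_model(weights, x)}

def verify_inference(proof: dict, weights: bytes) -> bool:
    # Naive check: recompute everything. This is exactly what a real zk
    # proof avoids -- the verifier would check the proof instead of
    # needing the weights and re-running the model.
    return (proof["model"] == model_hash(weights)
            and run_model(weights, proof["input"]) == proof["output"])

weights = b"published-model-v1"
proof = prove_inference(weights, 5)
assert verify_inference(proof, weights)
```

The published results are bound to the model commitment, so the tester can't quietly swap in a different model.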
Blockchain for zk-ml is actually a great use case for 2 reasons:
The way AI is trained today creates a black-box solution; the author says only the developers of the model know what goes on inside the black box.
This is a major pain point in AI, where we are trying to understand it so we can make it better and more reliable. The author mentions that unless AI companies open source their work, it's impossible for everyone else to 'debug' the circuit.
Zero-knowledge proofs are how they are trying to combat this: using mathematical algorithms, they try to verify the output of an AI model in real time, without having to know the underlying intellectual property.
This could be used to train AI further and drastically increase its reliability, so it could be used to make more important decisions and adhere much more easily to the strategies for which it is deployed.
Thanks for the 'for dummies' explanation.