So I want to build a system where my AI agents can prove they did something correctly without revealing all the juicy details. Think of it like showing your math teacher that you solved the problem right, but without showing your messy work. That's zk-SNARKs in a nutshell.
The Big Picture
I'm building an MVP that lets agents create cryptographic proofs of their actions. The beauty? Auditors can verify these proofs without seeing sensitive data. It's like having a digital notary that never lies.
Step 1: Picking a Simple Action
I need to start small. Really small.
// Simple action: increment a counter
let action = "Agent increments counter from 5 to 6"
Public parts: agent_id, action_type
Private parts: initial_value, final_value (the witness)
The constraint is dead simple: final_value == initial_value + 1
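To make the public/private split concrete, here's a minimal plain-Rust sketch of the data layout. The type and field names (`PublicInputs`, `Witness`, `constraint_holds`) are my own illustrative choices, not from any zk library:

```rust
/// Data the auditor is allowed to see.
struct PublicInputs {
    agent_id: u64,
    action_type: String,
}

/// Data only the prover knows (the witness).
struct Witness {
    initial_value: u64,
    final_value: u64,
}

/// The single relation the circuit must enforce.
fn constraint_holds(w: &Witness) -> bool {
    w.final_value == w.initial_value + 1
}

fn main() {
    let public = PublicInputs { agent_id: 42, action_type: "increment".to_string() };
    let witness = Witness { initial_value: 5, final_value: 6 };
    println!("{} {}: constraint holds = {}", public.agent_id, public.action_type, constraint_holds(&witness));
}
```

The key design point: nothing in `PublicInputs` reveals the counter values; the proof will only attest that some valid witness exists.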
Step 2: Choosing My Rust Library
Two solid options:
- ark-groth16: simple for fixed circuits, but needs a trusted setup
- halo2: more flexible, universal setup, steeper learning curve
// My circuit constraint
constraint: final_value == initial_value + 1
I'll go with whatever has better docs and examples for rapid prototyping.
Step 3: Building the Circuit
This is where the magic happens. I'm defining the mathematical rules that prove my action is valid.
impl Circuit {
fn synthesize(&self) -> Result<(), SynthesisError> {
// Define: final_value == initial_value + 1
enforce_constraint(final_value, initial_value + 1)
}
}
No room for bugs here: an unsound constraint silently lets invalid actions verify, which breaks everything the system promises.
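To see why a constraint bug is fatal, here's a toy comparison in plain Rust (modeling the relation directly, not using a real circuit API). The "buggy" variant is a hypothetical example of a constraint that looks reasonable but is unsound:

```rust
/// Correct relation: exactly one increment.
fn correct_constraint(initial: u64, final_value: u64) -> bool {
    final_value == initial + 1
}

/// A plausible-looking but buggy relation: it accepts any increase,
/// so a malicious prover could "prove" a jump from 5 to 500.
fn buggy_constraint(initial: u64, final_value: u64) -> bool {
    final_value > initial
}

fn main() {
    // The honest witness satisfies both relations...
    assert!(correct_constraint(5, 6) && buggy_constraint(5, 6));
    // ...but only the correct constraint rejects a forged witness.
    assert!(!correct_constraint(5, 500));
    assert!(buggy_constraint(5, 500)); // the bug: a bogus action would verify
    println!("soundness check done");
}
```

In a real circuit the same mistake would be invisible at the API level: both versions produce proofs that verify, and only the correct one actually means anything.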
Step 4: Trusted Setup (Development Only)
For Groth16, I need some special keys. Think of it as creating a shared secret handshake.
let (proving_key, verifying_key) = setup_ceremony()
// WARNING: This is NOT production-ready!
This local setup is just for testing. Real production needs a proper ceremony.
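For local testing, a dev-only stand-in for the setup can be as dumb as this sketch. The key types and `dev_setup` function are placeholders I'm inventing here; real Groth16 keys are large structured objects produced from secret randomness ("toxic waste") that a proper ceremony must destroy:

```rust
/// Placeholder key types; real Groth16 keys are large structured objects.
struct ProvingKey(u64);
struct VerifyingKey(u64);

/// Toy "setup" for local development only.
/// WARNING: deterministic seed = zero security. Testing only.
fn dev_setup(seed: u64) -> (ProvingKey, VerifyingKey) {
    (ProvingKey(seed), VerifyingKey(seed ^ 0xDEAD_BEEF))
}

fn main() {
    let (pk, vk) = dev_setup(1);
    println!("pk={} vk={}", pk.0, vk.0);
}
```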
Step 5: The Prover Module
This is my agent's brain - it generates proofs.
fn generate_proof(log_entry: LogEntry) -> Proof {
let witness = extract_private_data(log_entry)
let public_inputs = extract_public_data(log_entry)
return prove(circuit, witness, public_inputs, proving_key)
}
One proof per action. No fancy aggregation yet.
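Before proving anything, the prover has to split each log entry into its public and private halves. Here's a plain-Rust sketch of that step; `LogEntry` and `split_entry` are hypothetical names, not from any library:

```rust
/// A raw action log entry, as the agent records it.
struct LogEntry {
    agent_id: u64,
    action_type: String,
    initial_value: u64,
    final_value: u64,
}

struct PublicData { agent_id: u64, action_type: String }
struct PrivateWitness { initial_value: u64, final_value: u64 }

/// Split one entry into what the auditor may see and what stays secret.
fn split_entry(e: &LogEntry) -> (PublicData, PrivateWitness) {
    (
        PublicData { agent_id: e.agent_id, action_type: e.action_type.clone() },
        PrivateWitness { initial_value: e.initial_value, final_value: e.final_value },
    )
}

fn main() {
    let e = LogEntry { agent_id: 7, action_type: "increment".into(), initial_value: 5, final_value: 6 };
    let (pubd, wit) = split_entry(&e);
    println!("public: {} {} / witness: {} -> {}", pubd.agent_id, pubd.action_type, wit.initial_value, wit.final_value);
}
```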
Step 6: The Verifier Module
This is my auditor - it checks if proofs are legit.
fn verify_proof(proof: Proof, public_inputs: PublicData) -> bool {
return verify(proof, public_inputs, verifying_key)
}
Simple output: "Valid" or "Invalid". No gray areas.
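Here's an end-to-end mock of the prove/verify round trip, using std's hasher as a stand-in so it compiles without any zk crates. To be loud about it: this has no security and is not zero-knowledge; it only demonstrates the API shape (prover sees the witness, verifier sees only the proof and public inputs). A real implementation would call the chosen library's Groth16 prove/verify instead:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Proof { tag: u64 }

// Toy stand-in for the proving/verifying key pair. NOT secure.
const TOY_KEY: u64 = 0x5EED;

fn hash_public(agent_id: u64, action_type: &str, key: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (agent_id, action_type, key).hash(&mut h);
    h.finish()
}

/// Prover: only emits a proof when the private relation actually holds.
fn generate_proof(agent_id: u64, action_type: &str, initial: u64, final_value: u64) -> Option<Proof> {
    if final_value != initial + 1 {
        return None; // invalid witness: no proof exists
    }
    Some(Proof { tag: hash_public(agent_id, action_type, TOY_KEY) })
}

/// Verifier: checks the proof against public inputs only -- never the witness.
fn verify_proof(proof: &Proof, agent_id: u64, action_type: &str) -> bool {
    proof.tag == hash_public(agent_id, action_type, TOY_KEY)
}

fn main() {
    let proof = generate_proof(7, "increment", 5, 6).expect("valid action");
    println!("valid inputs:  {}", verify_proof(&proof, 7, "increment"));
    println!("forged inputs: {}", verify_proof(&proof, 8, "increment"));
}
```

Note how `verify_proof` never touches `initial` or `final_value`: that separation is the whole point, and it's the property the real SNARK preserves cryptographically.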
Step 7: Storage (Keep It Simple)
For now, just dump everything in a file.
storage.save(public_data, proof)
// No need for fancy databases yet
The proof guarantees integrity, so I just need availability.
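A file-backed store really can be this simple. A sketch using only std, with one tab-separated record per line (the file name and record format are arbitrary choices):

```rust
use std::fs::{File, OpenOptions};
use std::io::{BufRead, BufReader, Write};
use std::path::Path;

/// Append one (public_data, proof) record as a line of text.
fn save_record(path: &Path, public_data: &str, proof_hex: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(f, "{}\t{}", public_data, proof_hex)
}

/// Load all records back for auditing.
fn load_records(path: &Path) -> std::io::Result<Vec<(String, String)>> {
    let f = File::open(path)?;
    let mut out = Vec::new();
    for line in BufReader::new(f).lines() {
        let line = line?;
        if let Some((pubd, proof)) = line.split_once('\t') {
            out.push((pubd.to_string(), proof.to_string()));
        }
    }
    Ok(out)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("proof_store.tsv");
    save_record(&path, "agent=7 action=increment", "a1b2c3")?;
    let records = load_records(&path)?;
    println!("{} record(s) on disk", records.len());
    Ok(())
}
```

Tampering with a stored record just makes its proof fail verification, which is exactly why integrity needs no database machinery here.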
Step 8: Test Everything
This is where I make sure I didn't mess up.
// Unit tests
assert!(circuit.test_valid_action())
assert!(!circuit.test_invalid_action())
// Integration tests
let proof = prover.generate(valid_action)
assert!(verifier.verify(proof))
If invalid actions pass verification, I've got problems.
Step 9: Error Handling
match generate_proof(action) {
Ok(proof) => println!("Proof generated successfully"),
Err(e) => eprintln!("Failed to generate proof: {}", e)
}
Clear success/failure messages. No cryptic errors.
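One way to keep errors non-cryptic is a dedicated error type with human-readable messages. A sketch under the assumption of two failure modes (the `ProofError` enum and its variants are hypothetical):

```rust
use std::fmt;

/// Hypothetical error type for the proving pipeline.
#[derive(Debug)]
enum ProofError {
    ConstraintUnsatisfied,  // witness does not satisfy the circuit
    MissingWitness(String), // a private input was absent from the log
}

impl fmt::Display for ProofError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ProofError::ConstraintUnsatisfied => write!(f, "witness fails the circuit constraint"),
            ProofError::MissingWitness(field) => write!(f, "missing private input: {}", field),
        }
    }
}

// Stand-in prover that surfaces failures as typed errors.
fn generate_proof(initial: u64, final_value: u64) -> Result<u64, ProofError> {
    if final_value != initial + 1 {
        return Err(ProofError::ConstraintUnsatisfied);
    }
    Ok(0xBEEF) // stand-in proof value
}

fn main() {
    match generate_proof(5, 7) {
        Ok(_) => println!("Proof generated successfully"),
        Err(e) => eprintln!("Failed to generate proof: {}", e),
    }
}
```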
What I End Up With
A working system where:
- My agent does something (increment counter)
- Generates a cryptographic proof
- Auditor verifies the proof without seeing private data
- Everyone's happy
It's not fancy, but it proves the concept. From here, I can add more complex actions, better performance, and production-ready security.
The key insight? Start stupidly simple. Get the cryptography right first. Everything else can be optimized later.
Note: This MVP isn't going to win any awards, but it's my first step into zero-knowledge proofs. And honestly? That's pretty boring.
Why this is boring to me
Zero-knowledge systems feel like pure magic on the surface, but under the hood they carry significant drawbacks. Suppose I applied this concept to existing agentic automations: say I want to verify a claim about patient X's medical records without actually seeing those records, i.e., while respecting the patient's privacy. I'd have to run a zk-SNARK computation (think of a classic example like proving a 3-coloring of a graph) repeatedly, n times, once per claim. That's a serious performance overhead, and it can take ages.
Mathematical conclusion in terms of Time Complexity
Proof Generation
- O(n log n) to O(n), depending on the scheme
Verification
- O(1) to O(n log n), depending on the scheme (Groth16 sits at the O(1) end)
Proof size
- ~128 to ~256 bytes (for Groth16)
Agentic AI Logs
- Let L = number of log entries
- Let S = size of each log entry (in bits/fields)
- Let C = total constraints to encode decision validity (logic, rules, traceability)
Combined Complexity (zk-SNARK + Agentic AI Logs)
- Prover: O(total constraints) = O(L⋅S⋅R), where R is the number of rule checks applied per field
- Verifier: O(1) or O(log C)
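A quick back-of-envelope run of these formulas, with illustrative numbers I'm picking out of the air (not benchmarks):

```rust
fn main() {
    // Illustrative assumptions, not measurements:
    let l: u64 = 10_000; // L: log entries
    let s: u64 = 50;     // S: fields per entry
    let r: u64 = 10;     // R: rule checks per field

    // Prover work grows multiplicatively with all three.
    let constraints = l * s * r; // 5,000,000 constraints

    // Groth16 proof size stays flat regardless of circuit size,
    // so storage grows only with the number of entries.
    let proof_bytes_per_entry: u64 = 192; // within the ~128-256 byte range above
    let total_proof_storage = l * proof_bytes_per_entry;

    println!("constraints to prove: {}", constraints);
    println!("proof storage: {} bytes", total_proof_storage);
}
```

The asymmetry is the whole story: proving cost explodes with log volume and rule complexity, while verification and proof size barely move. That's why naively wrapping every agent action in its own SNARK gets painful fast.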