AI Audit Log Proof

Audited ≠ explainable

AI decisions in financial and public sectors lose their rationale when models are updated. The EU AI Act and ISO 42001 demand explainable operations, but plaintext logs alone cannot prove decision immutability. Lemma attaches ZK proofs to AI decision attribution, preserving a structure in which past decision rationale remains traceable even after model changes.

P2 · Verifiable AI · Financial services · Insurance · Healthcare AI · 5 min read

Problem

AI is embedded in consequential decision-making: corporate credit approvals, insurance premium calculations, clinical triage, public benefit eligibility — all domains where retrospective accountability is legally required.

Current audit practices rely on plaintext decision logs and model version numbers. This approach has three structural defects:

  • Plaintext logs are mutable. There is no cryptographic mechanism to prove whether logs have been altered.
  • Model updates make past decisions irreproducible. Preserving the complete weight state and exactly re-executing historical inputs are operationally infeasible.
  • No third-party verification path exists. The ability to verify "this model decided this, on these grounds" without disclosing data is absent from standard MLOps.

The EU AI Act (explainability requirements for high-risk AI, enforcement 2026), ISO 42001 (AI management system certification), and FSA AI governance guidelines — all quietly shift the requirement from "logs exist" to "tamper-proof grounds are provable."

Scenario

September 2026. A regional bank's corporate lending AI denies a small business loan application.

The following year, the business files an appeal: "Wasn't the decision unfair by current standards?"

The problem: the model was updated in December 2026. Which features received which weights, which internal guideline drove the denial — accurately reproducing this is difficult. What remains is a simple log entry: "Model v2.3 / Denial score 0.71 / Reason code C-04." Inside the bank, a quiet consensus forms: "We have logs, but we cannot explain."

With Lemma, the following proofs would have been sealed at the moment of decision:

  • Model identifier and hash
  • Input attribute docHash and CID (identity with originals)
  • Cryptographic signature of the applied internal guideline
  • Attribution of the final decision
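The record sealed at decision time can be sketched as follows. This is a minimal illustration, not Lemma's actual wire format: the field names, the JSON canonicalisation, and the helper name `seal_decision_record` are all assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def seal_decision_record(model_id: str, model_hash: str,
                         input_attributes: dict, guideline_id: str,
                         decision: dict) -> dict:
    """Build the minimal set of commitments sealed at decision time.

    Sketch only: canonicalise the input attributes, hash them into a
    docHash, and bind model, guideline, and outcome into one record.
    """
    canonical = json.dumps(input_attributes, sort_keys=True).encode()
    doc_hash = hashlib.sha256(canonical).hexdigest()

    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,          # e.g. "A-v2.3"
        "model_hash": model_hash,      # hash of the deployed weights
        "doc_hash": doc_hash,          # identity with the original inputs
        "guideline_id": guideline_id,  # e.g. "G-2026-04", signed separately
        "decision": decision,          # e.g. {"outcome": "deny", "score": 0.71}
    }
```

Because the attribute dictionary is canonicalised before hashing, the same inputs always yield the same docHash regardless of key order, which is what makes later re-verification possible.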

Against the appeal, the bank could prove: "On September 18, 2026, model A-v2.3, following guideline G-2026-04, based on input attribute category X, produced this decision" — without disclosing the underlying data. The regulator, audit firm, and appellant can all independently verify the same proof.

No matter how many times the model is updated, the structure of past decisions remains immutable.

Architecture

Lemma's four cryptographic layers correspond to the AI decision lifecycle.

1. ENCRYPT — Sealing at Decision Time

Input attributes, model state, and applied guidelines are encrypted with AES-GCM at the moment the AI decision is produced. Originals remain under the issuer's control; only docHash and CID are exposed externally.
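A sketch of this sealing step, assuming the `cryptography` package and illustrative field names (`doc_hash`, `ciphertext`); key custody and storage layout are simplified away:

```python
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_inputs(plaintext: bytes, aad: bytes) -> dict:
    """Encrypt decision inputs with AES-GCM; only doc_hash is public."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # unique per encryption, never reused with a key
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, aad)
    return {
        "doc_hash": hashlib.sha256(plaintext).hexdigest(),  # exposed externally
        "ciphertext": ciphertext,   # stored under the issuer's control
        "nonce": nonce,
        "key": key,                 # retained by the issuer, never published
    }

def open_inputs(sealed: dict, aad: bytes) -> bytes:
    """Issuer-side decryption; fails if ciphertext or AAD was altered."""
    return AESGCM(sealed["key"]).decrypt(
        sealed["nonce"], sealed["ciphertext"], aad
    )
```

Binding the model identifier in as associated data (`aad`) means the ciphertext cannot later be silently re-attributed to a different model version.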

2. PROVE — ZK Proof Generation

The proposition "this model identifier, against this input attribute hash, following this guideline, produced this decision" is sealed as a proof on a ZK circuit. Weights and input values are excluded; only decision integrity is proven.
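A real ZK circuit (and its prover) is out of scope here; the sketch below stands in with a salted hash commitment purely to show the split between the public statement and the private witness. All names and values are hypothetical.

```python
import hashlib
import json

def prove(statement: dict, witness: dict, salt: bytes) -> str:
    """Stand-in 'proof' binding a public statement to a private witness.

    A real ZK proof lets a verifier check the statement WITHOUT the
    witness; this commitment only illustrates what stays private.
    """
    blob = json.dumps({"s": statement, "w": witness}, sort_keys=True).encode()
    return hashlib.sha256(salt + blob).hexdigest()

# Public statement: exactly what the proposition above names
statement = {
    "model_id": "A-v2.3",
    "doc_hash": "9b3c1e00" * 8,   # stand-in input-attribute hash
    "guideline": "G-2026-04",
    "decision": "deny",
}
# Private witness: weights and raw inputs never leave the issuer
witness = {"weights_digest": "d4f7" * 16, "raw_score": 0.71}

proof = prove(statement, witness, salt=b"\x00" * 16)
```

Changing any field of the statement, even the decision alone, yields a different proof, which is the integrity property the ZK layer enforces.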

3. DISCLOSE — Selective Attribute Disclosure

At audit time, attributes are selectively disclosed according to the verifier's authority. The regulator sees "guideline G-2026-04 applied" and "input attribute category"; the appellant sees only "final decision" — all enforced with issuer signatures.
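One common construction for this (a sketch under assumed names, not necessarily Lemma's scheme) is per-attribute salted commitments: every attribute is committed up front, and disclosure to a given verifier reveals only the permitted values with their salts.

```python
import hashlib
import json
import os

def commit_attributes(attrs: dict):
    """Publish one salted commitment per attribute."""
    salts = {k: os.urandom(16) for k in attrs}
    commitments = {
        k: hashlib.sha256(salts[k] + json.dumps(v).encode()).hexdigest()
        for k, v in attrs.items()
    }
    return commitments, salts

def disclose(attrs: dict, salts: dict, allowed: set) -> dict:
    """Reveal only the attributes this verifier is authorised to see."""
    return {k: (attrs[k], salts[k].hex()) for k in allowed}

def verify(commitments: dict, key: str, value, salt_hex: str) -> bool:
    """Any verifier can check a revealed value against its commitment."""
    digest = hashlib.sha256(
        bytes.fromhex(salt_hex) + json.dumps(value).encode()
    ).hexdigest()
    return digest == commitments[key]
```

The appellant who sees only the final decision can still verify it against the same commitments the regulator uses, which is what makes the disclosures consistent across verifiers.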

4. PROVENANCE — Permanent Record

docHash, CID, ZK proof, and model identifier are anchored on-chain. Even if RAG indexes, models, and operational infrastructure are entirely replaced, the cryptographic identity of the decision remains permanently verifiable.
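The anchored value can be sketched as a digest over those four fields; the schema below is illustrative, not Lemma's actual anchoring format. Any party can recompute it from the record and compare with the on-chain value.

```python
import hashlib
import json

def anchor_digest(doc_hash: str, cid: str, proof: str, model_id: str) -> str:
    """Digest over the record anchored on-chain (hypothetical schema).

    Depends only on the record fields, so replacing models, RAG
    indexes, or infrastructure later cannot change it.
    """
    record = {
        "doc_hash": doc_hash,
        "cid": cid,
        "proof": proof,
        "model_id": model_id,
    }
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
```

Verification is then a pure recomputation: recompute the digest from the stored record and check it equals the anchored value.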

┌──────────────────────────────────────────────────────────┐
│  AI Inference Engine                                      │
│  Input → Model(v2.3) → Decision (deny / approve / score) │
└───────────────────────┬──────────────────────────────────┘
                        │ Decision event

┌──────────────────────────────────────────────────────────┐
│  ENCRYPT (AES-GCM)                                       │
│  • Encrypt input attributes                               │
│  • Hash model state                                       │
│  • Seal guideline with signature                          │
│  → Only docHash + CID exposed externally                  │
└───────────────────────┬──────────────────────────────────┘
                        │ Encrypted attributes

┌──────────────────────────────────────────────────────────┐
│  PROVE (ZK Circuit)                                      │
│  Proposition: "This model ID, against this input hash,    │
│  following this guideline, produced this decision"        │
│  → Weights/input values excluded; integrity proven only   │
└───────────────────────┬──────────────────────────────────┘
                        │ ZK proof

┌──────────────────────────────────────────────────────────┐
│  DISCLOSE (Selective Disclosure)                          │
│  Regulator → guideline applied + input attribute category  │
│  Audit firm → decision path + model identifier            │
│  Appellant → final decision only                          │
└───────────────────────┬──────────────────────────────────┘
                        │ Disclosed attributes

┌──────────────────────────────────────────────────────────┐
│  PROVENANCE (On-chain)                                   │
│  docHash / CID / ZK proof / model identifier              │
│  → Immutable even if model/RAG/infra are replaced         │
└──────────────────────────────────────────────────────────┘

Proven Facts

Lemma cryptographically guarantees the following facts in AI audit log proofs:

  • Decision timestamp and decision subject (model identifier and version)
  • Input attribute docHash and CID — identity with originals
  • Hash and issuer signature of the applied guideline / rule set
  • Final decision (approve / deny / score) and decision path attribution
  • Cryptographic identity of the decision, immutable after model updates
  • Verifiable trail for auditors, regulators, and third parties — without data disclosure

Get Started

Ready to prove?

Talk to us about your use case. We respond within one business day.