EU Artificial Intelligence Act — Regulation (EU) 2024/1689
EU regulation that classifies AI systems by risk tier and imposes graduated obligations on providers and deployers. Penalties reach €35 million or 7% of global annual turnover, whichever is higher.
Definition
Four risk tiers. Unacceptable (prohibited): social scoring, indiscriminate biometric surveillance, and other practices that violate fundamental rights. High: medical devices, hiring, credit scoring, education evaluation, critical infrastructure, law enforcement — uses with a material impact on rights or safety. Limited: chatbots and deepfakes — transparency obligations (the user must be told they are interacting with AI or viewing generated content). Minimal: everything else, with no additional obligations.
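The tier taxonomy above can be sketched as a simple lookup. All names here (`RiskTier`, `USE_CASE_TIER`, `classify`) are illustrative only; real classification turns on the Annex III context of each deployment, not on a keyword.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping of the use cases named above to their tiers.
USE_CASE_TIER = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
}

def classify(use_case: str) -> RiskTier:
    # Anything the Act does not single out defaults to minimal risk.
    return USE_CASE_TIER.get(use_case, RiskTier.MINIMAL)
```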
Phased application. Prohibited practices and AI-literacy obligations apply from February 2025; general-purpose AI (GPAI) model-provider obligations from August 2025; the substantive high-risk and transparency obligations from August 2026. High-risk systems must meet six requirements: (1) lifecycle risk management, (2) training/validation data governance, (3) auditable technical documentation, (4) automated logging, (5) human-oversight mechanisms, (6) accuracy, robustness, and cybersecurity.
GPAI providers must publish technical documentation, instructions for use, copyright-compliance attestations, and a summary of their training data. GPAI models classified as posing systemic risk must additionally undergo model evaluation, adversarial testing, serious-incident reporting, and cybersecurity hardening.
Lemma Oracle compliance path
The high-risk requirements collapse to "remain auditable across the lifecycle." Lemma encodes audit logs, data governance, and human-oversight evidence as docHash + attribute commitments + zero-knowledge proofs. The data itself stays within GDPR and trade-secret boundaries; only attribute proofs cross the wire.
Concretely: (1) collection date, source, and classification of training/validation data are pinned as provenance; (2) input/model/output hashes for each inference enter the audit trail; (3) the timestamp of human approval and the approver's attributes are proven via selective disclosure. Lemma Compliance serves financial high-risk AI; Lemma Civic serves public-sector AI use.
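Steps (1)–(3) can be illustrated as a single audit-trail record. Everything here (`sha256_hex`, `audit_record`, the field names) is an assumed shape, not Lemma's actual schema; the point is that only hashes and selectively disclosed attributes leave the data boundary.

```python
import hashlib
from datetime import datetime, timezone

def sha256_hex(data: bytes) -> str:
    """Hex digest used to pin data without revealing it."""
    return hashlib.sha256(data).hexdigest()

def audit_record(model_weights: bytes, request: bytes, response: bytes,
                 approver_attrs: dict) -> dict:
    """One audit-trail entry for a single inference.

    Only hashes and the attributes selected for disclosure appear in the
    record; the raw data stays inside the GDPR / trade-secret boundary.
    """
    return {
        "input_hash": sha256_hex(request),
        "model_hash": sha256_hex(model_weights),
        "output_hash": sha256_hex(response),
        "approved_at": datetime.now(timezone.utc).isoformat(),
        # Selective disclosure: reveal only the attributes the auditor needs.
        "approver": {k: approver_attrs[k] for k in ("role", "certified")},
    }
```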
The EU AI Act is, in effect, demanding "a state in which AI trustworthiness can be verified after the fact." Lemma's verifiable AI is the concrete technical realization of that state.