EU AI Act Compliance Guide
This guide helps compliance officers configure Lucid to meet the requirements of the European Union Artificial Intelligence Act (EU AI Act) for high-risk AI systems.
Overview
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It establishes requirements for AI systems based on their risk level, with the most stringent requirements applying to "high-risk" AI systems. The regulation requires robust risk management, data governance, transparency, human oversight, accuracy, and cybersecurity.
Lucid helps organizations meet these requirements through:
- Risk management via pre-deployment safety testing and ongoing monitoring
- Robustness and cybersecurity through injection defense and security controls
- Transparency and traceability via comprehensive logging and AI provenance
- Human oversight enablement through explainable AI capabilities
- Content marking for AI-generated synthetic content
Key EU AI Act Articles and Lucid Auditors
| Article | Requirement | Recommended Auditor |
|---|---|---|
| Art. 9 | Risk management system | LLM Judge (safety benchmarks, adversarial testing) |
| Art. 10 | Data and data governance | LLM Judge (data classification), LLM Judge (bias) |
| Art. 12 | Record-keeping (logging) | AI Passport |
| Art. 13 | Transparency and information | LLM Judge (explainability) |
| Art. 14 | Human oversight | LLM Judge, AI Passport |
| Art. 15 | Accuracy, robustness, cybersecurity | LLM Judge Auditor, LLM Judge |
| Art. 50 | Synthetic content marking | LLM Judge |
High-Risk AI Classification
Before configuring Lucid, determine if your AI system is classified as high-risk under the EU AI Act. High-risk systems include AI used in:
- Biometric identification
- Critical infrastructure management
- Education and vocational training
- Employment and worker management
- Access to essential services
- Law enforcement
- Migration, asylum, and border control
- Administration of justice
If your system falls into these categories, you must comply with the full requirements of Articles 9-15.
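Before touching configuration, it can help to make the classification step explicit. The sketch below is illustrative only (the function and category identifiers are assumptions, not part of the Lucid API); it simply checks declared use cases against the Annex III categories listed above.

```python
# Illustrative pre-configuration checklist for the high-risk categories
# named in this guide. Identifiers are hypothetical, not a Lucid API.
HIGH_RISK_CATEGORIES = {
    "biometric_identification",
    "critical_infrastructure",
    "education_vocational_training",
    "employment_worker_management",
    "essential_services_access",
    "law_enforcement",
    "migration_asylum_border_control",
    "administration_of_justice",
}

def is_high_risk(use_cases: set[str]) -> bool:
    """Return True if any declared use case falls in a high-risk category."""
    return bool(use_cases & HIGH_RISK_CATEGORIES)

print(is_high_risk({"customer_support"}))              # False
print(is_high_risk({"employment_worker_management"}))  # True
```

If `is_high_risk` returns True for your system, plan for the full Articles 9-15 configuration described in the rest of this guide.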
Deploying for EU AI Act Compliance
Quick Start
Deploy an AI environment with the EU AI Act compliance profile:
```bash
lucid apply --model llama-3.1-8b --profile eu-ai-act
```
This enables the following auditors:

- LLM Judge - Safety benchmarks and explainability
- LLM Judge - Risk management and adversarial testing
- LLM Judge - Bias detection
- LLM Judge Auditor - Cybersecurity and robustness
- LLM Judge Auditor - Model integrity verification
- AI Passport - Automatic logging and traceability
- LLM Judge - Synthetic content marking
- LLM Judge - Data governance
Custom Configuration
For high-risk AI systems requiring comprehensive EU AI Act compliance:
```yaml
# eu-ai-act-environment.yaml
apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: eu-ai-act-compliant
spec:
  infrastructure:
    provider: gcp
    region: europe-west1  # EU region
  agents:
    - name: high-risk-agent
      model:
        id: meta-llama/Llama-3.1-8B
      gpu:
        type: L4
        memory: 24GB
      auditorChain:
        preRequest:
          - auditorId: lucid-llm-judge-auditor
            name: Cybersecurity (Art. 15.3)
            env:
              INJECTION_BLOCK_ON_DETECTION: "true"
              INJECTION_THRESHOLD: "0.7"
          - auditorId: lucid-llm-judge-auditor
            name: EU AI Act Guardrails (Art. 5, 9, 10)
        postResponse:
          - auditorId: lucid-llm-judge-auditor
            name: Output Safety & Transparency (Art. 13, 50)
```
Deploy with:
```bash
lucid apply -f eu-ai-act-environment.yaml
```
Article-by-Article Guidance
Article 9: Risk Management System
Requirement: Establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle, including testing to ensure appropriate and targeted risk management measures.
Lucid Implementation:
- LLM Judge - Adversarial testing
  - Pre-deployment safety benchmarks (WMDP, HarmBench)
  - Red team testing to identify vulnerabilities
- LLM Judge - Safety benchmarks
  - Ongoing model evaluation
  - Performance metrics
- LLM Judge - Bias detection
  - Bias detection to identify discrimination risks
```yaml
env:
  SAFETY_BENCHMARKS_ENABLED: "true"
  RED_TEAM_TESTING_ENABLED: "true"
  WMDP_BENCHMARK: "true"
  HARMBENCH_ENABLED: "true"
  BIAS_DETECTION_ENABLED: "true"
  RISK_ASSESSMENT_INTERVAL: "weekly"
```
Documentation for Conformity Assessment: The LLM Judge auditors generate comprehensive reports of safety testing results that can be included in your technical documentation for conformity assessments.
Article 10: Data and Data Governance
Requirement: Training, validation, and testing datasets shall be subject to appropriate data governance practices, including examination for biases.
Lucid Implementation:
- LLM Judge - Data classification and governance
  - Identifies data types in AI workflows
  - Classifies sensitive information
  - Supports data governance documentation
- LLM Judge - Bias examination
  - Detects bias in model outputs
  - Evaluates fairness across demographic groups
```yaml
env:
  DATA_CLASSIFICATION_ENABLED: "true"
  BIAS_DETECTION_ENABLED: "true"
  FAIRNESS_METRICS: "demographic_parity,equalized_odds,calibration"
```
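To make the `demographic_parity` metric in `FAIRNESS_METRICS` concrete, here is a minimal sketch of how it is commonly computed. This is not the Lucid implementation; the function names and sample data are illustrative.

```python
# Illustrative computation of the demographic-parity gap: a model satisfies
# demographic parity when positive-outcome rates are equal across groups.
def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(groups: dict[str, list[int]]) -> float:
    """Max difference in positive-outcome rate across demographic groups."""
    rates = [positive_rate(o) for o in groups.values()]
    return max(rates) - min(rates)

# Hypothetical screening outcomes (1 = positive decision) per group:
gap = demographic_parity_gap({
    "group_a": [1, 1, 0, 1],  # 75% positive rate
    "group_b": [1, 0, 0, 1],  # 50% positive rate
})
print(round(gap, 2))  # 0.25
```

A gap near zero indicates parity; the threshold at which a gap becomes a documented discrimination risk is a policy decision for your risk management system under Art. 9.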
Article 12: Record-Keeping (Automatic Logging)
Requirement: High-risk AI systems shall technically allow for automatic recording of events (logs) over the lifetime of the system to ensure traceability.
Lucid Implementation:
- AI Passport - Automatic event logging
  - Records all AI system events automatically
  - Captures inputs, outputs, and intermediate steps
  - Logs are cryptographically signed in a TEE for integrity
  - Supports long-term retention (Art. 19 requires at least six months; this guide defaults to 10 years, matching the technical-documentation retention period)
```yaml
env:
  LOG_RETENTION_DAYS: "3650"  # 10 years (conservative default, exceeds the Art. 19 minimum)
  LOG_ALL_EVENTS: "true"
  LOG_MODEL_INPUTS: "true"
  LOG_MODEL_OUTPUTS: "true"
  TRACEABILITY_ENABLED: "true"
  LOG_TIMESTAMPS: "true"
  LOG_VERSION_INFO: "true"
```
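The integrity property behind signed logs can be illustrated with a simple hash chain, where each record commits to the previous record's digest so any retroactive edit is detectable. This is a toy approximation, not Lucid's TEE-based signing scheme.

```python
# Illustrative tamper-evident event log: each record stores the SHA-256
# digest of the previous record, so modifying any earlier record breaks
# verification of the whole chain.
import hashlib
import json
import time

def append_event(chain: list[dict], event: dict) -> dict:
    prev = chain[-1]["digest"] if chain else "0" * 64
    record = {"ts": time.time(), "event": event, "prev": prev}
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain: list[dict]) -> bool:
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "digest"}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["prev"] != prev or rec["digest"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = rec["digest"]
    return True

chain: list[dict] = []
append_event(chain, {"type": "model_input", "text": "example prompt"})
append_event(chain, {"type": "model_output", "text": "example reply"})
print(verify_chain(chain))  # True
```

In production, a signature from a key held inside the TEE replaces the bare hash, which is what lets a notified body trust logs exported from outside the enclave.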
Accessing Logs for Authorities:
```bash
# Export logs for market surveillance authorities
lucid passport export \
  --from 2025-01-01 \
  --to 2025-12-31 \
  --format json \
  --detailed > art12_logs.json

# Generate Article 12 compliance report
lucid passport export --compliance-report eu-ai-act-art12 --format pdf
```
Article 13: Transparency and Provision of Information
Requirement: High-risk AI systems shall be designed to operate with sufficient transparency to enable users to interpret outputs appropriately.
Lucid Implementation:
- LLM Judge - Explainability support
  - Documents model capabilities and limitations
  - Provides transparency into model behavior
  - Supports user understanding of AI outputs
- AI Passport - Transparent processing record
  - Documents which controls were applied
  - Shows the processing pipeline clearly
```yaml
env:
  EXPLAINABILITY_ENABLED: "true"
  DOCUMENT_CAPABILITIES: "true"
  DOCUMENT_LIMITATIONS: "true"
  USER_TRANSPARENCY_MODE: "true"
```
Article 14: Human Oversight
Requirement: High-risk AI systems shall be designed to allow effective human oversight, including the ability to correctly interpret outputs, understand capabilities and limitations, and intervene.
Lucid Implementation:
- AI Passport - Oversight dashboard
  - Provides real-time visibility into AI operations
  - Enables monitoring of all AI decisions
  - Supports human intervention capabilities
- LLM Judge - Interpretability support
  - Helps humans understand AI outputs
  - Documents model behavior patterns
```yaml
env:
  HUMAN_OVERSIGHT_MODE: "true"
  INTERVENTION_ENABLED: "true"
  ALERT_ON_HIGH_RISK_DECISIONS: "true"
  DASHBOARD_ENABLED: "true"
```
Observer Dashboard: Access the Lucid Observer dashboard for real-time human oversight at https://observer.lucid.sh.
Article 15: Accuracy, Robustness, and Cybersecurity
Requirement: High-risk AI systems shall achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle, and be resilient against attempts to exploit vulnerabilities.
Lucid Implementation:
- LLM Judge Auditor - Cybersecurity resilience (Art. 15.3)
  - Defends against prompt injection attacks
  - Blocks jailbreak attempts
  - Protects against adversarial manipulation
- LLM Judge Auditor - Model integrity (Art. 15.2)
  - Verifies model integrity
- LLM Judge - Accuracy and robustness (Art. 15.1-2)
  - Monitors model accuracy metrics
  - Runs adversarial robustness tests
- All auditors in TEE - Hardware security
  - All processing in hardware-secured enclaves
  - Cryptographic attestation of security
```yaml
env:
  # Cybersecurity (Art. 15.3)
  INJECTION_BLOCK_ON_DETECTION: "true"
  INJECTION_THRESHOLD: "0.7"
  JAILBREAK_DETECTION_ENABLED: "true"
  # Accuracy (Art. 15.1)
  ACCURACY_MONITORING: "true"
  PERFORMANCE_METRICS: "true"
  # Robustness (Art. 15.2)
  ADVERSARIAL_TESTING_ENABLED: "true"
  MODEL_INTEGRITY_CHECK: "true"
```
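The interaction of `INJECTION_THRESHOLD` and `INJECTION_BLOCK_ON_DETECTION` can be sketched as a simple gate. The scoring itself is done by Lucid's model-based detector; the function below is a hypothetical stand-in showing only the decision logic.

```python
# Illustrative gating logic for a pre-request injection auditor:
# block the request when the detector's score meets the threshold.
def should_block(injection_score: float,
                 threshold: float = 0.7,
                 block_on_detection: bool = True) -> bool:
    """Return True when the request should be rejected before inference."""
    return block_on_detection and injection_score >= threshold

print(should_block(0.92))  # True  -> request rejected before the model runs
print(should_block(0.35))  # False -> request passes to the model
```

Lowering the threshold blocks more aggressively at the cost of false positives; for high-risk systems the blocked-request records themselves become Art. 15 cybersecurity evidence.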
Article 50: Synthetic Content Marking
Requirement: Providers of AI systems generating synthetic content (audio, image, video, text) shall ensure outputs are marked in a machine-readable format and detectable as artificially generated.
Lucid Implementation:
- LLM Judge - AI content provenance
- Embeds machine-readable watermarks in AI outputs
- Enables detection of AI-generated content
- Provides provenance tracking with TEE attestation
```yaml
env:
  WATERMARK_ENABLED: "true"
  WATERMARK_MACHINE_READABLE: "true"
  WATERMARK_DETECTABLE: "true"
  PROVENANCE_TRACKING: "true"
  C2PA_COMPATIBLE: "true"  # Coalition for Content Provenance and Authenticity
```
Verifying Watermarks:
```bash
# Check if content is watermarked
lucid watermark verify --content "AI generated text here"

# Export provenance certificate
lucid passport show <passport-id> --provenance
```
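What "machine-readable and detectable" means in practice can be shown with a toy marker. This is not Lucid's actual watermark (text watermarks are typically statistical rather than an explicit header, and the `AIC1:` format here is invented for illustration); it only demonstrates the mark-then-detect round trip Art. 50 requires.

```python
# Toy machine-readable content marker: embed provenance metadata in a
# header line, then detect and decode it. Format is hypothetical.
import base64
import json
from typing import Optional

MARKER_PREFIX = "AIC1:"  # invented marker format, not a real standard

def mark(content: str, model_id: str) -> str:
    meta = base64.b64encode(
        json.dumps({"ai_generated": True, "model": model_id}).encode()
    ).decode()
    return f"{MARKER_PREFIX}{meta}\n{content}"

def detect(marked: str) -> Optional[dict]:
    header, _, _ = marked.partition("\n")
    if not header.startswith(MARKER_PREFIX):
        return None
    return json.loads(base64.b64decode(header[len(MARKER_PREFIX):]))

marked = mark("Generated summary...", "meta-llama/Llama-3.1-8B")
print(detect(marked))  # {'ai_generated': True, 'model': 'meta-llama/Llama-3.1-8B'}
print(detect("Plain human-written text"))  # None
```

Standards such as C2PA play the role of `MARKER_PREFIX` here: a common envelope that third-party tools can detect without coordinating with the generator.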
Evidence for Conformity Assessment
Required Technical Documentation
The EU AI Act requires extensive technical documentation. Lucid provides:
- Risk Management Documentation (Art. 9)
  - Safety benchmark results
  - Red team testing reports
  - Bias evaluation results
- Data Governance Records (Art. 10)
  - Data classification logs
  - Bias examination records
- Automatic Logging (Art. 12)
  - Complete event logs
  - Traceability records
  - 10-year retention capability
- Transparency Documentation (Art. 13)
  - Model capability documentation
  - Limitation disclosures
  - Processing transparency records
- Cybersecurity Evidence (Art. 15)
  - Security control attestations
  - Blocked attack records
  - Hardware attestation certificates
Generating Conformity Assessment Evidence
```bash
# Generate comprehensive EU AI Act documentation package
lucid passport export --compliance-report eu-ai-act --format pdf > eu_ai_act_evidence.pdf

# Export Article 12 automatic logs
lucid passport export --art12-logs --from 2025-01-01 > art12_logs.json

# Generate risk management report (Art. 9)
lucid eval report --risk-management > risk_management.pdf

# Export watermark provenance records (Art. 50)
lucid passport export --provenance --from 2025-01-01 > provenance_records.json
```
For Notified Bodies
When undergoing conformity assessment by a notified body, provide:
- AI Passports - Cryptographic proof of control enforcement
- Observability logs - Article 12 compliant event records
- Eval reports - Safety benchmark and risk assessment results
- Configuration documentation - Technical implementation details
- TEE attestations - Hardware-backed security evidence
Post-Market Monitoring
The EU AI Act requires ongoing monitoring after deployment. Lucid supports this through:
- Continuous monitoring via AI Passport
- Ongoing safety evaluation via LLM Judge
- Incident detection and reporting capabilities
```bash
# Set up continuous monitoring
lucid monitor --agent high-risk-agent --alerts

# Generate post-market monitoring report
lucid passport export --post-market-report --period monthly
```
AI Office Reporting
For serious incidents or market surveillance authority requests, export comprehensive evidence:
```bash
# Generate incident report
lucid incident report --incident-id INC-001 --format pdf

# Export for market surveillance authority
lucid passport export \
  --authority-request \
  --request-id AUTH-2024-001 \
  --format json
```
General-Purpose AI (GPAI) Considerations
If you are deploying foundation models or general-purpose AI with systemic risk, additional requirements apply:
```yaml
env:
  # GPAI with systemic risk (Art. 55)
  GPAI_SYSTEMIC_RISK_MODE: "true"
  MODEL_EVALUATION_COMPREHENSIVE: "true"
  RED_TEAM_ADVERSARIAL: "true"
  INCIDENT_REPORTING_ENABLED: "true"
```
Best Practices for EU AI Act Compliance
- Classify your AI system - Determine if it's high-risk before configuring
- Enable comprehensive logging - Article 12 requires automatic event recording
- Deploy in EU regions - Ensure data residency compliance
- Configure watermarking - Required for AI-generated content
- Retain logs for 10 years - EU AI Act retention requirement
- Conduct regular risk assessments - Use LLM Judge safety benchmarks
- Prepare conformity documentation - Maintain technical documentation package
- Enable human oversight - Ensure intervention capabilities exist
Timeline Considerations
The EU AI Act has phased implementation:

- February 2025: Prohibited AI practices take effect
- August 2025: GPAI requirements take effect
- August 2026: High-risk AI requirements take effect
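A quick way to reason about the phased timeline is to check which obligations are already in effect on a given date. The sketch below uses the month-level dates stated in this guide (the Act's exact effective days within those months are not reproduced here).

```python
# Determine which phased EU AI Act obligations from this guide apply
# on a given date. Dates are month-level approximations from the guide.
from datetime import date

PHASES = [
    (date(2025, 2, 1), "Prohibited AI practices"),
    (date(2025, 8, 1), "GPAI requirements"),
    (date(2026, 8, 1), "High-risk AI requirements"),
]

def obligations_in_effect(on: date) -> list[str]:
    return [name for start, name in PHASES if on >= start]

print(obligations_in_effect(date(2025, 9, 1)))
# ['Prohibited AI practices', 'GPAI requirements']
```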
Configure Lucid now to ensure compliance by the relevant deadlines.
Related Resources
- Auditor Catalog - Detailed EU AI Act control mappings
- Policy as Code - Custom compliance rules
- GDPR Compliance Guide - Complementary EU data protection requirements
- SOC 2 Compliance Guide - Complementary service organization controls