
Architecture Overview

Lucid uses a multi-party chain of custody modeled after the IETF RATS (Remote ATtestation procedureS) architecture (RFC 9334), providing a framework for verifiable AI execution built on hardware-based roots of trust.

Architecture Components

flowchart TB
    subgraph Customer["Customer Cluster"]
        direction TB

        Operator["Lucid Operator"]

        subgraph TEE["Trusted Execution Environment (TEE)"]
            direction TB
            AI["AI Workload<br/>(LLM / Model)"]

            subgraph Auditors["ClaimsAuditors (Observe)"]
                direction LR
                A1["Guardrails"]
                A2["PII"]
                A3["Sovereignty"]
                A4["Eval"]
            end

            subgraph VER["Verifier (PEP)"]
                direction TB
                G1["Collect Claims"]
                G2["Cedar Policy<br/>Evaluation"]
                G3["Evidence Bundle"]
                G1 --> G2 --> G3
            end

            AI <--> Auditors
            Auditors -->|"claims"| VER
        end

        subgraph Attestation["Attestation Layer"]
            CoCo["CoCo AA/AS<br/>(or Mock)"]
        end

        VER --> |"Signed<br/>Evidence"| CoCo
        Operator -.-> |"Injects Sidecars"| TEE
    end

    subgraph SaaS["Lucid SaaS Platform"]
        direction TB
        Verifier["Verifier<br/>(FastAPI)"]
        Passport["AI Passport"]
        Observer["Observer UI<br/>(Trust Dashboard)"]

        Verifier --> |"Issues"| Passport
        Passport --> Observer
    end

    CoCo --> |"Evidence"| Verifier

The Verifier as Policy Enforcement Point (PEP)

The Verifier is the central enforcement component. It does not contain safety logic itself -- it orchestrates the flow inline:

  1. Collects claims from all ClaimsAuditors (each auditor observes one aspect of the traffic)
  2. Evaluates one Cedar policy against the combined claims context
  3. Produces one Evidence bundle containing all claims, the Cedar decision, and a TEE signature
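The three steps above can be sketched in Python. This is a minimal illustration, not the actual Verifier: the function and field names are hypothetical, and the auditors, Cedar engine, and attester are treated as injected dependencies.

```python
def enforce(request, auditors, cedar_engine, attester) -> dict:
    """Inline PEP flow: collect claims, evaluate policy, bundle evidence."""
    # 1. Collect claims from every ClaimsAuditor.
    claims = {}
    for auditor in auditors:
        claims.update(auditor.observe(request))
    # 2. Evaluate one Cedar policy against the combined claims context.
    decision = cedar_engine.evaluate(claims)
    # 3. Produce one Evidence bundle: all claims, the decision, a TEE signature.
    evidence = {"claims": claims, "cedar_decision": decision}
    evidence["tee_signature"] = attester.sign(evidence)
    return evidence
```

In Mock Mode the attester stub would return a software signature; in production the CoCo AA supplies a hardware-backed one, with no change to this control flow.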

sequenceDiagram
    participant User
    participant Verifier as Verifier (PEP)
    participant A1 as LLM Judge Auditor
    participant A2 as PII Auditor
    participant A3 as Sovereignty Auditor
    participant Cedar as Cedar Engine
    participant Model as AI Model
    participant Nanobot as Nanobot Attester

    User->>Verifier: Request
    par Claims Collection
        Verifier->>A1: /claims
        A1-->>Verifier: [injection_risk=0.05, toxic_content=0.1]
        Verifier->>A2: /claims
        A2-->>Verifier: [pii_types=[], pii_risk_score=0.0]
        Verifier->>A3: /claims
        A3-->>Verifier: [detected_regions=[US], location_confidence=0.95]
    end
    Verifier->>Cedar: Evaluate policy with ClaimsContext
    Cedar-->>Verifier: ALLOW
    Verifier->>Model: Forward request
    Model-->>Verifier: Response
    par Response Claims
        Verifier->>A1: /claims (response phase)
        A1-->>Verifier: [toxic_content=0.05]
    end
    Verifier->>Cedar: Evaluate policy with response claims
    Cedar-->>Verifier: ALLOW
    Verifier->>Nanobot: Produce signed Evidence bundle (ar_references previous AR)
    Nanobot-->>Verifier: AttestationResult (ar_hash, chain_depth)
    Verifier-->>User: Response + Evidence (chained via ar_references)

Serverless Architecture

In serverless mode, Lucid manages shared TEE resource pools. Customers get instant deployment without provisioning infrastructure, while maintaining the same hardware-backed security guarantees.

See the Deployment Modes guide for serverless configuration and the TEE concepts page for attestation details.


Attestation Chain (Tamper-Evident Audit Trail)

The Attestation Chain links AttestationResults across phases via ar_references, producing a causally ordered, tamper-evident audit trail per RFC 9334 §3.2. The Nanobot acts as an Attester, producing signed Evidence bundles for each request/response cycle that reference prior AttestationResults by ar_hash.

How It Works

After Cedar evaluation, the Nanobot produces a signed Evidence bundle containing Claim objects, the Cedar decision, and metadata. Each AttestationResult carries an ar_hash (content hash of the result), an ar_references list (hashes of prior ARs it depends on), and a chain_depth counter. This creates a causal chain where tampering with any entry invalidates all downstream references.

Client --> Verifier (inline enforcement)
               |
               |-- Collects claims from auditors, runs Cedar policy
               |                     |
               |               sets metadata:
               |                 x-lucid-claims-hash
               |                 x-lucid-cedar-decision
               |                 x-lucid-cedar-policy-hash
               |<--------------------+
               |
               |-- [routes to Nanobot, gets response]
               |
               |-- Nanobot Attester produces signed Evidence bundle
               |     - bundles Claim objects + Cedar decision
               |     - sets ar_references to previous AR hashes
               |     - computes ar_hash, increments chain_depth
               |
               +-- returns response + Evidence (chained via ar_references)

Each Evidence bundle covers the full audit pipeline: request claims, response claims, Cedar decision, policy hash, and auditor IDs. Because an AR can reference multiple predecessors, ar_references links the results into a DAG rather than a strictly linear chain; as above, tampering with any entry breaks every downstream reference.
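A minimal sketch of the linking scheme, using the ar_hash, ar_references, and chain_depth fields described above. The SHA-256-over-canonical-JSON construction is an assumption for illustration, not the documented wire format.

```python
import hashlib
import json

def ar_hash(ar: dict) -> str:
    """Content hash over everything except the hash field itself."""
    body = {k: v for k, v in ar.items() if k != "ar_hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_ar(claims: dict, decision: str, previous: list[dict]) -> dict:
    """Build an AttestationResult that references its causal predecessors."""
    ar = {
        "claims": claims,
        "cedar_decision": decision,
        # Link to every prior AR this result depends on.
        "ar_references": [p["ar_hash"] for p in previous],
        # Root of a chain gets depth 0; otherwise one past the deepest parent.
        "chain_depth": max((p["chain_depth"] for p in previous), default=-1) + 1,
    }
    ar["ar_hash"] = ar_hash(ar)
    return ar
```

Because each ar_hash covers the ar_references list, altering any earlier AR changes its hash and invalidates every reference downstream of it.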

WASM Security Model

The attestation chain core is implemented as a WASM module (lucid-wasm-receipt) with three concentric isolation layers:

  1. TEE (AMD SEV-SNP): Protects from host OS, hypervisor, physical access
  2. Container (CoCo/kata-cc): Protects from other pods, K8s control plane
  3. WASM sandbox: Protects from the Nanobot itself -- the Ed25519 signing key never leaves the sandbox. No filesystem, no network, no syscalls.

Even if the Nanobot service code is fully compromised, the attacker cannot extract the signing key, modify existing Evidence bundles, break ar_references linkage, or reset the chain depth counter.

Chain Verification

The Verifier exposes GET /v1/attestation-chain/{ar_hash} to validate chain integrity: ar_references linkage, signature validity, chain_depth monotonicity, and chain splices (new keys from pod restarts).
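The linkage and monotonicity checks can be sketched as a walk from any AR back toward the root. Storage shape and hash construction are assumptions here, and signature verification is omitted for brevity.

```python
import hashlib
import json

def content_hash(ar: dict) -> str:
    """Recompute the AR's content hash, excluding the stored hash field."""
    body = {k: v for k, v in ar.items() if k != "ar_hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def verify_chain(store: dict[str, dict], tip: str) -> bool:
    """Follow ar_references from the tip AR, checking each link.

    Checks linkage (every referenced AR exists and its hash matches) and
    chain_depth monotonicity (depth strictly decreases toward the root).
    """
    seen, frontier = set(), [tip]
    while frontier:
        h = frontier.pop()
        if h in seen:
            continue
        seen.add(h)
        ar = store.get(h)
        if ar is None or content_hash(ar) != h:
            return False  # missing or tampered entry
        for ref in ar["ar_references"]:
            if store.get(ref, {}).get("chain_depth", -1) >= ar["chain_depth"]:
                return False  # depth must strictly decrease toward the root
            frontier.append(ref)
    return True
```

A real verification pass would additionally check each AR's Ed25519 signature and, at a splice point, the new key's attestation binding.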

Chain Splices

When a pod restarts (new TEE, new key), the new chain's first AR references the last ar_hash from the old chain via ar_references, signed by the new key with a new attestation binding. The verification endpoint follows splices and validates both attestation reports.


The Verification Flow

The lifecycle of a secure AI request follows these stages:

1. Workload Provisioning (The Attester)

The Lucid Operator identifies a verifiable workload. It provisions a TEE environment and injects the ClaimsAuditors as sidecars.

2. Policy Definition (Cedar)

Administrators define enforcement rules using Cedar policies. Cedar policies reference claim names from the auditor vocabulary and define allow/deny rules. Policies are scoped: org -> workspace -> agent, with deny-overrides.
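For illustration, a hypothetical pair of policies over claim names that appear in the sequence diagram above (pii_risk_score, detected_regions); the threshold, value types, and unscoped principal/action/resource are assumptions:

```cedar
// Deny-overrides: a matching forbid always beats a matching permit,
// regardless of which scope (org, workspace, agent) it comes from.

// Block any request whose auditors scored PII risk too high.
forbid (principal, action, resource)
when { context.pii_risk_score.greaterThan(decimal("0.5")) };

// Permit traffic only when the sovereignty auditor places it in an
// approved region.
permit (principal, action, resource)
when { context.detected_regions.contains("US") };
```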

3. Claims Collection

As a request flows through the Verifier, each ClaimsAuditor produces claims (observations) about the traffic. Claims are typed, named, and timestamped.
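A claim might be modeled as follows; the name/type/timestamp fields come from the text above, while the remaining field names are illustrative assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class Claim:
    """One typed, named, timestamped observation about the traffic."""
    name: str         # e.g. "pii_risk_score", from the auditor vocabulary
    value: Any        # the observed value
    value_type: str   # e.g. "float", "string_list"
    auditor_id: str   # which ClaimsAuditor produced this observation
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```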

4. Cedar Policy Evaluation

The Verifier's Cedar engine evaluates the unified policy against the collected claims inline. One policy, one decision per request.

5. Evidence Creation

The Verifier bundles all claims, the Cedar decision, and metadata into a signed Evidence container. The TEE Attestation Agent adds a hardware signature.

6. Verification and Passport

The Lucid Verifier appraises the Evidence against hardware quotes. Verified results are stored as an AI Passport and surfaced in the Observer dashboard.

Operational Modes: Mock vs. Production

Lucid supports two modes to balance development speed with production security.

| Service        | Local (Mock Mode) | Production (CoCo/TEE)         |
| -------------- | ----------------- | ----------------------------- |
| Hardware       | Standard CPU      | Intel SGX, AMD SEV, AWS Nitro |
| Signing        | Mock AA (ECDSA)   | CoCo AA (Hardware TEE Quote)  |
| Verification   | Mock AS           | CoCo AS (Hardware Trust Root) |
| Cedar Engine   | Same Cedar engine | Same Cedar engine             |
| Security Logic | Simulation        | Hardware-Enforced             |

Both modes use identical API contracts, ensuring that code developed locally functions unchanged in production TEE environments.

Code Portability

You can develop 100% of your ClaimsAuditors and Cedar policies locally using Mock Mode. The same code and policies will function identically when deployed to a hardware-secured cluster.

System Sequence Diagram

sequenceDiagram
    participant User
    participant CLI as Lucid CLI
    participant K8s as K8s (Operator)
    participant TEE as TEE (Enclave)
    participant Verifier as Verifier (PEP + Appraisal)
    participant Cedar as Cedar Engine

    User->>CLI: lucid apply
    CLI->>K8s: Provision Attester (TEE)
    K8s->>TEE: Inject Auditors + Cedar Policy
    User->>Verifier: Request
    Verifier->>TEE: Collect Claims from Auditors
    Verifier->>Cedar: Evaluate Cedar Policy
    Cedar-->>Verifier: Decision (allow/deny)
    Verifier->>Verifier: Produce Evidence
    Verifier-->>User: Result + AI Passport

Hardware Endorser Devices

For high-assurance deployments, Lucid supports hardware endorser devices that provide additional cryptographic attestation beyond standard TEE quotes:

| Device  | Role                   | Signal Provided                                |
| ------- | ---------------------- | ---------------------------------------------- |
| DC-SCM  | Hardware root of trust | Power telemetry, secure boot, tamper detection |
| FlexNIC | Network monitoring     | Collective detection, flow patterns            |
| GPU CC  | Confidential computing | Memory encryption, GPU attestation             |
| TPM 2.0 | Platform integrity     | PCR measurements, measured boot                |

Multi-Signal Verification

When endorser devices are present, the system correlates four independent signals for workload classification:

  1. Kernel structure (Inspector TEE): Training = backward pass + optimizer
  2. Network patterns (FlexNIC): Training = all-reduce collectives
  3. Power profile (DC-SCM): Training = sustained high utilization
  4. Memory behavior (Inspector TEE): Training = stores activations

All signals must be mutually consistent for high-confidence classification. This creates defense-in-depth: an attacker cannot forge one signal without creating detectable inconsistencies in the others.
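The consistency requirement can be sketched as a simple cross-check over the four signals listed above; the boolean signal encoding and function name are assumptions:

```python
def classify_workload(signals: dict[str, bool]) -> str:
    """Classify as 'training' only when all four independent signals agree.

    Each value answers: does this signal look like training?
      kernel  - backward pass + optimizer kernels observed (Inspector TEE)
      network - all-reduce collectives observed (FlexNIC)
      power   - sustained high utilization (DC-SCM)
      memory  - activations stored for reuse (Inspector TEE)
    """
    votes = [signals[k] for k in ("kernel", "network", "power", "memory")]
    if all(votes):
        return "training"
    if not any(votes):
        return "inference"
    # Mixed votes: forging one signal produces a detectable inconsistency.
    return "inconsistent"
```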

Deployment Type

All deployments use the model deployment type, which provisions a model with auditors and inline Cedar enforcement. Each deployment carries its own TEE attestation, ClaimsAuditors, and Cedar policy evaluated by the Verifier. The deployment type is set via the deployment_type field in the LucidEnvironment spec.
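For illustration, a LucidEnvironment spec might look like the following sketch. Only the deployment_type field comes from the text above; the API group, version, and every other field name here are hypothetical:

```yaml
apiVersion: lucid.ai/v1          # hypothetical API group/version
kind: LucidEnvironment
metadata:
  name: support-model
spec:
  deployment_type: model         # the only deployment type
  model: llama-3-8b              # hypothetical
  auditors:                      # hypothetical
    - pii
    - guardrails
```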

flowchart LR
    subgraph ModelDeployment["model deployment"]
        HM["Model"] --> HAud["Auditors + Verifier"]
    end

Workflows: Composing Deployments

Workflows are a composition layer that wires model deployments together into a single logical application. A workflow is a JSON graph where nodes reference deployments and edges define intent-based routing conditions.

The key design principle: the LLM is the router. Workflows compile down to an orchestrator system prompt and a set of MCP tool registrations. There is no runtime engine, no LangGraph, no Temporal -- the orchestrator LLM reads the system prompt and uses MCP tools to route requests to the appropriate downstream agents.
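As an illustration of the graph shape, the sketch below defines a tiny workflow and checks that every edge connects declared nodes. All field names are hypothetical; see the Workflows concept page for the real schema.

```python
import json

# A hypothetical two-node workflow graph: nodes reference deployments,
# edges carry intent-based routing conditions for the orchestrator LLM.
workflow = json.loads("""
{
  "nodes": [
    {"id": "triage",  "deployment": "triage-model"},
    {"id": "billing", "deployment": "billing-model"}
  ],
  "edges": [
    {"from": "triage", "to": "billing", "intent": "billing question"}
  ]
}
""")

def validate(wf: dict) -> None:
    """Every edge must connect two declared nodes."""
    ids = {n["id"] for n in wf["nodes"]}
    for e in wf["edges"]:
        assert e["from"] in ids and e["to"] in ids, f"unknown node in edge {e}"
```

At compile time a graph like this would become an orchestrator system prompt plus MCP tool registrations, with no runtime engine in between.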

See the Workflows concept page for full documentation.

MCP: Inter-Service Communication

Every Lucid service (auditors, verifier) exposes MCP (Model Context Protocol) tools via a /mcp endpoint. Services publish tool metadata at /.well-known/mcp for discovery.

MCP serves two roles in the architecture:

  1. Workflow routing -- The orchestrator LLM calls MCP tools to dispatch requests to downstream deployments
  2. Service integration -- External systems access Lucid capabilities (PII scanning, guardrails checks, deployment management) through a unified tool interface

The Verifier federates tool access across all services, providing a single entry point with OAuth 2.1 authentication for external clients and mTLS for internal service-to-service calls.

See the MCP concept page for details.

Verifiable Agent Pods (VAP)

VAP extends the architecture to treat AI agents as full colleagues rather than APIs. The core principle: give an agent an email address and register an account -- the same way you would onboard a new employee. Agents are defined declaratively in LucidWorkspace (multi-agent) or LucidAgent (standalone) YAML with sub-specs for identity, abilities, environment, actions, and auditor settings. The agent inherits sharing models from existing tools (Google Docs, Slack, GitHub) for free, while a governance layer underneath enforces machine-level guardrails.

The Agent Identity Stack

VAP adds a five-layer identity stack that sits alongside the existing TEE, Cedar, and Evidence systems:

block-beta
    columns 1
    block:UX["UX Layer"]
        UX1["'Share with agent' / @mention / Approve"]
    end
    block:GOV["Governance Layer"]
        GOV1["Owner assignment (Share dialog)"]
        GOV2["Auditor settings (org → workspace → agent)"]
        GOV3["Approval flows (first Owner to respond)"]
    end
    block:CRED["Credential Layer"]
        CRED1["OAuth tokens, dynamic secrets, browser fill"]
        CRED2["SPIFFE-SVID auth, RFC 8693 token exchange"]
        CRED3["Auto-rotation, JIT provisioning"]
    end
    block:AUTHZ["Authorization Layer"]
        AUTHZ1["Cedar policies (unified with auditor Cedar)"]
        AUTHZ2["Verifier (inline Cedar enforcement)"]
        AUTHZ3["OBO delegation grants, Access Manifest (6-domain, deny-overrides)"]
    end
    block:ID["Identity Layer"]
        ID1["Agent email + handle + passport"]
        ID2["SPIFFE workload identity (spiffe://lucid.ai/agent/{id})"]
        ID3["Linked to owners via AgentAccess table"]
    end

    style UX fill:#e1f5fe
    style GOV fill:#fff3e0
    style CRED fill:#f3e5f5
    style AUTHZ fill:#e8f5e9
    style ID fill:#fce4ec

How VAP Connects to Existing Systems

| Existing System     | VAP Role                                                                |
| ------------------- | ----------------------------------------------------------------------- |
| Cedar Pipeline      | Agent identity nodes become Cedar principals                             |
| Evidence / Passport | Agent profile IS the passport -- rendered from attested evidence         |
| ClaimsAuditors      | Each auditor produces claims; Verifier evaluates Cedar policy inline     |
| Operator            | Agents get same CoCo AA + mTLS sidecar injection as auditors             |
| SPIFFE/SPIRE        | Workload identity -- each agent gets spiffe://lucid.ai/agent/{agent_id}  |

See the VAP guide for the full architecture walkthrough, the Glossary for VAP-specific terminology, and the Deployment Guide for VAP container deployment flows.

What's Next?

  • Deep dive into ClaimsAuditors to understand observation vs enforcement.
  • Check out the First Auditor Guide to build your first ClaimsAuditor.
  • Write your First Cedar Policy to define enforcement rules.
  • Learn about Workflows for composing deployments.
  • Explore MCP for inter-service communication.
  • Read the VAP guide for agent identity, app catalog, and console features.
  • Understand the Attestation Chain for tamper-evident audit trails.
  • See the Glossary for definitions of security terms.