Your First Cedar Policy
This tutorial walks you through writing your first Cedar policy for a Lucid agent. By the end, you will have a working policy that blocks prompt injection and toxic content based on claims from ClaimsAuditors.
Prerequisites
This tutorial assumes you have a running agent with at least one ClaimsAuditor (e.g., the LLM Judge Auditor). See Your First Auditor if you need to set one up.
What You Will Build
A Cedar policy that:
1. Blocks prompt injection attempts (when injection_risk > 0.7)
2. Blocks toxic content above a threshold (when toxic_content > 0.7)
3. Warns on moderate toxicity (when toxic_content is between 0.4 and 0.7)
4. Allows everything else
Understanding the Flow
Before writing policy, understand the data flow:
User Request
-> Gateway receives request
-> Gateway calls ClaimsAuditors in parallel
-> LLM Judge Auditor returns:
injection_risk = 0.05
toxic_content = 0.12
-> Gateway builds ClaimsContext from all claims
-> Gateway evaluates YOUR Cedar policy against the context
-> Cedar returns allow/deny
-> Gateway forwards or blocks the request
Your policy operates on the ClaimsContext -- the collection of all claims from all auditors.
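The flow above can be sketched in plain Python. This is an illustrative model only, not Lucid's actual gateway code; the function names and dict shapes are assumptions made for the sketch. It shows the two steps that matter for policy authors: claims from every auditor are merged into one context, and forbid-style checks are applied before the default allow.

```python
# Illustrative sketch only -- not the actual Lucid gateway implementation.

def build_claims_context(auditor_results, phase="request"):
    """Merge the claim dicts returned by each auditor into one context."""
    claims = {}
    for result in auditor_results:
        claims.update(result)
    return {"phase": phase, "claims": claims}

def evaluate(context):
    """Mirror the tutorial policy: forbid checks first, then default allow."""
    claims = context["claims"]
    if claims.get("injection_risk", 0.0) > 0.7:
        return "deny"
    if claims.get("toxic_content", 0.0) > 0.7:
        return "deny"
    if 0.4 < claims.get("toxic_content", 0.0) <= 0.7:
        return "warn"
    return "allow"

# The values from the flow diagram above:
ctx = build_claims_context([{"injection_risk": 0.05, "toxic_content": 0.12}])
print(evaluate(ctx))  # -> allow
```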
Step 1: Create the Policy File
Create a file named policy.cedar:
// ===========================================
// My First Cedar Policy
// ===========================================
// Rule 1: Block prompt injection attempts
forbid(principal, action == Action::"invoke", resource)
when { context.claims.injection_risk > 0.7 };
// Rule 2: Block toxic content
forbid(principal, action == Action::"invoke", resource)
when { context.claims.toxic_content > 0.7 };
// Rule 3: Warn on moderate toxicity (flag but don't block)
@decision("warn")
forbid(principal, action == Action::"invoke", resource)
when {
context.claims.toxic_content > 0.4 &&
context.claims.toxic_content <= 0.7
};
// Rule 4: Allow everything else
permit(principal, action == Action::"invoke", resource);
Let's break this down:
- forbid(principal, action == Action::"invoke", resource): This rule applies when any principal tries to invoke any agent.
- when { context.claims.injection_risk > 0.7 }: The condition checks the claims produced by the LLM Judge Auditor.
- @decision("warn"): This annotation changes the rule's effect from "block" to "warn" -- the request proceeds but is flagged.
- permit(...): The default allow rule lets everything through that was not explicitly forbidden. Note that in Cedar, rule order does not matter: any matching forbid overrides a permit, wherever it appears in the file.
Step 2: Validate the Policy
Use the CLI's policy validation command to check for syntax errors. A successful run reports:
[+] Entity types match Lucid schema
[+] Claim references found in auditor vocabulary:
- injection_risk (LLM Judge Auditor)
- toxic_content (LLM Judge Auditor)
[*] Policy is valid.
Step 3: Test with Sample Claims
Create a test file test-clean.json for a clean request:
{
"principal": "User::\"user-123\"",
"action": "Action::\"invoke\"",
"resource": "Agent::\"my-agent\"",
"context": {
"phase": "request",
"claims": {
"injection_risk": 0.05,
"toxic_content": 0.1
}
}
}
Create test-injection.json for an injection attempt:
{
"principal": "User::\"user-123\"",
"action": "Action::\"invoke\"",
"resource": "Agent::\"my-agent\"",
"context": {
"phase": "request",
"claims": {
"injection_risk": 0.95,
"toxic_content": 0.2
}
}
}
Test both scenarios:
lucid policy test policy.cedar --claims-file test-clean.json --expect allow
[+] Decision: ALLOW (as expected)
    Matching rules: permit (default allow)
lucid policy test policy.cedar --claims-file test-injection.json --expect deny
[+] Decision: DENY (as expected)
    Matching rules: forbid (injection_risk > 0.7)
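As an offline sanity check, you can replay the claims from a test request against the same thresholds in plain Python. This helper is purely illustrative and not part of the Lucid CLI:

```python
import json

# Illustrative helper, not part of the Lucid CLI: applies the policy's
# thresholds to the claims section of a test-request document.
def expected_decision(request):
    claims = request["context"]["claims"]
    if claims.get("injection_risk", 0.0) > 0.7 or claims.get("toxic_content", 0.0) > 0.7:
        return "deny"
    if claims.get("toxic_content", 0.0) > 0.4:
        return "warn"
    return "allow"

# The same claims as test-injection.json:
injection = json.loads(
    '{"context": {"phase": "request", '
    '"claims": {"injection_risk": 0.95, "toxic_content": 0.2}}}'
)
print(expected_decision(injection))  # -> deny
```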
Step 4: Deploy the Policy
Push the policy to your agent using the CLI. A successful push reports:
[+] Policy pushed to agent: my-agent
[+] Gateway will pick up changes within 60 seconds
Step 5: Verify in the Observer
Open the Observer to see your policy in action:
In the Observer, navigate to the Policy tab to see:
- Your Cedar policy displayed with syntax highlighting
- Real-time evaluation results for incoming requests
- Claims referenced by each rule
- Decision history (allow/deny/warn)
Step 6: Iterate
Adding PII Protection
Extend your policy to handle PII claims from the PII Compliance Auditor:
// Block if PII detected and agent is not authorized
forbid(principal, action == Action::"invoke", resource)
when { context.claims.pii_count > 0 }
unless { resource has has_pii_access && resource.has_pii_access };
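The when/unless combination is worth spelling out: a forbid with both clauses fires only when the when condition holds and the unless condition does not. A minimal Python rendering of that truth table (a hypothetical helper, not Lucid code):

```python
# Illustrative: a forbid with both `when` and `unless` blocks the request
# only if the `when` condition is true AND the `unless` condition is false.
def pii_blocked(pii_count, agent_has_pii_access):
    return pii_count > 0 and not agent_has_pii_access

print(pii_blocked(3, False))  # -> True  (PII found, agent unauthorized: blocked)
print(pii_blocked(3, True))   # -> False (agent is authorized: allowed)
print(pii_blocked(0, False))  # -> False (no PII detected: allowed)
```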
Adding Response-Phase Rules
Add rules that check the model's response:
// Block hallucinated responses
forbid(principal, action == Action::"invoke", resource)
when {
context.phase == "response" &&
context.claims.hallucination_score > 0.7
};
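Because the rule names context.phase, it only fires during response evaluation; a request-phase pass ignores hallucination_score entirely. A small illustrative check (not Lucid code) makes the phase gating concrete:

```python
# Illustrative: phase-gated rules are inert outside their phase, so the
# same claim value produces different outcomes at request vs. response time.
def hallucination_blocked(context):
    claims = context["claims"]
    return (context["phase"] == "response"
            and claims.get("hallucination_score", 0.0) > 0.7)

print(hallucination_blocked({"phase": "request", "claims": {"hallucination_score": 0.9}}))   # -> False
print(hallucination_blocked({"phase": "response", "claims": {"hallucination_score": 0.9}}))  # -> True
```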
Adding Compliance Annotations
Map your rules to regulatory controls:
@compliance_framework("OWASP LLM Top 10")
@control_id("LLM-01")
@description("Prompt injection prevention")
forbid(principal, action == Action::"invoke", resource)
when { context.claims.injection_risk > 0.7 };
Using the Policy Editor Instead
If you prefer a visual approach, use the Observer's policy editor:
- Open the editor: lucid open my-agent --config
- Navigate to the Policy section
- Use the IFTTT tab to add rules by clicking:
  - IF injection_risk greater than 0.7 THEN Block
  - IF toxic_content greater than 0.7 THEN Block
- The editor generates the same Cedar policy you wrote by hand
See the Policy Editor Guide for details on all three editor modes.
Complete Policy
Here is the final policy with all additions:
// ===========================================
// My Agent Security Policy
// ===========================================
// --- Request Phase ---
// Block injection attempts
@compliance_framework("OWASP LLM Top 10")
@control_id("LLM-01")
forbid(principal, action == Action::"invoke", resource)
when { context.claims.injection_risk > 0.7 };
// Block toxic content
forbid(principal, action == Action::"invoke", resource)
when { context.claims.toxic_content > 0.7 };
// Warn on moderate toxicity
@decision("warn")
forbid(principal, action == Action::"invoke", resource)
when {
context.claims.toxic_content > 0.4 &&
context.claims.toxic_content <= 0.7
};
// Block PII unless authorized
@compliance_framework("GDPR")
@control_id("Art.5(1)(c)")
forbid(principal, action == Action::"invoke", resource)
when { context.claims.pii_count > 0 }
unless { resource has has_pii_access && resource.has_pii_access };
// --- Response Phase ---
// Block hallucinations
forbid(principal, action == Action::"invoke", resource)
when {
context.phase == "response" &&
context.claims.hallucination_score > 0.7
};
// --- Default ---
permit(principal, action == Action::"invoke", resource);
Next Steps
- Cedar Policies Guide - Advanced Cedar patterns, scoping, and testing
- Policy Editor Guide - Using the three-tab visual editor
- Cedar Schema Reference - Full entity, action, and context schema
- Claim Vocabulary - All claim names available from built-in auditors
- Auditor Development Guide - Build custom ClaimsAuditors for your own claims