Chapter 04 of 08

The Regulatory Landscape

Six regulatory frameworks mapped to agent-specific controls. The tables your CISO hands to the auditor.

AI agents aren't exempt from existing regulation. They're a new class of actor that triggers existing requirements — often requirements that were designed for humans or traditional software. This chapter maps each major regulatory framework to the Five Pillars, with specific articles, agent-relevant obligations, and the governance controls that satisfy them.

How to use this chapter

If you're a CISO: Print the control mapping table for your relevant frameworks. Hand it to your auditor during the next assessment. It maps each regulatory obligation to a specific, implementable governance control.

If you're a CIO: Use the summary table at the end to scope your governance program. Not every framework applies to every organization — but the ones that do apply are non-negotiable.


EU AI Act

Regulation (EU) 2024/1689

Full enforcement: 2 August 2026 (5 months away)

The EU AI Act is the world's first comprehensive AI regulation. It entered into force on 1 August 2024. Prohibited practices and AI literacy obligations applied from 2 February 2025. High-risk AI system rules become fully applicable on 2 August 2026. Compliance experts estimate 32-56 weeks to achieve compliance — if you haven't started, you are already behind the curve.

Most enterprise AI agent deployments in regulated industries trigger high-risk classification under Article 6 and Annex III — particularly agents involved in employment decisions, credit scoring, critical infrastructure, or law enforcement.

| Article | Requirement | Agent-Specific Obligation | Pillar | Control | Level |
| --- | --- | --- | --- | --- | --- |
| Art. 9 | Risk management system | Continuous risk assessment for AI agent operations. Identify and mitigate risks throughout the agent lifecycle. | Verification | Policy-as-Code analysis detects contradictions, privilege escalation paths, and fail-open gaps at deploy time. Runtime verification checks cross-step compliance. | Automated |
| Art. 10 | Data governance | Training data quality. Agents must not perpetuate bias or use inappropriate data. | Audit | LLM Gateway pipeline: PII redaction before the model, content moderation on output. Data flow logged in audit trail. | Automated |
| Art. 12 | Record-keeping | Automatic logging of agent actions with sufficient detail for post-incident analysis. | Audit | Hash-chained audit logs. Every tool call logged with actor, target, action, result, cost. HMAC integrity. 7+ year retention. SIEM export. | Automated |
| Art. 13 | Transparency | Users must know they're interacting with AI. Agent capabilities and limitations must be documented. | Identity | Agent identity visible in all interactions. Verifiable Credentials carry capability declarations. System prompts visible (no black box). | Semi-auto |
| Art. 14 | Human oversight | Humans must be able to monitor, interpret, and override AI agent actions. Prevent over-reliance. | Authorization | Kill switch hierarchy (agent → team → tenant). Approval workflows for sensitive operations. Human-in-the-loop enforcement. Progressive autonomy levels (reactive → proactive → autonomous). | Automated |
| Art. 15 | Accuracy, robustness, cybersecurity | AI systems must be resilient to adversarial attacks. Output must be accurate and reproducible. | Verification | Multi-LLM cross-checking (PVP). Prompt injection detection in Gateway. Output validation. Content moderation. | Automated |
| Art. 99 | Penalties | Prohibited practices: up to €35M or 7% of global turnover. Other non-compliance (including high-risk obligations): up to €15M or 3%. Misinformation to authorities: up to €7.5M or 1%. | — | — | — |
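The hash-chained, HMAC-verified logging that Art. 12 calls for can be sketched in a few lines. This is an illustrative minimal sketch, not the platform's implementation: each entry's HMAC covers the previous entry's hash, so altering any record breaks the chain, and forging a consistent chain requires the key. The key, field names, and helper functions here are assumptions; a real deployment would hold the key in a KMS and rotate it.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key"  # assumption: in production, KMS-held and rotated

def append_entry(log, actor, action, target, result):
    # Link this record to the previous one via its hash (genesis = zeros).
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"actor": actor, "action": action, "target": target,
              "result": result, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    # Recompute every HMAC and check each link back to the previous record.
    prev_hash = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(record["hash"], expected):
            return False
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, "agent:invoice-bot", "tool_call:crm.read", "contact:42", "ok")
append_entry(log, "agent:invoice-bot", "tool_call:erp.write", "invoice:7", "ok")
assert verify_chain(log)

log[0]["result"] = "denied"   # tamper with an earlier record...
assert not verify_chain(log)  # ...and the chain no longer verifies
```

The point for the auditor: tampering is not prevented, but it is always detectable, which is what makes the log admissible evidence.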

GDPR

Regulation (EU) 2016/679

In force since 25 May 2018. Applies to all AI systems processing EU personal data.

GDPR doesn't mention AI agents specifically — but every agent that processes personal data of EU residents is subject to it. The key challenge: agents process data at machine speed across tools, making traditional consent and purpose limitation controls insufficient without automation.

| Article | Requirement | Agent-Specific Obligation | Pillar | Control | Level |
| --- | --- | --- | --- | --- | --- |
| Art. 5(1)(b) | Purpose limitation | Agent must only access data for the purpose it was collected. Cross-purpose usage by agents must be prevented. | Authorization | Per-tool authorization policies restrict which data each agent can access. Scope bound to workspace/team. Default deny. | Automated |
| Art. 5(1)(c) | Data minimization | Agent should access only the minimum data necessary for its task. | Authorization | Fine-grained tool policies. Agent authorized for "CRM read contacts" but not "CRM read all." Resource-level scoping. | Automated |
| Art. 5(1)(f) | Integrity and confidentiality | Personal data processed by agents must be protected against unauthorized access and accidental loss. | Audit + Identity | Envelope encryption (AES-256-GCM). BYOK mandatory. Per-agent cryptographic identity. TLS 1.3 in transit. BYOS for data residency. | Automated |
| Art. 22 | Automated decision-making | Data subjects have the right not to be subject to automated decisions with legal effects. Agents making such decisions need human review. | Authorization | Approval workflows require human sign-off for high-stakes agent actions. Four-eyes principle enforcement via governance packs. | Semi-auto |
| Art. 25 | Data protection by design | Agent platform must implement privacy controls as architectural defaults, not afterthoughts. | All | PII redaction in LLM Gateway (before data reaches LLM). Encryption auto-triggered by GDPR governance pack. DLP scanning on tool inputs/outputs. | Automated |
| Art. 30 | Records of processing | Maintain records of all processing activities by AI agents. | Audit | Comprehensive audit trail. Every tool call = a processing activity record. Searchable, exportable, SIEM-integrated. | Automated |
| Art. 32 | Security of processing | Appropriate technical measures: encryption, access control, regular testing. | Identity + Authorization | Cryptographic agent identity. OpenFGA authorization. Envelope encryption. Key rotation. Access reviews. | Automated |
| Art. 33 | Breach notification (72h) | Detect and report breaches involving agent-processed data within 72 hours. | Audit | SIEM real-time export. Kill switch for immediate containment. Incident management with SLAs. Audit hash chain detects tampering. | Semi-auto |
| Art. 35 | Data Protection Impact Assessment | DPIA required for automated processing at scale. | Verification | Compliance reports auto-generated. Evidence auto-collected. GDPR governance pack produces framework-specific assessment data. | Semi-auto |
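The default-deny, resource-scoped authorization behind Arts. 5(1)(b) and 5(1)(c) reduces to one rule: a tool call proceeds only if an explicit grant matches both the tool and the scope, and everything else is denied. Here is a minimal sketch; the policy table, agent names, and scope fields are illustrative assumptions, and a production system would delegate this decision to an engine such as OpenFGA rather than a dictionary lookup.

```python
# Hypothetical policy store: agent -> tool -> scope. Anything absent is denied.
POLICIES = {
    "agent:support-bot": {
        # Allowed to read contacts, but only within one workspace.
        "crm.read_contacts": {"workspace": "eu-support"},
        # Note: no "crm.read_all" and no "crm.write" -> denied by default.
    },
}

def is_authorized(agent, tool, workspace):
    grants = POLICIES.get(agent, {})   # unknown agent -> no grants at all
    scope = grants.get(tool)           # unknown tool  -> None -> deny
    return scope is not None and scope.get("workspace") == workspace

assert is_authorized("agent:support-bot", "crm.read_contacts", "eu-support")
assert not is_authorized("agent:support-bot", "crm.read_all", "eu-support")     # minimization
assert not is_authorized("agent:support-bot", "crm.read_contacts", "us-sales")  # purpose limitation
assert not is_authorized("agent:unknown", "crm.read_contacts", "eu-support")    # default deny
```

The design choice that matters is the direction of the default: absence of a grant is a denial, so a misconfigured or forgotten agent fails closed.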

HIPAA

45 CFR Parts 160, 164

Proposed Security Rule amendments (Jan 2025) make previously optional safeguards mandatory by 2026.

Any AI agent that accesses, processes, or transmits Protected Health Information (PHI) is subject to HIPAA. The proposed 2025 Security Rule amendments are strengthening requirements around encryption, audit logging, and access controls — with specific attention to AI systems. By 2026, healthcare organizations must maintain a detailed inventory of AI tools and comprehensive audit logs for any AI interactions involving PHI.

| Section | Safeguard | Agent Obligation | Pillar | Control | Level |
| --- | --- | --- | --- | --- | --- |
| §164.308(a)(1) | Security management process | Risk analysis and management for all AI agent systems accessing PHI. | Verification | Policy analysis at deploy time. Continuous compliance monitoring via governance packs. Risk scoring. | Semi-auto |
| §164.308(a)(3) | Workforce security | Ensure only authorized agents access PHI. Terminate access when no longer needed. | Authorization | Per-agent authorization. Lifecycle management (create, deploy, pause, retire). Access reviews. Revocation is instant. | Automated |
| §164.308(a)(4) | Information access management | Policies for granting agent access to PHI. Minimum necessary standard. | Authorization | Fine-grained tool policies. "Agent X can read patient records but not write." Resource-level scoping. Default deny. | Automated |
| §164.312(a)(1) | Access control (Technical) | Unique agent identification. Emergency access procedures. Automatic session timeout. | Identity | Per-agent SPIFFE IDs. JWT-SVIDs with 1-hour TTL. Session management. Kill switch for emergency access revocation. | Automated |
| §164.312(b) | Audit controls | Record and examine all agent activity involving PHI. | Audit | Every tool call logged. LLM Gateway pipeline audit. Hash-chained integrity. Separate audit database support. | Automated |
| §164.312(c)(1) | Integrity | Protect PHI from improper alteration or destruction by agents. | Audit + Authorization | HMAC integrity on audit logs. Write authorization required (read-only by default). Content validation in Gateway. | Automated |
| §164.312(d) | Person or entity authentication | Verify that an agent is who it claims to be before granting PHI access. | Identity | Cryptographic identity verification. SPIFFE trust bundles for cross-org. OAuth 2.0 Token Exchange for delegation. | Automated |
| §164.312(e)(1) | Transmission security | Encrypt PHI in transit when processed by agents. | Federation | TLS 1.3 for all API traffic. AES-256-GCM for cross-org federation. Envelope encryption for data at rest. | Automated |
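The short-lived credentials in §164.312(a)(1) and the authentication check in §164.312(d) can be illustrated with a self-contained, HMAC-signed token carrying a SPIFFE-style identity and a 1-hour expiry. This is a sketch under stated assumptions only: a real deployment would obtain JWT-SVIDs from a SPIFFE/SPIRE workload API rather than signing tokens by hand, and the signing key here is a placeholder.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"  # assumption: KMS-held and rotated in production

def issue_token(spiffe_id, ttl_seconds=3600, now=None):
    # Claims: subject identity, issued-at, and a hard expiry one hour out.
    now = time.time() if now is None else now
    claims = {"sub": spiffe_id, "iat": int(now), "exp": int(now + ttl_seconds)}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token, now=None):
    # Returns the agent identity on success, None on any failure.
    now = time.time() if now is None else now
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature invalid: not one of our agents
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < now:
        return None  # expired: the agent must re-attest
    return claims["sub"]

t0 = 1_700_000_000
token = issue_token("spiffe://hospital.example/agent/triage-bot", now=t0)
assert verify_token(token, now=t0 + 600) == "spiffe://hospital.example/agent/triage-bot"
assert verify_token(token, now=t0 + 3700) is None  # past the 1-hour TTL
```

The short TTL is the control: a leaked credential is useful for minutes, not months, and revocation happens by simply not reissuing.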

SOX

Sarbanes-Oxley Act, Section 404

Applies to all publicly traded companies. Continuous compliance required.

SOX Section 404 requires management to assess the effectiveness of internal controls over financial reporting. When AI agents are involved in financial processes — invoice processing, reconciliation, expense approval, financial analysis — they become part of the internal control environment. Auditors need to verify that the agent's control trail is as auditable as a human's.

| SOX Control | Requirement | Agent Obligation | Pillar | Control | Level |
| --- | --- | --- | --- | --- | --- |
| COSO: Control Environment | Tone at the top; ethical values | Agent behavior governed by explicit policies, not implicit LLM "values." | Authorization | Cascading governance policies (Platform → Tenant → App → Team → Agent). Policies are code, not documents. | Automated |
| COSO: Risk Assessment | Identify and manage risks to financial reporting | Risk assessment for agent actions that affect financial data. | Verification | Policy analysis detects risks at deploy time. Budget constraints enforced per-agent. Cost tracking per-call. | Automated |
| Separation of Duties | No single person/system controls all aspects of a financial transaction | Agent that proposes a payment must not be the same agent that approves it. | Verification + Authorization | Cross-step verification detects SoD violations. Four-eyes governance module enforces dual approval. Execution certificates prove compliance. | Automated |
| Audit Trail | Complete, immutable record of financial transactions | Every agent action on financial data must be logged with full context. | Audit | Hash-chained, HMAC-verified audit logs. Pipeline hash proves no stage was bypassed. 7+ year retention. Separate audit DB. | Automated |
| Access Controls | Restrict access to financial systems | Agents accessing financial tools must have explicit, scoped authorization. | Authorization | Per-tool policies for financial tools (e.g., "invoice_approve" requires approval workflow). SOX governance pack auto-applies rules. | Automated |
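The separation-of-duties and four-eyes rules above boil down to two checks at approval time: the approving identity must differ from the proposing one, and high-value actions require a human. A minimal sketch, where the threshold, approver list, and identity names are illustrative assumptions:

```python
# Hypothetical set of human identities permitted to approve large payments.
HUMAN_APPROVERS = {"user:cfo", "user:controller"}

def approve_payment(proposed_by, approved_by, amount_eur):
    # SoD: the proposer can never be the approver, human or agent.
    if approved_by == proposed_by:
        return (False, "SoD violation: proposer cannot approve")
    # Four-eyes with escalation: above the threshold, only a human may sign off.
    if amount_eur >= 10_000 and approved_by not in HUMAN_APPROVERS:
        return (False, "four-eyes: human approval required above threshold")
    return (True, "approved")

assert approve_payment("agent:ap-bot", "agent:ap-bot", 500)[0] is False       # self-approval
assert approve_payment("agent:ap-bot", "agent:review-bot", 500)[0] is True    # dual control
assert approve_payment("agent:ap-bot", "agent:review-bot", 50_000)[0] is False  # needs a human
assert approve_payment("agent:ap-bot", "user:cfo", 50_000)[0] is True
```

Returning a reason string alongside the decision matters for SOX: the denial itself becomes an auditable record, not a silent failure.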

DORA

Regulation (EU) 2022/2554

Applied from 17 January 2025. Affects all EU financial entities.

DORA (Digital Operational Resilience Act) requires financial entities to ensure their ICT systems — including AI agents — are resilient, recoverable, and continuously monitored. AI agents that participate in financial operations are ICT services subject to DORA's full scope.

| Article | Requirement | Agent Obligation | Pillar | Control | Level |
| --- | --- | --- | --- | --- | --- |
| Art. 5-6 | ICT risk management framework | Identify, classify, and manage risks from AI agent operations. | Verification | Policy analysis at deploy time. Governance cascade ensures consistent risk management from org to agent level. | Semi-auto |
| Art. 9 | Protection and prevention | Protect ICT systems from AI agent misuse or compromise. | Authorization | Default-deny authorization. Prompt injection detection. Content moderation. PII redaction. Rate limiting. | Automated |
| Art. 10 | Detection | Detect anomalous agent behavior and security incidents. | Audit | Real-time SIEM export. Anomaly detection via audit log analysis. Kill switch triggers on threshold breaches. | Automated |
| Art. 11 | Response and recovery | Rapid containment and recovery from AI agent incidents. | Authorization | Kill switch hierarchy (seconds, not hours). Agent pause/suspend/emergency_stop. Incident management with SLAs (15min P1). | Automated |
| Art. 28-30 | Third-party ICT risk | Manage risks from LLM providers, tool services, and federated agents. | Federation | BYOK mandatory (your keys, not vendor's). Circuit breaker on external services. Trust relationships are explicit and revocable. Health monitoring. | Semi-auto |
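The circuit breaker on external services referenced under Arts. 28-30 works by failing fast once a provider has failed repeatedly, then probing again after a cooldown instead of hammering a degraded dependency. A minimal sketch; the threshold and cooldown values are illustrative assumptions:

```python
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, cooldown_seconds=30):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.failures = 0
        self.opened_at = None  # None = closed (normal operation)

    def call(self, fn, now=None):
        now = time.time() if now is None else now
        if self.opened_at is not None:
            if now - self.opened_at < self.cooldown_seconds:
                # Open: fail fast without touching the provider.
                raise RuntimeError("circuit open: provider unavailable")
            # Half-open: cooldown elapsed, allow one probe through.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = now  # trip the breaker
            raise
        self.failures = 0  # any success resets the count
        return result

breaker = CircuitBreaker()

def flaky_llm_call():
    raise ConnectionError("provider down")

for _ in range(3):           # three consecutive failures trip the breaker
    try:
        breaker.call(flaky_llm_call, now=100.0)
    except ConnectionError:
        pass

try:
    breaker.call(flaky_llm_call, now=101.0)
except RuntimeError as e:    # now it fails fast, provider never invoked
    assert "circuit open" in str(e)
```

The resilience property DORA cares about: a failing LLM provider degrades one integration rather than stalling every agent that depends on it.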

NIS2

Directive (EU) 2022/2555

Member state transposition deadline: 17 October 2024. Enforcement ongoing.

NIS2 applies to essential and important entities across 18 sectors. AI agents operating within these entities' infrastructure are subject to NIS2's cybersecurity risk management requirements. The directive emphasizes supply chain security — relevant when agents use external LLM APIs or federate with other organizations' agents.

| Article | Requirement | Agent Obligation | Pillar | Control | Level |
| --- | --- | --- | --- | --- | --- |
| Art. 21(2)(a) | Risk analysis and security policies | Security policies must cover AI agent operations. | All | Governance packs codify security policies per framework. Cascading policies enforce at every level. | Automated |
| Art. 21(2)(b) | Incident handling | Detect, respond to, and recover from AI agent security incidents. | Audit + Authorization | Kill switch for containment. SIEM export for detection. Incident management workflows. Hash-chained evidence. | Automated |
| Art. 21(2)(d) | Supply chain security | Manage risks from LLM providers, tool integrations, and federated agents. | Federation | BYOK (own keys). BYOS (own storage). Trust bundle verification for federation. Circuit breaker health monitoring. Vendor independence (6 LLM providers). | Semi-auto |
| Art. 21(2)(i) | Human resources security | Access control policies for AI agents alongside human workforce. | Identity + Authorization | Agents as first-class identities in IAM. SPIFFE IDs. Access reviews include agents. SCIM provisioning for user lifecycle. | Automated |
| Art. 23 | Reporting obligations | Report significant incidents within 24h (early warning) / 72h (full notification). | Audit | Real-time SIEM export enables immediate detection. Compliance reports auto-generated. Incident timeline reconstructable from audit trail. | Semi-auto |
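The Art. 23 clocks are simple but unforgiving: both deadlines run from the moment of detection, which is why the detection timestamp recorded in the audit trail matters. A small illustration (field names are assumptions):

```python
from datetime import datetime, timedelta

def reporting_deadlines(detected_at):
    # NIS2 Art. 23: early warning within 24h, full notification within 72h,
    # both measured from when the entity became aware of the incident.
    return {
        "early_warning_due": detected_at + timedelta(hours=24),
        "full_notification_due": detected_at + timedelta(hours=72),
    }

detected = datetime(2025, 3, 1, 9, 30)
deadlines = reporting_deadlines(detected)
assert deadlines["early_warning_due"] == datetime(2025, 3, 2, 9, 30)
assert deadlines["full_notification_due"] == datetime(2025, 3, 4, 9, 30)
```

If detection itself is delayed, the clock effectively shrinks, which is the operational argument for real-time SIEM export over batch log review.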

Cross-Framework Summary

The table below maps each Pillar to its regulatory justification across all six frameworks. Use this to prioritize: if a Pillar is required by every framework your organization is subject to, it's non-negotiable.

[Figure: Pillar × Regulation coverage heatmap. Brighter cells indicate stronger regulatory requirements. Every pillar is required by five or more of the six frameworks; the only gap is Federation under SOX.]
| Pillar | EU AI Act | GDPR | HIPAA | SOX | DORA | NIS2 |
| --- | --- | --- | --- | --- | --- | --- |
| Identity | Art. 13 (transparency) | Art. 32 (security) | §164.312(a)(1), (d) | Access controls | Art. 9 | Art. 21(2)(i) |
| Authorization | Art. 14 (oversight) | Art. 5, 22, 25 | §164.308(a)(3-4) | SoD, access controls | Art. 9, 11 | Art. 21(2)(a) |
| Verification | Art. 9, 15 | Art. 35 (DPIA) | §164.308(a)(1) | Risk assessment, SoD | Art. 5-6 | Art. 21(2)(a) |
| Audit | Art. 10, 12 | Art. 30, 33 | §164.312(b), (c)(1) | Audit trail | Art. 10 | Art. 21(2)(b), 23 |
| Federation | Art. 15 (cybersecurity) | Art. 5(1)(f) | §164.312(e)(1) | — | Art. 28-30 | Art. 21(2)(d) |

Key Governance Modules by Framework

The governance platform implements 20 modular controls. The diagram below shows which modules satisfy which regulatory framework — allowing you to activate only the modules your regulations require.

[Figure: Key governance modules × regulatory frameworks. Modules shown: Kill Switch, PII Redaction, Multi-LLM Verify, Audit Logs, Four-Eyes Approval, Encryption at Rest, CoT Logging, SIEM Integration, BYOK / Data Residency. Each module is marked where a framework requires it; brighter marks indicate stronger requirements.]

Chapter Summary

Every major regulatory framework — whether designed for AI (EU AI Act), for data protection (GDPR), for healthcare (HIPAA), for financial controls (SOX), for operational resilience (DORA), or for cybersecurity (NIS2) — requires the same architectural capabilities from AI agent deployments: identity, authorization, verification, audit, and federation.

The Five Pillars aren't an abstract framework — they're the minimum viable governance architecture to satisfy regulatory requirements across regulated industries. The control mapping tables in this chapter provide the specific, article-by-article evidence your auditor needs.

The next chapter presents the Reference Architecture — how these controls are implemented as a technical system, from the LLM Gateway pipeline to cascading governance to envelope encryption.