Chapter 02 of 08

The Maturity Model

Five levels of AI governance maturity. Most enterprises are at Level 1 or 2. Regulated industries need Level 4+. Here's how to get there.

Why a Maturity Model?

Governance isn't binary. You don't go from "ungoverned" to "fully compliant" in one step. Organizations need a framework to assess where they are, define where they need to be, and chart the path between — with measurable milestones at each stage.

The AI Governance Maturity Model (AGMM) defines five levels. Each level builds on the previous one. Each level delivers tangible value. The goal isn't perfection — it's continuous improvement with verifiable progress.


The Five Levels

AI GOVERNANCE MATURITY MODEL — 5 LEVELS

L1 Ad-hoc: no governance
L2 Experimental: pilot + policy
L3 Managed: central platform, identity + audit
L4 Governed: crypto identity, per-call authz, compliance packs, verification ← REGULATED TARGET
L5 Industrial: cross-org federation, trust verification, agent marketplace

Level 1 — Ad-hoc

Characteristics: Individual employees use AI tools. No central inventory. No policy beyond "don't share secrets." No audit trail. Management doesn't know which AI tools are in use or what data they access.

Dimension | State at Level 1
Inventory | Nobody knows what AI tools are in use
Identity | Shared API keys or personal accounts
Authorization | Full access or no access
Audit | None, or application-level logs only
Compliance | AI not mentioned in compliance program
Cost control | Unknown spend, charged to individual credit cards
Incident response | "Turn it off" (if anyone knows where "it" is)

Where most enterprises are in 2026

McKinsey's 2025 State of AI report found that while 23% of organizations are scaling agentic AI, 90% of transformative use cases remain stuck in pilot mode. Only 37% of organizations have AI governance policies (ISACA, 2025). Gartner predicts over 40% of agentic AI projects will fail by 2027 due to governance and control issues. If your organization is at Level 1, you're not behind — you're normal. But "normal" is no longer safe. Governance spending is projected to reach $492 million in 2026 (Gartner) because the market has realized the gap is existential, not optional.

Level 2 — Experimental

Characteristics: IT acknowledges AI usage. A pilot program exists. Some tools are sanctioned. An AI policy is written. But enforcement is manual and sporadic. Audit trails exist for sanctioned tools only.

Dimension | State at Level 2
Inventory | Partial — sanctioned tools known, shadow AI still exists
Identity | Service accounts for official tools, personal accounts for the rest
Authorization | Coarse-grained (admin/user), per-application
Audit | Application-level logs for sanctioned tools
Compliance | AI mentioned in policy, but no technical controls
Cost control | Departmental budgets, no per-agent attribution
Incident response | Disable the service account (1-4 hour response)

Level 2 is where most "AI-forward" enterprises land after their first governance initiative. It feels like progress — and it is — but it leaves critical gaps. Shadow AI still exists alongside the official program. Authorization is too coarse to enforce least-privilege for agents. Compliance is based on policy, not enforcement.

Level 3 — Managed

Characteristics: Central AI platform with agent inventory. Per-agent identity (service accounts with scoped permissions). Tool-level authorization policies. Centralized audit logging. Cost attribution per agent. Manual compliance checks.

Dimension | State at Level 3
Inventory | Complete — all agents registered in central platform
Identity | Per-agent service accounts with unique identifiers
Authorization | Per-tool policies (e.g., "Agent X can read CRM but not write")
Audit | Centralized, searchable audit logs for all agent actions
Compliance | Manual compliance checks; evidence collection is semi-automated
Cost control | Per-agent cost tracking and budget alerts
Incident response | Kill switch per agent, team, or tenant (seconds, not hours)

Level 3 is the minimum for production deployment in non-regulated industries. You know what agents exist, what they can do, what they did, and how much it cost. You can stop any agent instantly. This is the "table stakes" level for taking AI agents seriously.
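The per-tool policies that define Level 3 authorization can be sketched as an explicit allowlist lookup: each agent is granted specific (tool, action) pairs, and anything unlisted is denied. This is a minimal illustration, not any particular platform's API; the agent and tool names are hypothetical.

```python
# Minimal sketch of Level 3 per-tool authorization: each agent has an
# explicit allowlist of (tool, action) pairs; anything unlisted is denied.
# Agent and tool names are illustrative.

POLICIES = {
    "agent-crm-assistant": {("crm", "read")},                      # can read CRM, not write
    "agent-billing-bot": {("crm", "read"), ("invoices", "write")},
}

def is_allowed(agent_id: str, tool: str, action: str) -> bool:
    """Return True only if the (tool, action) pair is explicitly granted."""
    return (tool, action) in POLICIES.get(agent_id, set())

print(is_allowed("agent-crm-assistant", "crm", "read"))   # True
print(is_allowed("agent-crm-assistant", "crm", "write"))  # False
print(is_allowed("agent-unknown", "crm", "read"))         # False: unregistered agents get nothing
```

Note that an unregistered agent falls through to an empty set, so the lookup fails closed even at Level 3; Level 4 extends this same shape to every individual tool call with default deny.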

Level 4 — Governed

Characteristics: Cryptographic agent identity. Fine-grained authorization with default deny. Cascading governance policies from organization to individual agent. Automated compliance with framework-specific controls. Mathematical verification of agent behavior. Tamper-evident audit trails.

Dimension | State at Level 4
Inventory | Complete with lifecycle management (create, deploy, pause, retire)
Identity | Cryptographic (SPIFFE IDs, Verifiable Credentials, JWT-SVIDs)
Authorization | Per-tool-call authorization (OpenFGA/Zanzibar). Default deny. 190+ tool policies.
Audit | Hash-chained, HMAC-verified, tamper-evident. SIEM-exportable. Separate audit DB.
Compliance | Automated: governance packs per framework (GDPR, HIPAA, SOX, EU AI Act, DORA). Evidence auto-collected.
Cost control | Per-call metering, per-agent budgets, spending policies with approval gates
Incident response | Kill switch hierarchy (agent → team → tenant). Cascading. Auto-notification.
Verification | Multi-LLM cross-checking (PVP). Policy-as-Code with cryptographic execution certificates.
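The hash-chained, HMAC-verified audit property can be sketched in a few lines: each entry's MAC covers the previous entry's MAC, so deleting or editing any record invalidates everything after it. This is an illustration of the principle, not a production design; a real system would source the signing key from a KMS or HSM and write to a separate audit database.

```python
import hashlib
import hmac
import json

SECRET = b"audit-signing-key"  # illustrative only; in practice, from a KMS/HSM

def append_event(chain: list, event: dict) -> None:
    """Append an event whose MAC covers the previous entry's MAC,
    chaining every record to all records before it."""
    prev_mac = chain[-1]["mac"] if chain else "genesis"
    payload = json.dumps(event, sort_keys=True) + prev_mac
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    chain.append({"event": event, "mac": mac})

def verify(chain: list) -> bool:
    """Recompute every MAC; any tampering breaks the chain from that point on."""
    prev_mac = "genesis"
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True) + prev_mac
        if hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest() != entry["mac"]:
            return False
        prev_mac = entry["mac"]
    return True

log = []
append_event(log, {"agent": "agent-x", "tool": "crm", "action": "read"})
append_event(log, {"agent": "agent-x", "tool": "mail", "action": "send"})
print(verify(log))                    # True
log[0]["event"]["action"] = "write"   # tamper with the first entry
print(verify(log))                    # False
```

The same construction is why a tampered log cannot be silently trimmed: removing an entry changes the MAC input for every subsequent record.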

Level 4 is the target for regulated industries

If your organization is subject to GDPR, HIPAA, SOX, DORA, EU AI Act, or NIS2, Level 4 is not aspirational — it's required. The specific governance controls map directly to regulatory obligations. Chapter 4 (Regulatory Landscape) provides the detailed mapping.
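The cascading kill-switch hierarchy (agent → team → tenant) amounts to a tree walk: pausing a node pauses everything beneath it. The hierarchy and names below are hypothetical, a sketch of the cascading behavior rather than any specific platform's implementation.

```python
# Sketch of a cascading kill switch: pausing a tenant pauses its teams,
# and pausing a team pauses its agents. Names are illustrative.

HIERARCHY = {
    "tenant-acme": ["team-sales", "team-support"],
    "team-sales": ["agent-quoter", "agent-crm-assistant"],
    "team-support": ["agent-triage"],
}

def kill(node: str, paused: set = None) -> set:
    """Pause a node and everything below it; return all paused nodes."""
    paused = paused if paused is not None else set()
    paused.add(node)
    for child in HIERARCHY.get(node, []):
        kill(child, paused)
    return paused

print(sorted(kill("team-sales")))
# ['agent-crm-assistant', 'agent-quoter', 'team-sales']
```

Killing at the tenant level walks the whole subtree, which is what makes the response time seconds rather than hours: one command, every descendant halted.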

Level 5 — Industrial

Characteristics: Cross-organizational agent federation. Trust verification across company boundaries. Per-call skill marketplace. Agent reputation scores. Automated compliance certification. The "Internet of Agents" operating at industrial scale.

Dimension | State at Level 5
Inventory | Federated directory across organizations (AGNTCY, OASF)
Identity | Cross-org verification via SPIFFE trust bundles + OAuth 2.0 Token Exchange
Authorization | Cross-org TBAC (Tool-Based Access Control) with delegation chains
Audit | Cross-org audit correlation. Federated evidence packages.
Compliance | Governance certifications (e.g., "GDPR Verified Agent"). Cross-org compliance attestation.
Federation | SLIM protocol for cross-org messaging. MLS encryption (RFC 9420). Circuit-breaker health monitoring.
Economics | Per-call skill marketplace. Agent trust scores. Reputation-weighted routing.

Level 5 is emerging. Standards are being defined (AGNTCY/Cisco, Linux Foundation AI Card). Early implementations exist. Most organizations should target Level 4 first and plan for Level 5 as the ecosystem matures.


The Maturity Assessment Matrix

Use this matrix to assess your organization's current state. For each dimension, identify which level best describes your current reality — not your aspirations or your policy documents, but what actually happens day-to-day.

Dimension | L1 | L2 | L3 | L4 | L5
Agent Inventory | Unknown | Partial | Complete | + Lifecycle | + Federated
Identity | None | Shared keys | Per-agent ID | Cryptographic | Cross-org
Authorization | None | Admin/User | Per-tool | Per-call + TBAC | Cross-org delegation
Audit | None | App-level | Centralized | Hash-chained | Federated
Compliance | None | Policy doc | Manual checks | Automated packs | Cross-org certs
Cost Control | Unknown | Departmental | Per-agent | Per-call + gates | Marketplace
Incident Response | Find the terminal | Disable account | Kill switch | Kill hierarchy | Cross-org halt

Scoring: Rate each dimension against the matrix, then take the minimum. Your overall maturity is the lowest level among your dimensions, not the average. If your identity is at L3 but your audit is at L1, your effective maturity is L1. The chain is only as strong as its weakest link.
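The weakest-link rule is literally a minimum over the dimension scores. A minimal sketch, with illustrative scores for a hypothetical organization:

```python
# Weakest-link scoring: overall maturity is the minimum level across
# all dimensions. Example scores are illustrative.

dimension_levels = {
    "inventory": 3,
    "identity": 3,
    "authorization": 2,
    "audit": 1,            # the weakest link
    "compliance": 2,
    "cost_control": 3,
    "incident_response": 2,
}

overall = min(dimension_levels.values())
weakest = min(dimension_levels, key=dimension_levels.get)
print(f"Effective maturity: L{overall} (weakest dimension: {weakest})")
# Effective maturity: L1 (weakest dimension: audit)
```

Averaging would report this organization as roughly L2, which is exactly the false comfort the rule is designed to prevent.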

Take the full interactive assessment

This table is a simplified version. The MeetLoyd AI Governance Readiness Assessment provides a detailed, weighted evaluation across 25 criteria with a personalized report and recommendations.


The Path Forward

The maturity model isn't a scorecard — it's a roadmap. Each level is a stable plateau where the organization delivers value while building toward the next level. You don't need to reach Level 4 before deploying agents. You need to know you're at Level 1, have a plan to reach Level 3 in weeks (not years), and a path to Level 4 when regulation demands it.

Common transition patterns

L1 → L3 in 2-4 weeks (platform-assisted)

Deploy a managed agent platform with built-in identity, authorization, and audit. Skip Level 2 entirely — there's no value in partial governance. A good platform gives you Level 3 on day one.

L3 → L4 in 4-8 weeks (governance activation)

Enable compliance packs for your regulatory frameworks. Upgrade identity to cryptographic. Activate per-call authorization with default deny. Turn on hash-chained audit. The infrastructure was there from L3 — you're activating controls, not building them.

L4 → L5 when the ecosystem is ready

Cross-org federation requires the other organization to be at L4 too. Standards (SLIM, AGNTCY, OASF) are maturing. Early adopters are deploying federation bridges. Plan for it, but don't block on it.


Chapter Summary

The AI Governance Maturity Model provides five levels of increasing capability: Ad-hoc, Experimental, Managed, Governed, and Industrial. Most enterprises are at Level 1-2. Regulated industries need Level 4. The path from Level 1 to Level 3 can take weeks with the right platform. The path from Level 3 to Level 4 is primarily about activating governance controls that already exist in the infrastructure.

The next chapter deep-dives into the Five Pillars of AI Governance — Identity, Authorization, Verification, Audit, and Federation — the architectural foundations that make Level 4+ possible.