Research · Engineering & AI · 18 min read

The Identity Crisis: Your AI Agents Have No Identity and No Accountability

Your AI agents are calling APIs, reading databases, sending emails, and executing code. But who are they? 88% of organisations report confirmed or suspected AI agent security incidents. Only 21.9% treat agents as independent identity-bearing entities. The rest use shared API keys, generic tokens, or nothing at all. NIST just published a concept paper on agent identity. The IETF dropped a draft composing SPIFFE, WIMSE, and OAuth 2.0 into an agent authentication framework. Auth0, WorkOS, and Cloudflare are shipping agent-native auth. This is the engineering guide to giving your agents an identity — before someone else gives them yours.

Author: Abhishek Sharma · Fordel Studios

Your AI agent just booked a meeting on behalf of a customer. It accessed their calendar, read their email, checked their CRM record, and sent a confirmation. The customer did not authorise any of this. The agent used a shared API key that was provisioned six months ago by a developer who has since left the company. The key has full admin scope. There is no audit trail linking the agent’s actions to a specific user delegation. There is no way to revoke the agent’s access without revoking access for every other agent that shares the same key.

This is not a hypothetical scenario. It is the default state of AI agent deployments in 2026.

Gravitee’s State of AI Agent Security 2026 report surveyed organisations across the US and UK and found that large firms have already deployed an estimated 3 million AI agents, with plans for millions more. Of those agents, only 14.4% went live with full security and IT approval. An estimated 1.5 million agents are running without active monitoring or security controls. 88% of organisations reported confirmed or suspected security incidents involving AI agents in the past year. In healthcare, that number rises to 92.7%.

The root cause is not sophisticated. Agents do not have identities. They borrow credentials from humans, share API keys with each other, and operate in a security grey zone that would be unacceptable for any human user or traditional service.

  • 88% of organisations report AI agent security incidents — confirmed or suspected incidents in the past 12 months (Source: Gravitee State of AI Agent Security 2026)
  • 21.9% of teams treat agents as identity-bearing entities — the rest rely on shared API keys (45.6%), generic tokens (44.4%), or no authentication at all
  • 14.4% of deployed agents have full security approval — the remaining 85.6% went live without complete IT and security sign-off (Source: Gravitee)
···

The Authentication-Authorisation Gap

The industry has made real progress on authentication — proving that an agent is what it claims to be. OAuth 2.0 and 2.1 flows work. MCP servers support OAuth handshakes. Cloudflare’s Agents SDK integrates the complete OAuth flow directly. If you need an agent to prove its identity to a service, the plumbing exists.

Authorisation is where everything falls apart.

Authentication answers "who is this?" Authorisation answers "what can this entity do, right now, in this context?" For human users, authorisation is role-based, resource-scoped, and contextual. A junior developer can read the staging database but not the production one. An accountant can view financial reports but not edit them. An admin can do both, but only during business hours and only from a corporate network.

For AI agents, authorisation is typically: "here is a token with broad scopes, go do whatever you need to do." Once an OAuth access token is issued with a set of scopes, every action within those scopes proceeds unchecked until the token expires. An agent with an email:send scope authorised to send meeting notes can use that same scope to email every contact in the address book a different message. Each action is technically within scope. The framework treats them identically.
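A toy scope-only authoriser (hypothetical, not any real framework's API) makes the gap concrete: the delegated task and the injected one carry the same `email:send` scope, so the check cannot tell the intents apart.

```python
# Hypothetical scope-only authoriser: the only question it can ask is
# "does the token carry the required scope?"
def is_authorized(token_scopes: set, required_scope: str) -> bool:
    return required_scope in token_scopes

token_scopes = {"email:send", "calendar:read"}

# The delegated task: send meeting notes to one attendee.
assert is_authorized(token_scopes, "email:send")

# A prompt-injected task: email every contact in the address book.
# Same scope, same verdict -- a scope check cannot distinguish the two.
assert is_authorized(token_scopes, "email:send")
```

Nothing in the token records *which* email the user actually authorised; that missing context is exactly what Transaction Tokens (discussed below) are designed to carry.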

The gap is structural, not accidental. Traditional authorisation systems were designed for deterministic software — programs that execute the same way given the same inputs. AI agents are non-deterministic. They interpret instructions, make judgment calls, and take actions that their developers did not explicitly programme. A traditional API client will never decide to email the CEO. An AI agent might, if the prompt is ambiguous enough and the scope is broad enough.

82% of executives believe their existing policies protect them from unauthorised agent actions. Only 21% have actual visibility into what their agents can access, which tools they call, or what data they touch. The confidence gap is the vulnerability.
Gravitee State of AI Agent Security 2026

The Standards Are Arriving: NIST, IETF, and the Agent Identity Stack

The standards bodies have recognised the gap. In February 2026, NIST’s National Cybersecurity Center of Excellence published a concept paper titled "Accelerating the Adoption of Software and AI Agent Identity and Authorization." The comment period is open through April 2, 2026. This is not aspirational guidance. It is a concrete proposal for a demonstration project that will produce a practical guide using commercially available technologies.

What NIST Is Proposing

NIST’s concept paper identifies four critical capabilities that agent deployments need:
  • Identification — distinguishing AI agents from human users and managing metadata to control the range of agent actions
  • Authorization — applying OAuth 2.0/2.1 extensions and policy-based access control to define agent rights
  • Access Delegation — linking user identities to AI agents to maintain accountability
  • Logging and Transparency — linking agent actions to their non-human entity for effective audit trails

The standards under consideration include the Model Context Protocol (MCP), OAuth 2.0/2.1, OpenID Connect, SPIFFE/SPIRE, SCIM, and NGAC (Next Generation Access Control). NIST will also apply SP 800-207 (Zero Trust Architecture), SP 800-63-4 (Digital Identity Guidelines), and NISTIR 8587 (Protecting Tokens and Assertions from Forgery, Theft, and Misuse).

The IETF Draft: AIMS Framework

On March 2, 2026, the IETF published draft-klrc-aiagent-auth-00 — a 26-page framework composing WIMSE (Workload Identity in Multi-System Environments), SPIFFE, and OAuth 2.0 into a coherent agent authentication and authorisation model. The framework is called AIMS: Agent Identity Management System.

The key insight of the AIMS framework is that no new protocols are needed. The existing standards stack — SPIFFE for workload identity, OAuth 2.0 for delegation, Transaction Tokens for context binding — can be composed to handle agent authentication. Transaction Tokens (draft-ietf-oauth-transaction-tokens-08) are particularly important: they are short-lived, signed JWTs that bind user identity, workload identity, and authorisation context to a specific transaction, with lifetimes measured in seconds to minutes.

  • SPIFFE/SPIRE — Assigns cryptographic identities to workloads, including agents, and enables mTLS for agent-to-service communication. Already used at scale by Uber for billions of daily attestations. Status: mature — CNCF graduated project, production-ready.
  • OAuth 2.0/2.1 — Handles delegation of authority from user to agent; defines scopes and consent flows. The core mechanism for "this agent acts on behalf of this user." Status: mature — extensions for agent-specific flows are in IETF draft stage.
  • Transaction Tokens — Short-lived JWTs that bind user identity, agent identity, and authorisation context to a specific transaction, limiting the blast radius of compromised tokens. Status: IETF draft (draft-ietf-oauth-transaction-tokens-08), in active development.
  • WIMSE — Defines how workload identities work across multi-system environments; the conceptual layer above SPIFFE. Status: IETF working group, specification in development.
  • MCP (Model Context Protocol) — Defines how agents discover and invoke tools. Authentication is baked into the spec via OAuth 2.1; authorisation is left to implementers. Status: active — supported by Anthropic, Cloudflare, and major IDPs.
···

Why Shared API Keys Are a Ticking Time Bomb

45.6% of teams authenticate agents using shared API keys. This is the most common pattern and the most dangerous. A shared API key gives every agent that holds it identical permissions, identical access, and zero individual accountability.

What Shared API Keys Cannot Do
  • Attribute a specific action to a specific agent instance — every agent looks identical to the service
  • Revoke access for one agent without revoking access for all agents sharing the key
  • Apply per-agent rate limits, scope restrictions, or permission boundaries
  • Maintain an audit trail that distinguishes between agents or between an agent and its operator
  • Enforce least privilege — every agent with the key has the key’s full permissions
  • Detect when an agent is compromised — its traffic is indistinguishable from legitimate agent traffic

In a multi-agent system — where a triage agent, a billing agent, and an escalation agent share a common API key to a CRM — a prompt injection attack against the triage agent gives the attacker the CRM permissions of every agent in the system. There is no lateral movement required. The key is the same. The permissions are the same. The blast radius is total.
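A toy credential store (all names hypothetical) shows the difference in revocation semantics between the two models:

```python
# Shared-key model: one credential, three holders, zero attribution.
shared = {"key": "sk_shared", "holders": {"triage", "billing", "escalation"}}

# Per-agent model: each agent has its own credential and its own scopes.
per_agent = {
    "triage":     {"key": "sk_triage",     "scopes": {"crm:read"}},
    "billing":    {"key": "sk_billing",    "scopes": {"crm:read", "invoice:write"}},
    "escalation": {"key": "sk_escalation", "scopes": {"crm:read", "ticket:write"}},
}

# Compromise of the triage agent under the shared key: revoking the key
# cuts off every agent at once, and the attacker had all of their access.
compromised = shared["holders"]
assert compromised == {"triage", "billing", "escalation"}

# Under per-agent credentials, revocation and blast radius are scoped
# to the one compromised identity.
del per_agent["triage"]
assert "billing" in per_agent and "escalation" in per_agent
```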

Compare this to the SPIFFE model, where each agent workload receives its own X.509 SVID (SPIFFE Verifiable Identity Document). Each agent has a unique cryptographic identity. Each identity can be independently scoped, rate-limited, monitored, and revoked. When one agent is compromised, the blast radius is limited to that agent’s permissions. Uber processes billions of SPIFFE attestations daily through SPIRE. This is not experimental infrastructure. It is production-proven at internet scale.

An agent with a shared API key is not a security principal. It is an unattributable capability bearer. You cannot audit what you cannot distinguish. You cannot revoke what you cannot identify.

The Platform Landscape: Who Is Shipping Agent-Native Auth

A new category of identity infrastructure has emerged specifically for AI agents. These are not generic IAM tools with an "agent mode" bolted on. They are purpose-built systems that understand delegation, tool-level permissions, and the non-deterministic nature of agent behaviour.

  • Auth0 for AI Agents — Approach: generally available; extends Auth0’s identity platform with agent-specific authentication and fine-grained authorisation for RAG and tool access. Key capability: users authenticate once, agents inherit scoped permissions, RAG queries respect document-level access control, and agent actions are traceable to the delegating user. Best for: teams already using Auth0; B2C and B2B applications where agents act on behalf of authenticated users.
  • WorkOS — Approach: MCP server integration with enterprise SSO, SCIM provisioning, and fine-grained authorisation; agents authenticate through the same enterprise identity stack as human users. Key capability: define scopes and permissions that map directly to MCP tools, present consent pages to users, and enforce permissions so agents can only invoke permitted tools. Best for: B2B SaaS with enterprise customers; teams that need SSO + SCIM + agent auth in one stack.
  • Permit.io — Approach: Four-Perimeter AI Access Control Framework — prompt filtering, RAG data protection, external access control, and response enforcement. Key capability: policies enforced at four layers (before the prompt reaches the LLM, during retrieval, during tool calling, and on model output), with an open-source policy engine (OPAL) and real-time updates. Best for: teams that need policy-as-code for agents; complex authorisation logic across multiple perimeters.
  • Cloudflare Agents SDK — Approach: complete OAuth 2.1 flow built into the Agents SDK; integrates with Auth0, WorkOS, and Stytch for authentication; Durable Objects for agent state. Key capability: build remote MCP servers with authentication baked in, no custom auth flow implementation needed; free tier for Durable Objects. Best for: teams building MCP servers on Cloudflare Workers; developers who want auth out of the box.
  • Nango — Approach: pre-built authentication for 700+ APIs; SOC 2 Type II, GDPR, and HIPAA compliant; tokens isolated per user per integration with automatic refresh. Key capability: users connect accounts once, agents act within approved scopes, token refresh is automatic, and webhooks fire when credentials break. Best for: agents that need to call many third-party APIs; integration-heavy products with multi-tenant requirements.

Microsoft’s Agent 365

Microsoft announced Agent 365 at RSAC 2026, generally available May 1, 2026. It is a control plane for agents that gives IT, security, and business teams visibility into agent behaviour, identity, and permissions. It extends Microsoft Entra (their identity platform) and Defender to cover agentic workloads. Microsoft also published an updated Zero Trust for AI reference architecture and announced purpose-built capabilities in Defender and Purview for securing agent foundations.

For organisations already invested in the Microsoft identity stack (Entra ID, Defender, Purview), Agent 365 provides the natural extension. For organisations running multi-cloud or non-Microsoft agent stacks, the vendor-neutral standards (SPIFFE, OAuth 2.1, Transaction Tokens) remain the interoperable path.

···

Zero Trust for Agents: The Architecture That Actually Works

Zero Trust was designed for a world where network perimeters do not protect you. AI agents live in a world where there are no perimeters at all. An agent can call any API, access any tool, and operate across cloud providers, SaaS platforms, and on-premise systems in a single workflow. Zero Trust is not just applicable to agents — it is the only security model that makes sense.

Applying Zero Trust to Agent Architectures

01
Give every agent a unique, verifiable identity

Use SPIFFE/SPIRE to assign each agent instance a unique X.509 SVID. This is the foundation. Without unique identity, you cannot attribute actions, scope permissions, or limit blast radius. If you are running on Kubernetes, SPIRE integrates with service mesh (Istio, Linkerd) for mTLS between agents and services. If you are running serverless agents, Cloudflare’s Agents SDK and Auth0’s agent identity provide equivalent identity binding.

02
Implement per-request authorisation, not per-session

A session-scoped OAuth token grants permissions for the token’s lifetime. A per-request model evaluates authorisation at every tool call, every API invocation, every database query. Transaction Tokens (short-lived JWTs bound to a specific transaction) are the mechanism. The overhead is measurable but the blast radius of a compromised token drops from "everything the agent can do for the next hour" to "this one specific action."
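A minimal sketch of the per-request model, with an in-memory delegation table standing in for the policy engine and token service:

```python
# Hypothetical per-request gate: authorisation is re-evaluated at every tool
# call, so revoking a delegation takes effect on the very next action.
DELEGATIONS = {("user:alice", "agent:scheduler"): {"calendar:create_event"}}

def authorize(user: str, agent: str, action: str) -> bool:
    return action in DELEGATIONS.get((user, agent), set())

def call_tool(user, agent, action, tool_fn, **params):
    if not authorize(user, agent, action):  # checked on EVERY call, not at session start
        raise PermissionError(f"{agent} denied {action} for {user}")
    return tool_fn(**params)

ok = call_tool("user:alice", "agent:scheduler", "calendar:create_event",
               lambda title: f"created {title}", title="standup")
assert ok == "created standup"

# Revoke mid-session: no waiting for a session token to expire.
DELEGATIONS.clear()
```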

03
Enforce delegation chains, not direct access

An agent should never hold long-lived credentials to a service. Instead, the agent should present a delegation chain: "User X authorised Agent Y to perform Action Z on Resource W." The service validates the entire chain — user identity, agent identity, delegation scope, and action request — before granting access. This is how OAuth 2.0 token exchange works. The IETF AIMS framework formalises this pattern for agent workloads.
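A sketch of chain validation, with a plain dict standing in for the signed delegation assertion a real service would verify cryptographically:

```python
# Hypothetical delegation assertion: "User X authorised Agent Y to perform
# Action Z on Resource W." The service checks every link before acting.
def validate_delegation(assertion: dict, request: dict, trusted_users: set) -> bool:
    return (
        assertion["user"] in trusted_users                # user identity is known
        and assertion["agent"] == request["agent"]        # the caller is the delegate
        and request["action"] in assertion["actions"]     # action is within the grant
        and request["resource"] == assertion["resource"]  # resource matches the grant
    )

grant = {"user": "user:alice", "agent": "agent:scheduler",
         "actions": {"calendar:create_event"}, "resource": "cal/alice"}

assert validate_delegation(grant,
    {"agent": "agent:scheduler", "action": "calendar:create_event", "resource": "cal/alice"},
    trusted_users={"user:alice"})

# The same grant does not authorise a different action on the same resource.
assert not validate_delegation(grant,
    {"agent": "agent:scheduler", "action": "email:send", "resource": "cal/alice"},
    trusted_users={"user:alice"})
```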

04
Log every tool call with full identity context

Every action an agent takes should be logged with: the agent’s unique identity (SPIFFE ID or equivalent), the user who delegated authority, the specific tool or API invoked, the parameters passed, and the result returned. This is not optional. Without this, you cannot investigate incidents, demonstrate compliance, or even know what your agents are doing. NIST’s concept paper explicitly calls out "Logging and Transparency" as a core capability.

05
Implement dynamic policy evaluation

Static role-based policies cannot handle agents. An agent’s authorisation should depend on context: what time is it, what has the agent done in this session so far, how many resources has it accessed, does this action pattern match known abuse signatures? Permit.io’s OPAL (Open Policy Administration Layer) and Open Policy Agent (OPA) enable real-time policy evaluation that can adapt to agent behaviour. An InfoQ article details building a least-privilege agent gateway using MCP, OPA, and ephemeral runners for exactly this pattern.
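A minimal sketch of context-dependent evaluation — the thresholds and rules here are illustrative, not OPA or OPAL syntax:

```python
from datetime import datetime, timezone

# Hypothetical context-aware policy: the decision depends on time of day and
# on what the agent has already done this session, not just on a static role.
def evaluate(action: str, session_actions: list, now: datetime) -> bool:
    if not (9 <= now.hour < 18):  # outside business hours: deny mutating actions
        if action.endswith(":write") or action.endswith(":send"):
            return False
    if session_actions.count(action) >= 50:  # crude abuse-pattern guard
        return False
    return True

business_hours = datetime(2026, 3, 10, 14, 0, tzinfo=timezone.utc)
after_hours    = datetime(2026, 3, 10, 23, 0, tzinfo=timezone.utc)

assert evaluate("email:send", [], business_hours)
assert not evaluate("email:send", [], after_hours)            # time context denies
assert not evaluate("crm:read", ["crm:read"] * 50, business_hours)  # behaviour denies
```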

The MCP Authorisation Problem

The Model Context Protocol specifies OAuth 2.1 for authentication. An MCP client (the agent) authenticates with an MCP server (the tool provider) using a standard OAuth flow. This works. The problem is what happens after authentication.

MCP servers expose tools — functions an agent can call. Once authenticated, the agent can discover and invoke any tool the server exposes, subject only to the OAuth scopes granted. There is no tool-level permission model in the MCP specification itself. If an agent has a valid token with appropriate scopes, it can call any tool.

This is where platforms like WorkOS and Gravitee add value. WorkOS maps OAuth scopes directly to MCP tools, so you can define that "Agent X can call tool:read-calendar but not tool:send-email." Gravitee’s MCP Gateway provides an authorisation layer that sits between the agent and the MCP server, enforcing policies before tool calls reach the server.

MCP Authorisation Checklist
  • Map OAuth scopes to individual MCP tools, not to broad categories — "calendar:read" is better than "calendar:all"
  • Implement consent flows that show users exactly which tools an agent will access and what actions it can take
  • Log every tool call with the full delegation context: who authorised it, which agent executed it, what parameters were passed
  • Set per-tool rate limits — an agent that suddenly makes 1,000 email sends in a minute should trigger alerts even if each send is within scope
  • Use an MCP gateway (Gravitee, Cloudflare) to enforce policies centrally rather than relying on each MCP server to implement its own authorisation
  • Rotate and expire agent credentials aggressively — short-lived tokens are worth the operational overhead
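Two of the checklist items — per-tool scope mapping and per-tool rate limits — can be sketched gateway-side. Tool names, scope names, and limits are illustrative:

```python
import time
from collections import deque

# Hypothetical gateway-side enforcement: map scopes to individual tools and
# rate-limit each tool independently with a sliding window.
SCOPE_FOR_TOOL = {"read-calendar": "calendar:read", "send-email": "email:send"}

class ToolRateLimiter:
    def __init__(self, max_calls, window_s):
        self.max_calls, self.window_s = max_calls, window_s
        self.calls = {}  # tool -> deque of call timestamps

    def allow(self, tool, now=None):
        now = time.monotonic() if now is None else now
        q = self.calls.setdefault(tool, deque())
        while q and now - q[0] > self.window_s:
            q.popleft()                  # drop calls outside the window
        if len(q) >= self.max_calls:
            return False                 # alert-worthy even when each call is in scope
        q.append(now)
        return True

def may_call(tool, granted_scopes, limiter):
    return SCOPE_FOR_TOOL.get(tool) in granted_scopes and limiter.allow(tool)

limiter = ToolRateLimiter(max_calls=3, window_s=60)
scopes = {"calendar:read"}
assert may_call("read-calendar", scopes, limiter)
assert not may_call("send-email", scopes, limiter)  # scope never granted
```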
···

Multi-Agent Identity: When Agents Call Other Agents

Single-agent identity is hard. Multi-agent identity is an order of magnitude harder. In multi-agent architectures, agents delegate tasks to other agents. An orchestrator agent might invoke a research agent, which calls a data agent, which queries a database. The delegation chain can be three, four, five levels deep.

At each level, the system needs to answer: who initiated this chain? What permissions flow through the chain? If the data agent is compromised, how far back does the blast radius reach? Can the research agent re-delegate permissions it received from the orchestrator to an agent the orchestrator did not authorise?

The IETF Transaction Token draft addresses this directly. A Transaction Token encodes the entire delegation chain — the original user, every agent in the chain, and the cumulative scope restrictions — in a single, short-lived JWT. Each agent in the chain can verify the full provenance of the request without making additional network calls. Permissions can only narrow (never widen) as the chain extends. If the orchestrator has scopes A, B, and C, it can delegate A and B to the research agent, but the research agent cannot grant itself scope C.
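The narrowing rule is a one-line invariant. This sketch uses plain sets where a real implementation would verify signed scope claims at each hop:

```python
# Monotonic scope narrowing along a delegation chain:
# each hop may only pass on a subset of what it received.
def delegate(parent_scopes: set, requested: set) -> set:
    if not requested <= parent_scopes:
        raise PermissionError(f"cannot widen delegation: {requested - parent_scopes}")
    return requested

orchestrator = {"A", "B", "C"}
research = delegate(orchestrator, {"A", "B"})  # allowed: subset of parent
data = delegate(research, {"A"})               # allowed: narrower still
assert data == {"A"}

try:
    delegate(research, {"C"})  # denied: C was never delegated to research
    widened = True
except PermissionError:
    widened = False
assert not widened
```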

Regulated Industries: Where Identity Is Not Optional

In healthcare (HIPAA), finance (SOX, PCI DSS), and insurance (state regulations, NAIC model acts), AI agent identity is not a best practice. It is a compliance requirement. Regulators require that every action on protected data is attributable to a specific identity. A shared API key does not satisfy this requirement. An agent operating without a verifiable identity chain is a compliance violation waiting to be discovered.

  • HIPAA — Requirement: access to PHI must be attributable to an identified user or system, and audit logs must record who accessed what, when, and why. Agent implication: every agent accessing patient data must have a unique identity, the delegation chain (patient → user → agent) must be logged, and shared API keys are non-compliant.
  • SOX / PCI DSS — Requirement: financial data access requires individual accountability, and separation of duties must be enforced. Agent implication: agents handling financial data need distinct identities with per-agent scope restrictions; an agent that reads transactions must not be the same identity that approves them.
  • GDPR / CCPA — Requirement: the right to erasure requires knowing exactly what data an entity holds about a subject, and data processing must have a lawful basis. Agent implication: agents that process personal data must log what they accessed; without agent identity, you cannot respond to a data subject access request for data an agent touched.
  • EU AI Act — Requirement: high-risk AI systems require transparency, human oversight, and accountability mechanisms. Agent implication: agent identity is foundational to accountability; without knowing which agent took which action, you cannot demonstrate oversight or transparency.

In healthcare specifically, Gravitee’s report found that 92.7% of organisations reported AI agent security incidents. This is the highest rate of any industry surveyed. The combination of sensitive data, strict regulation, and immature agent identity practices creates the highest-risk environment for agent deployment.

···

What to Build This Week

If your agents are currently using shared API keys or generic tokens — and statistically, they probably are — here is the minimum viable identity implementation that moves you from "no accountability" to "auditable and scoped."

Minimum Viable Agent Identity

01
Assign a unique identifier to every agent instance

Every agent that runs should have a unique ID — not a shared service account, not a generic "ai-agent" label. If you are on Kubernetes, deploy SPIRE and issue SPIFFE SVIDs. If you are serverless, use your platform’s workload identity (Cloudflare’s Agents SDK, AWS IAM roles per Lambda). If you are running agents on bare metal, generate a UUID at agent startup and register it in your identity provider. The key requirement: you must be able to answer "which specific agent instance performed this action."
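A bare-metal sketch of the last option, with an in-memory dict standing in for your identity provider's registry:

```python
import socket
import time
import uuid

# Minimal sketch: mint a unique instance identity at agent startup and
# register it before the agent does any work.
REGISTRY = {}  # agent_id -> metadata; stands in for a real identity provider

def register_agent(role: str) -> str:
    agent_id = f"agent:{role}:{uuid.uuid4()}"
    REGISTRY[agent_id] = {
        "role": role,
        "host": socket.gethostname(),
        "started_at": time.time(),
    }
    return agent_id

a = register_agent("scheduler")
b = register_agent("scheduler")
assert a != b  # two instances of the same role are distinct security principals
```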

02
Replace shared API keys with per-agent credentials

Rotate every shared API key that agents currently use. Issue per-agent credentials with the narrowest possible scope. Use HashiCorp Vault or your cloud provider’s secrets manager to issue short-lived credentials that auto-expire. For third-party API access, Nango provides pre-built OAuth flows for 700+ APIs with per-user, per-integration token isolation. The goal: no two agents should ever share the same credential.
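A sketch of the issuance pattern, with an in-memory store standing in for Vault or a cloud secrets manager; names and the 15-minute TTL are illustrative:

```python
import secrets
import time

ISSUED = {}  # token -> metadata; stands in for your secrets manager

def issue_credential(agent_id: str, scopes: set, ttl_s: int = 900) -> str:
    """Issue a unique, narrowly scoped, auto-expiring credential for one agent."""
    token = secrets.token_urlsafe(32)
    ISSUED[token] = {"agent": agent_id, "scopes": scopes,
                     "expires_at": time.time() + ttl_s}
    return token

def check_credential(token: str, scope: str) -> bool:
    cred = ISSUED.get(token)
    return bool(cred and time.time() < cred["expires_at"] and scope in cred["scopes"])

t1 = issue_credential("agent:triage", {"crm:read"})
t2 = issue_credential("agent:billing", {"crm:read", "invoice:write"})
assert t1 != t2                                   # no two agents share a credential
assert check_credential(t1, "crm:read")
assert not check_credential(t1, "invoice:write")  # narrowest possible scope
```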

03
Implement delegation logging

For every action an agent takes, log: the user who delegated authority (or "system" for autonomous agents), the agent’s unique ID, the tool or API invoked, the parameters passed, and the timestamp. This does not require a complex observability stack. A structured JSON log per agent action, shipped to your existing logging infrastructure (Datadog, Grafana, CloudWatch), is sufficient to start. The goal: you must be able to reconstruct the full chain of "who authorised what" for any agent action.
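The log entry can be a single structured JSON object per action. Field names here are illustrative, but they cover every item the list above requires:

```python
import json
import time
import uuid

# One JSON log line per agent action, carrying the full delegation context.
def log_agent_action(delegator, agent_id, tool, params, result_status):
    entry = {
        "event_id": str(uuid.uuid4()),
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "delegator": delegator,  # "system" for autonomous agents
        "agent_id": agent_id,    # the unique instance ID from step 01
        "tool": tool,
        "params": params,
        "result": result_status,
    }
    print(json.dumps(entry, sort_keys=True))  # ship via your existing log pipeline
    return entry

e = log_agent_action("user:alice", "agent:scheduler:7f3a",
                     "calendar.create_event", {"title": "standup"}, "ok")
```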

04
Add consent and scope boundaries for user-facing agents

If agents act on behalf of users, implement a consent flow. Before an agent accesses a user’s calendar, email, or documents, show the user exactly what the agent will access and get explicit approval. Auth0’s agent SDK and WorkOS both provide consent page implementations. Map MCP tool names to human-readable descriptions in the consent flow so users understand what they are approving.

05
Plan for per-request authorisation

Session-scoped tokens are the pragmatic starting point, but per-request authorisation is the target architecture. Evaluate Transaction Tokens (IETF draft) and policy engines (OPA, Permit.io’s OPAL, Cedar) for your stack. The migration path is: shared keys → per-agent tokens → per-request evaluation. Most teams are at step one. Getting to step two this quarter and step three this year is a realistic and high-impact roadmap.

···

Where Fordel Builds

We architect and build production AI agent systems for SaaS, finance, healthcare, and insurance clients. Agent identity is a first-class concern in every architecture we design — not a security afterthought bolted on after deployment. We implement SPIFFE-based workload identity for Kubernetes deployments, OAuth 2.1 delegation chains for user-facing agents, per-tool MCP authorisation for tool-calling workflows, and compliance-grade audit logging for regulated industries.

If your agents are running on shared API keys, if you cannot attribute agent actions to specific identities, or if you are deploying in a regulated industry and your agent identity story is "we will figure it out later" — that is the problem we solve. Reach out.
