Your Agent Has Credentials, Not Authority
Most systems answer the wrong question about authority.
They ask: “Is this identity permitted?”
They do not ask: “Is this action authorised under this delegation?”
The first question checks what a system can do. The second checks what it should do right now — under whose authority, within what scope, until when. The gap between these two questions is where authority is misapplied. For autonomous AI systems, that gap is not an edge case — it is the default.
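The gap between the two questions can be made concrete in a few lines. This is a minimal sketch, not a real authorisation system: the table names, the identity `report-agent`, and the principal `alice` are all invented for illustration.

```python
from datetime import datetime, timezone

# Illustrative tables. CREDENTIALS answers "is this identity permitted?";
# DELEGATIONS answers "is this action authorised under a live grant?"
CREDENTIALS = {"report-agent": {"read:customers", "delete:customers"}}

DELEGATIONS = [
    # (principal, action, target, expiry)
    ("alice", "read", "customers",
     datetime(2099, 1, 1, tzinfo=timezone.utc)),
]

def identity_permitted(identity: str, permission: str) -> bool:
    # The usual question: does this identity hold the credential?
    return permission in CREDENTIALS.get(identity, set())

def action_delegated(action: str, target: str) -> bool:
    # The missing question: is this action, on this target, covered by an
    # unexpired grant from a named principal?
    now = datetime.now(timezone.utc)
    return any(a == action and t == target and now < exp
               for _, a, t, exp in DELEGATIONS)
```

Here the identity check passes for `delete:customers` because the credential exists, while the delegation check fails because no principal ever granted that action. That difference is the gap the rest of this piece is about.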
The confused deputy
This failure mode is not new. In 1988, Norman Hardy described a system that had credentials but no understanding of authority.
A compiler service needed to write to a system billing file to log compilation charges — a legitimate part of its operation. A user invoked the compiler but specified the billing file as the compilation output destination. The compiler wrote to the file. It had permission to do so. The user did not.
The system could not distinguish between the compiler writing to the billing file for its own purposes and the compiler writing to the billing file on behalf of a user who should never have had access. The compiler’s infrastructure permission was treated as authority for any action, regardless of who requested it or why.
Hardy called this the confused deputy. The deputy — the compiler — was confused about whose authority it was exercising.
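The mechanics of the confusion can be reduced to a sketch. The paths and permission table below are invented for the example; the point is only which identity the check consults.

```python
# Illustrative permission table: the compiler service may write the billing
# file for its own logging; the user may not.
PERMISSIONS = {
    "compiler": {"/sys/billing", "/tmp/output"},
    "user": {"/tmp/output"},
}

def compile_to(output_path: str) -> str:
    # Confused deputy: the check consults the compiler's own permissions,
    # regardless of who supplied output_path.
    if output_path in PERMISSIONS["compiler"]:
        return f"wrote compiled output to {output_path}"
    raise PermissionError(output_path)

def compile_to_for(requester: str, output_path: str) -> str:
    # The deputy checks the requester's authority for the user-supplied
    # designator, not merely its own.
    if output_path not in PERMISSIONS.get(requester, set()):
        raise PermissionError(f"{requester} may not write {output_path}")
    return f"wrote compiled output to {output_path}"
```

In the first function, a user-supplied path of `/sys/billing` succeeds because the compiler can write it. In the second, the same request fails because the user cannot.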
Why this matters for autonomous AI systems
Autonomous AI systems are deputies by nature. They receive instructions, reason about them, and execute actions through tools and services that carry their own infrastructure privileges — database credentials, API keys, service accounts, network access.
Without explicit delegation, the system derives its authority from what it can do. A tool connector with database admin credentials will execute any query the model requests, because it is authorised at the infrastructure level. Whether the requesting workflow should have that authority is a question no one asks.
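The tool-connector case looks like this in miniature. The flag and function names are hypothetical; the contrast is between a connector bounded only by its infrastructure credential and one bounded by the scope of the requesting workflow's grant.

```python
ADMIN_CAN_EXECUTE = True  # infrastructure-level credential: admin on everything

def run_query_ambient(sql: str) -> str:
    # Ambient authority: if the connector's credential permits it, it runs,
    # whatever the model asked for.
    if ADMIN_CAN_EXECUTE:
        return f"executed: {sql}"
    raise PermissionError(sql)

def run_query_delegated(sql: str, allowed_statements: set) -> str:
    # Delegated authority: the workflow's grant, not the connector's
    # credential, bounds what may run.
    statement = sql.strip().split()[0].upper()
    if statement not in allowed_statements:
        raise PermissionError(f"{statement} not within delegated scope")
    return f"executed: {sql}"
```

A workflow delegated only `SELECT` can still read, but a `DROP` request fails at the delegation check rather than succeeding on the admin credential.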
This is not a new failure mode. It is Hardy’s 1988 failure mode — reproduced at scale.
What delegation requires
Delegation makes authority explicit by constraining it along four dimensions. Rather than permission being inferred from identity or infrastructure credentials, each delegation carries explicit properties.
Scoped. The delegation specifies what actions are authorised and against what targets. A delegation to “read customer records” does not authorise “delete customer records”. The scope is the boundary of authority, not the boundary of capability.
Time-bound. The delegation has an expiry. Authority is not permanent. A delegation granted for a specific task expires when the task completes or the time bound elapses — whichever comes first.
Revocable. The delegation can be withdrawn before it expires. Revocation must be verifiable — in practice, this means cryptographically checkable with freshness bounds. If revocation status cannot be confirmed, the delegation is not valid.
Chain-traceable. Every delegation traces back to a human principal through a chain of explicit grants. Each link in the chain narrows scope — a delegator cannot grant broader authority than they themselves hold. The chain is the provenance of authority.
This is capability security applied to autonomous systems. Authority is not a property of identity. It is a property of explicit, transferable, and constrained artefacts.
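The four properties can be sketched as a single data structure with one validity check. The field names and the revocation set are illustrative stand-ins, not a real protocol; in practice revocation status would be a cryptographically verifiable, freshness-bounded lookup.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class Delegation:
    grant_id: str
    delegator: str                         # who granted this link
    actions: frozenset                     # scoped: which actions
    targets: frozenset                     # scoped: against which targets
    expires: datetime                      # time-bound: until when
    parent: Optional["Delegation"] = None  # chain-traceable: previous link

REVOKED = set()  # stand-in for a verifiable revocation list

def authorises(d, action: str, target: str, now=None) -> bool:
    now = now or datetime.now(timezone.utc)
    # Walk the chain back towards the human principal at the root. Every
    # link must be live, unrevoked, and broad enough to cover the action:
    # a delegator cannot grant more authority than they themselves hold.
    while d is not None:
        if d.grant_id in REVOKED or now >= d.expires:  # revocable, time-bound
            return False
        if action not in d.actions or target not in d.targets:  # scoped
            return False
        d = d.parent
    return True
```

Because the check walks every link, a child delegation scoped to `read` cannot authorise `delete` even if its parent could, and revoking any link in the chain invalidates everything delegated beneath it.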
Ambient authority
The opposite of explicit delegation is ambient authority — authority that is available simply because the system exists in an environment where it is permitted to act. Infrastructure credentials, service accounts, shared API keys, inherited permissions.
Ambient authority is the default in most autonomous AI systems. The system’s authority is whatever the deployment environment allows. No delegation limits it. No scope constrains it. No expiry bounds it.
This is how confused deputies are manufactured. Not through malice or misconfiguration, but through the absence of explicit delegation. When every action is implicitly authorised by infrastructure privileges, no action is specifically authorised by a governance decision.
The structural consequence
Without delegation, there is no answer to the question governance exists to answer:
Under whose authority was this action taken?
The system has credentials. It can authenticate. It can prove its identity. But identity tells you who acted. Delegation tells you why they were permitted to act — under whose authority, within what scope, with what constraints, until when.
An autonomous AI system with credentials but no delegation is a deputy that cannot distinguish its own authority from anyone else’s. It will act on whatever it is asked to do, constrained only by what the infrastructure happens to permit.
Authority that is not explicitly delegated is not authority. It is assumption — and assumption is the root of every confused deputy.