Authority Is Not Transitive
You tell a system: “Process this insurance claim.”
The system reads the customer’s full medical history. It contacts an external fraud-assessment service. It transmits a subset of that medical data to the assessor. The assessor returns a risk score. The system uses it to adjust the payout.
Question: who authorised the transmission of medical records to a third party?
You did not. You said “process this claim.” The system decided what that required — which records to access, which services to contact, what data to send where. You authorised a goal. The system chose the actions.
This distinction is easy to miss. And missing it is the most common gap in how people think about AI systems.
The assumption
Here is how most people reason about authority in AI systems:
A human configured the workflow. A human triggered the process. A human approved the task. Therefore, everything the system does is covered by that human’s authority.
It feels right. But it collapses two different kinds of authority into one.
Two kinds of authority
There is a difference between:
- the authority to initiate — “process this claim”, “sync this data”, “handle this request”;
- the authority to act — read this record, call this API, send this data to this external system, in this order, at this moment.
The first is a statement of intent. It names a desired outcome.
The second is a series of concrete operations with real consequences. Each one touches a real system. Each one may cross a boundary — between internal and external, between sensitive and public, between what was intended and what was not.
These operations are not listed in the original instruction. They are determined by the system at runtime. The caller did not specify them. In many cases, the caller cannot enumerate them.
The step nobody specified
Between instruction and execution sits a transformation:
intent → actions
If this transformation were fully specified — if every action were defined in advance — the system would be a script. There would be no need for orchestration, reasoning, or language models.
The reason these systems are valuable is precisely because they determine how to achieve a goal. They interpret. They adapt. They choose.
But the moment a system chooses how to act, it is no longer relaying the caller’s authority.
It is deciding how authority is applied.
That transformation is never explicitly authorised. It is resolved at runtime, by the system itself.
Nothing in the system evaluates whether the transformation itself produces only authorised actions.
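To make that absence concrete, here is a minimal sketch of the intent → actions transformation. All names here (plan, perform, execute) and the action shapes are illustrative assumptions, not any real framework's API:

```python
# A minimal sketch of the intent -> actions transformation.
# Every name and field below is illustrative, not from a real framework.

def plan(intent: str) -> list[dict]:
    """Resolve an intent into concrete actions at runtime.

    In a real system, a planner or language model makes these choices;
    the caller never sees the resulting list in advance.
    """
    if intent == "process claim":
        return [
            {"op": "read",  "target": "customer_record", "scope": "all fields"},
            {"op": "call",  "target": "fraud_assessor",  "payload": "medical subset"},
            {"op": "write", "target": "payout",          "basis": "risk score"},
        ]
    return []

def perform(action: dict) -> str:
    # Stand-in for the real side effect: a DB read, an API call, a record update.
    return f"{action['op']} -> {action['target']}"

def execute(intent: str) -> list[str]:
    # Note what is absent: nothing between plan() and perform() asks
    # whether each chosen action is within the caller's authority.
    return [perform(a) for a in plan(intent)]
```

The caller supplies one string; the system emits three consequential operations. The transformation between them is exactly the unchecked step the text describes.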
Walk through it
Stay with the insurance claim.
You say: “Process this claim.”
Step 1. The system reads the customer’s full record — not just the claim, but contact details, prior claims, medical notes. You did not specify which fields to read. The system chose.
Step 2. The system calls an external fraud-assessment service. You did not name this service. The system selected it based on its configuration and the claim type.
Step 3. The system transmits a subset of the customer’s medical data to the assessor. You did not authorise this specific transmission. The system determined it was necessary.
Step 4. The assessor returns a risk score. The system uses it to reduce the payout. You did not define this decision logic. The system applied it.
Each step is technically permitted by the infrastructure: the system holds API keys, database credentials, network access.
But that is capability. Not authority.
You authorised “process this claim.” You did not authorise “send this patient’s medical history to this third-party service at 14:23 on a Tuesday.”
The system determined that.
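The four steps above can be rendered as a trace. The field names are assumptions for the sketch; the point is the final field: none of these actions was specified by the caller.

```python
# The walk-through as data. Purely illustrative; field names are assumptions.

trace = [
    {"step": 1, "action": "read full customer record",         "specified_by_caller": False},
    {"step": 2, "action": "call external fraud assessor",      "specified_by_caller": False},
    {"step": 3, "action": "transmit medical data to assessor", "specified_by_caller": False},
    {"step": 4, "action": "reduce payout from risk score",     "specified_by_caller": False},
]

# The caller's entire contribution:
caller_input = "process this claim"

# Every consequential action was chosen by the system:
chosen_by_system = [t["action"] for t in trace if not t["specified_by_caller"]]
```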
The principle
Authority is not transitive.
It does not propagate unchanged from intent to execution. At each step, intent is interpreted and actions are selected. Each interpretation introduces judgement — a point where what was authorised can diverge from what is executed.
This is not a theoretical concern. It is the structural reality of any system that determines its own actions.
Why identity does not solve it
Identity and access management answers one question:
“Who is allowed to start this process?”
It does not answer:
“Is this specific action — chosen by the system, not by the caller — within the scope of what was actually authorised?”
The first governs initiation. The second governs consequence. Nearly every system today governs only the first.
The result: the system has a key that opens every door it might need. What it lacks is anything checking whether it should walk through this door, right now, for this reason.
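The difference between the two questions can be sketched as two checks. The policy shapes and names here are assumptions for illustration, not any IAM product's API:

```python
# Illustrative contrast between initiation authority and action authority.
# Role names, scope entries, and function names are all assumptions.

ALLOWED_INITIATORS = {"claims_adjuster"}

def may_initiate(caller_role: str) -> bool:
    # The question identity and access management answers today:
    # is this caller allowed to start the process at all?
    return caller_role in ALLOWED_INITIATORS

# The question almost no system asks: is this specific action, chosen
# by the system at runtime, within the scope the caller authorised?
AUTHORISED_SCOPE = {
    "process claim": {
        ("read", "claim_record"),
        ("write", "payout"),
        # Note: ("call", "fraud_assessor") with medical data is NOT listed.
    }
}

def may_act(intent: str, action: tuple[str, str]) -> bool:
    return action in AUTHORISED_SCOPE.get(intent, set())
```

In this sketch the initiation check passes, yet the transmission of medical data still falls outside the authorised scope. That is the gap identity cannot close.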
Why this is new
Traditional software did not have this problem. A human clicked a button. The code executed a fixed sequence. The human’s authority covered the outcome because the outcome was fully determined by the input. There was no gap between intent and action.
Autonomous systems break this. The system receives an intent and produces a sequence of actions that were not predefined. The actions depend on context, state, retrieved data, and — in systems using language models — probabilistic reasoning.
This creates a gap between what was authorised and what is executed.
The gap is not a bug. It is the nature of autonomy. It is the reason these systems are useful. And it is the reason they require governance at a point that traditional systems never did: at every individual action, at the moment of execution.
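Governance at that point can be sketched as a gate that sits between planning and execution. The names (governed_execute, UnauthorisedAction, is_authorised) are hypothetical:

```python
# A minimal sketch of per-action governance at the moment of execution.
# All names are hypothetical; the placement of the check is the point.

class UnauthorisedAction(Exception):
    pass

def governed_execute(intent, actions, is_authorised, perform):
    """Run each action only after an explicit authorisation check.

    is_authorised(intent, action) is the decision point most systems
    lack: a check between what was asked for and what is about to run,
    applied per action, at execution time, not once at initiation.
    """
    results = []
    for action in actions:
        if not is_authorised(intent, action):
            raise UnauthorisedAction(f"{action!r} exceeds scope of {intent!r}")
        results.append(perform(action))
    return results
```

The design choice worth noting: the gate wraps every action individually, so an action the planner invents at runtime is checked the same way as one the caller could have predicted.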
The invariant
A system that cannot evaluate whether each action is authorised — at the moment that action is about to produce a consequence — is not governed.
It is trusted.
And trust, in the absence of verification, is not a control. It is a hope dressed as architecture.