We Are Debating Governance Without Defining It

The AI governance conversation is stuck. Not because the problems are too hard. Because the participants are solving different problems using the same word.

When a policy researcher says “governance”, they mean ethical frameworks and regulatory compliance. When a security engineer says “governance”, they mean access controls and permissions. When an infrastructure architect says “governance”, they mean runtime monitoring and anomaly detection.

Each definition is coherent. Each solves a real problem. Each describes something fundamentally different from the others.

This is not an intellectual disagreement. It is a vocabulary failure.

Three meanings

The word governance currently covers at least three distinct concerns in autonomous AI systems.

The first is policy and ethics. Organisations establish principles about what their systems should and should not do. They produce risk assessments, ethical guidelines, compliance documentation, and regulatory submissions. This work operates at the organisational level, evolves on the timescale of quarters and regulatory cycles, and governs intent — what the organisation has decided its systems should do.

The second is security and access control. Engineers define who can access what resources, what operations each identity may invoke, and what credentials are required. This work operates at the identity boundary and governs capability: what an identity is able to do. Decades of practice inform it.

The third is behavioural monitoring and risk detection. Teams build systems that observe what has happened — activity patterns, anomaly scores, usage trends — and flag deviations from expected behaviour. This work operates after execution, watching the stream of events for signals that something has gone wrong.

All three are necessary. All three are real engineering disciplines. No two of them answer the same question.

The question nobody is asking

There is a question that sits between these three disciplines. It is the simplest question you can ask about an action an autonomous system is about to take.

Is this specific action permitted to execute right now?

Not “is this type of action generally appropriate?” That is policy. It was answered weeks ago in a document.

Not “does this identity have access to this resource?” That is security. It was answered when credentials were provisioned.

Not “does this pattern look unusual?” That is monitoring. It will be answered after the action has already occurred.

The question is narrower. This specific action, proposed at this specific moment, under this specific delegation, evaluated against this specific policy — is it permitted to execute?
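The shape of that narrower question can be made concrete. Here is a minimal sketch in Python; every name and field is hypothetical, chosen only to show that the check operates on a specific action with specific values, under a specific delegation, not on a category or an identity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    operation: str          # e.g. "payment.initiate" -- the concrete operation
    amount: float = 0.0     # the actual value proposed, not a risk category

@dataclass(frozen=True)
class Delegation:
    allowed_operations: frozenset  # what this delegation covers
    max_amount: float              # the bound the operator actually granted

def is_permitted(action: Action, delegation: Delegation) -> bool:
    """The narrow question: is THIS action, with THESE values,
    permitted right now under THIS delegation?"""
    return (action.operation in delegation.allowed_operations
            and action.amount <= delegation.max_amount)

# A hypothetical grant matching the claims scenario discussed below.
grant = Delegation(frozenset({"claim.read", "payment.initiate"}),
                   max_amount=5_000)

assert is_permitted(Action("claim.read"), grant)
assert is_permitted(Action("payment.initiate", amount=4_200), grant)
assert not is_permitted(Action("payment.initiate", amount=12_000), grant)
```

Note what the function does not take: no organisational policy document, no credential check, no activity history. Those belong to the other three disciplines.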

Why the gap is structural

Consider a concrete scenario. An autonomous system processes insurance claims. An operator instructs it to review pending claims and approve those under $5,000 that meet standard criteria.

The system begins working. It reads claim files, evaluates them, and starts approving claims. Everything looks correct.

But follow the execution closely. The operator authorised a workflow. The system decomposed that workflow into individual actions: read a record, evaluate criteria, update a status, send a notification, initiate a payment. Each action has different consequences. Reading a record and initiating a payment are categorically different operations.

The operator expressed intent. The system selected actions. Those actions were chosen at runtime, not specified in advance. Intent and actions are not the same thing.
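The gap between intent and actions can be sketched directly. The consequence labels below are illustrative, not a standard taxonomy; the point is only that one authorised intent fans out into runtime-selected actions of categorically different weight.

```python
# One operator intent, authorised once, in advance.
intent = "review pending claims; approve those under $5,000 meeting standard criteria"

# Hypothetical consequence classes for the actions the system chose at runtime.
CONSEQUENCE = {
    "claim.read":        "reversible",    # reading a record changes nothing
    "criteria.evaluate": "reversible",
    "status.update":     "recoverable",   # can be rolled back
    "notification.send": "external",      # leaves the organisation's control
    "payment.initiate":  "irreversible",  # moves money
}

# The actions the system selected; none of these existed when the
# operator expressed the intent above.
selected_actions = ["claim.read", "criteria.evaluate", "status.update",
                    "notification.send", "payment.initiate"]

irreversible = [a for a in selected_actions
                if CONSEQUENCE[a] == "irreversible"]
assert irreversible == ["payment.initiate"]
```

A single string of intent authorised everything; only inspection of the individual actions reveals that one of them is in a different category entirely.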

This is not incidental. It is the reason the system is autonomous. In Authority Is Not Transitive, we examined why: if every action were defined in advance, the system would be a script. The value of autonomy is that the system determines how to achieve a goal. It interprets. It selects. It acts.

But the moment a system selects its own actions, those actions cannot be validated at design time — because they do not exist at design time. They are chosen at the moment of execution, based on context the designer never saw.

This is why the gap between the three layers is structural, not patchable. Policy governs categories decided in advance. Security governs access boundaries defined at provisioning. Monitoring observes patterns after execution. None evaluates the specific action the system has just selected — because that action did not exist until the system chose it.

Three problems, not one

This is not three versions of the same problem. It is three distinct problems.

Security determines what is possible. Given an identity and its credentials, what resources and operations are available? This is the outer boundary. It is necessary and well solved.

Monitoring determines what happened. Given a stream of observed activity, what looks anomalous or concerning? This is the feedback loop. It is necessary and improving rapidly.

Neither evaluates individual actions before they produce consequences. Security establishes the perimeter. Monitoring watches the stream. Between the two — at the moment an action is about to produce an external effect — there is a gap.

That gap is where governance of autonomous AI action belongs. It evaluates a specific proposed action against policy and delegation, produces an explicit decision, and creates evidence that the evaluation occurred — before the action executes.
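What such a gate looks like can be sketched in a few lines. This is an illustration under assumed names, not a reference implementation: the essential properties are that evaluation happens before execution, the decision is explicit, and the evidence record exists whether the action ran or not.

```python
import time

def gate(action, evaluate, execute, evidence_log):
    """Pre-execution gate: evaluate the specific action, record the
    decision as evidence, and only then execute. Names illustrative."""
    permitted = evaluate(action)
    evidence_log.append({
        "action": action,
        "decision": "permit" if permitted else "deny",
        "evaluated_at": time.time(),  # proof the check preceded execution
    })
    if not permitted:
        return None                   # the action never executes
    return execute(action)

log = []
result = gate("payment.initiate",
              evaluate=lambda a: a != "payment.initiate",  # toy policy
              execute=lambda a: f"executed {a}",
              evidence_log=log)

assert result is None                # prevented, not merely observed
assert log[0]["decision"] == "deny"  # and provably evaluated first
```

Contrast this with monitoring, which would append the same kind of record but only after `execute` had already run.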

This is not a criticism of security or monitoring. They solve their own problems well. It is the recognition that between them sits a third problem, and that problem has no name the industry agrees on.

Why the debate goes nowhere

Most governance debates are not disagreements. They are three conversations running in the same room, using the same vocabulary.

When someone argues governance needs stronger access controls, they are talking about security. When someone argues governance needs better anomaly detection, they are talking about monitoring. When someone argues governance needs pre-execution evaluation of individual actions, they are talking about action authority.

All three are right about their own problem. None is right about all three.

The confusion is structural, not intellectual. The industry has one word where it needs three. Until the vocabulary separates, every governance conversation will include participants who agree the problem is important but cannot converge on what the problem is — because they are describing different layers of the same system as if they were the same layer.

The test

The distinction is simple enough to test.

If a system can determine who has access to a resource, it has security. If a system can detect that something unusual occurred, it has monitoring. If a system can prevent a specific action from executing based on policy evaluation — and prove that it did — it has governance.

A system that cannot prevent an action is not governing it. It is observing it.