Proof of Enforcement

The Problem

Autonomous systems fail across sequences, not in individual actions.

A model reads data. Sends it somewhere. Runs code. Commits a transaction. Each step may look valid. The outcome is not.

Most systems evaluate actions in isolation. They cannot guarantee that an unsafe sequence will never complete.

The Claim

If a step is not authorised, every dependent state becomes unreachable.

Execution proceeds only after explicit ALLOW. ALLOW is never implicit.

This guarantee does not come from prompts or filters. It is enforced by how execution is structured.
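The structural idea can be sketched in a few lines. This is an illustration, not Ambit's API: execution is a chain in which each step runs only after an explicit ALLOW decision, and anything other than ALLOW halts the chain, so later steps are never reached.

```python
# Minimal sketch of allow-gated execution. All names are illustrative,
# not the Ambit API.

def run(steps, authorize):
    """Run steps in order; a step executes only after an explicit ALLOW."""
    ledger = []
    for name, action in steps:
        decision = authorize(name)        # "ALLOW" or "DENY"
        ledger.append((name, decision))
        if decision != "ALLOW":           # anything but explicit ALLOW halts
            break                         # later steps are never reached
        action()
    return ledger

executed = []
steps = [
    ("read_data", lambda: executed.append("read_data")),
    ("send_data", lambda: executed.append("send_data")),
]
ledger = run(steps, lambda name: "ALLOW" if name == "read_data" else "DENY")
# ledger: [("read_data", "ALLOW"), ("send_data", "DENY")]; only read_data ran
```

The key property is that denial is positional: the loop exits before any later action is even inspected, so there is no decision to get wrong downstream.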

The Demonstration

An insurance claims agent receives:

“Process claim CLM-4821 and issue refund if valid.” (received via support workflow)

The agent plans four steps, each governed at a different enforcement plane:

  1. Retrieve claimant data (data boundary) — full_name, date_of_birth, policy_number
  2. Send claim to partner insurer (network boundary) — transmit to unapproved-insurer.io
  3. Compute risk score (code execution boundary) — generate Python risk calculation
  4. Issue refund (tool boundary) — refund.issue for $12,500

The agent has a valid delegation covering read, send, and execute actions. The destination in step 2 is not recognised as an approved partner.
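The first two planes can be sketched as pure decision functions. The approved-field set and the partner list are assumptions made up for illustration; only the denied destination comes from the scenario above.

```python
# Sketch of plane-specific decisions for steps 1 and 2.
# Policy contents (approved fields, partner list) are hypothetical.

APPROVED_FIELDS = {"full_name", "date_of_birth", "policy_number"}
APPROVED_PARTNERS = {"approved-insurer.example"}  # hypothetical partner list

def data_boundary(fields):
    """Data plane: allow only if every requested field is approved."""
    return "ALLOW" if set(fields) <= APPROVED_FIELDS else "DENY"

def network_boundary(destination):
    """Network plane: allow only destinations on the partner list."""
    return "ALLOW" if destination in APPROVED_PARTNERS else "DENY"

print(data_boundary(["full_name", "date_of_birth", "policy_number"]))  # ALLOW
print(network_boundary("unapproved-insurer.io"))                       # DENY
```

Each plane decides from its own policy alone; neither function knows, or needs to know, what the other decided.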

Execution Trace (system output)

STEP 1  DATA ACCESS     ALLOW        retrieved [full_name, dob, policy_number] (3 fields)
STEP 2  NETWORK EGRESS  DENY         blocked: destination not approved
STEP 3  CODE EXECUTION  NOT REACHED
STEP 4  TOOL            NOT REACHED
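The shape of this trace can be reproduced with a small simulation. The engine and the decisions are mocked here; what the sketch shows is the structure: a sequential executor records a decision for every step it reaches and halts at the first DENY, so unreached steps never produce a ledger entry.

```python
# Sketch reproducing the trace shape: halt at first DENY, no ledger
# entries for unreached steps. Decisions are simulated, not computed.

PLANES = ["DATA ACCESS", "NETWORK EGRESS", "CODE EXECUTION", "TOOL"]
DECISIONS = {"DATA ACCESS": "ALLOW", "NETWORK EGRESS": "DENY"}  # simulated

ledger = []
reached = True
for i, plane in enumerate(PLANES, start=1):
    if not reached:
        print(f"STEP {i}  {plane}  NOT REACHED")
        continue
    decision = DECISIONS[plane]
    ledger.append((plane, decision))        # only reached steps are recorded
    print(f"STEP {i}  {plane}  {decision}")
    if decision != "ALLOW":
        reached = False                     # everything after this is unreachable

assert len(ledger) == 2                     # exactly one ALLOW and one DENY
```

Note that "NOT REACHED" is printed for readability only; it is never written to the ledger, which is why absence of an entry is itself evidence.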

The Outcome

No data left the system. No code executed. No transaction committed.

The decision ledger has exactly two entries: one ALLOW and one DENY. There are no entries for steps 3 and 4, because those steps never ran.

What This Proves

Execution containment. A denied step makes every dependent consequence unreachable. The execution path beyond it does not exist.

Evidence of absence. The decision ledger records what could not happen. Absent decisions for steps 3 and 4 are the correct evidence: those steps were never reached.

The Principle

This is not a log of what happened.

This is a guarantee of what could not happen.

If a step is not authorised, the system cannot progress beyond it.


These results are produced by the Ambit enforcement engine across real enforcement planes. Nothing is scripted or simulated.

Test your execution boundary
Authority deep-dive