From Confused Deputies to Execution Governance
Authority, Capability Systems, and the Control Plane of Autonomous Systems
Abstract. The governance of autonomous action is a structural problem, not merely a policy problem. This paper traces the intellectual lineage from the confused deputy problem — which revealed the dangers of ambient authority in software systems — through capability-based security, which introduced explicit delegation of authority, to the control-plane architectures that enabled governance at scale in distributed systems. It examines why existing infrastructure governance mechanisms — service meshes, policy-as-code engines, role-based access control, and guardrails — fail to address the specific challenges posed by autonomous systems. The paper argues that governing autonomous action requires a distinct architectural layer: an execution-governance control plane that evaluates every proposed action at the execution boundary using deterministic logic, producing tamper-evident evidence for every decision. It further argues that this layer plays the role for autonomous systems that control planes play in distributed infrastructure, separating probabilistic reasoning from deterministic authority.
Research programme. The Governance of Autonomous Action establishes the conceptual foundations of execution governance. This paper develops the architecture. The Ambit Infrastructure Thesis examines why this governance layer becomes infrastructure.
Introduction
Modern autonomous systems expose a structural weakness in contemporary computing. They increasingly interact with external tools, APIs, and infrastructure: reading data, sending messages, updating databases, and triggering real-world processes. Yet the mechanisms used to govern these actions are largely inherited from earlier generations of computing — systems designed for human users and relatively predictable software behaviour.
This mismatch between autonomous capabilities and legacy governance models has produced a series of failures that are now widely visible: prompt injection attacks, tool misuse, unintended data exfiltration, and uncontrolled agent interactions. To understand why these failures occur, it is necessary to revisit a sequence of foundational ideas in computer security and systems architecture: the confused deputy problem, capability-based security, and the separation of control and execution in distributed systems. Together, these ideas explain why current governance mechanisms fail and what architecture must replace them.
This paper argues for a specific architectural response. Execution governance is the deterministic evaluation of proposed actions at the execution boundary, before they produce consequences, together with tamper-evident evidence for every decision. The execution boundary is the point at which a proposed action transitions from internal system intention to an operation capable of producing external consequences — a database write, an API call, a message sent, a process triggered.
The intellectual lineage begins with the confused deputy problem.
The Confused Deputy Problem
The confused deputy problem describes a situation in which a program misuses its authority on behalf of another entity. The term was introduced by Hardy (1988) to describe a class of security failures that arise when software components possess authority that they do not carefully distinguish from the authority of their callers.
A classic example illustrates the issue. Consider a compiler program that can write to system billing files. The compiler has the necessary permissions because it must update billing records whenever users submit compilation jobs. A user invokes the compiler and passes a file path argument. If the compiler naively writes to the supplied path without verifying its legitimacy, the user can trick the compiler into writing to a sensitive system file. The compiler has not been malicious, but it has acted as a deputy performing work on behalf of another party. Because the compiler used its own authority rather than the authority of the caller, it became confused.
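Hardy's scenario can be made concrete with a short simulation. The sketch below uses a plain dict as a stand-in filesystem; the paths, the compile_job helper, and the billing format are illustrative assumptions, not drawn from the original paper.

```python
# A minimal simulation of the confused deputy: the "compiler" holds write
# access to the billing file, and naively writes its output to whatever
# path the caller supplies -- using its own authority, not the caller's.

filesystem = {"/billing": "alice: 3 jobs\n"}

def compile_job(user, source, output_path):
    """Deputy: compiles a job and appends a billing record.

    The bug: output_path comes from the caller, but the write below is
    performed with the deputy's own (ambient) authority.
    """
    filesystem[output_path] = f"compiled({source})"   # deputy's authority misused
    filesystem["/billing"] = filesystem.get("/billing", "") + f"{user}: 1 job\n"

# The caller tricks the deputy into clobbering the billing file:
compile_job("mallory", "prog.c", "/billing")
# The original billing history has been overwritten through the deputy.
```

Nothing in the deputy's code is malicious; the failure is structural, exactly as the paper describes.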
This problem reveals a deeper structural issue. In traditional security architectures, programs operate with ambient authority — permissions that exist implicitly in the environment rather than being explicitly tied to specific operations. When software executes under ambient authority, it becomes difficult to determine whether an action is truly authorised by the party requesting it or merely enabled by the privileges of the software component executing it.
The confused deputy problem therefore exposes a structural weakness in systems where authority is implicit rather than explicitly delegated. Systems that rely on identity and permission lists often fail to capture the true flow of authority through a program’s execution path. This weakness, first identified in the 1980s, becomes critically dangerous when the deputies themselves are autonomous. In traditional systems, the deputy was deterministic software executing a fixed code path. In autonomous systems, the deputy generates novel actions through probabilistic reasoning — creating confused-deputy failures that no code review could have anticipated.
Capability-Based Security
Capability-based security emerged as a response to these weaknesses. Researchers such as Dennis and Van Horn (1966), along with later contributors including Miller (2006) and Hardy (1988), developed architectures in which authority was represented explicitly through capabilities. A capability is essentially a token that simultaneously identifies an object and grants permission to use it. Possession of the capability is sufficient to exercise the associated authority.
In capability-based systems, authority is not inferred from identity. Instead, it is carried explicitly in the form of capabilities passed between components. A process cannot access an object unless it holds the capability required to do so. This design eliminates ambient authority and ensures that programs only act with the permissions they have been explicitly granted.
Capability systems possess several important properties. They naturally enforce the principle of least privilege (Saltzer & Schroeder, 1975), because programs only receive the capabilities they require. They also significantly reduce the risk of confused deputy failures by eliminating ambient authority, ensuring that programs act only with explicitly granted permissions. Most importantly, capability systems allow developers to reason locally about authority. By examining the capabilities held by a component, it is possible to determine exactly what actions it can perform.
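These properties can be sketched in a few lines: a capability is an unforgeable reference that both designates an object and carries the right to use it, and attenuation derives a narrower capability from a broader one. The FileCapability class and its mode strings are illustrative assumptions, not a real operating-system API.

```python
class FileCapability:
    """A capability: a token that designates a file AND grants the right
    to use it. Possession is sufficient; no ambient permission check."""

    def __init__(self, store, path, mode):
        self._store, self._path, self._mode = store, path, mode

    def read(self):
        if "r" not in self._mode:
            raise PermissionError("capability does not grant read")
        return self._store[self._path]

    def write(self, data):
        if "w" not in self._mode:
            raise PermissionError("capability does not grant write")
        self._store[self._path] = data

    def attenuate(self, mode):
        """Derive a narrower capability (least privilege); widening is refused."""
        if not set(mode) <= set(self._mode):
            raise PermissionError("cannot widen authority")
        return FileCapability(self._store, self._path, mode)

store = {"/data": "secret"}
rw = FileCapability(store, "/data", "rw")
ro = rw.attenuate("r")        # pass only this to a less-trusted component
assert ro.read() == "secret"  # ro.write(...) would raise PermissionError
```

Local reasoning falls out directly: the authority of a component is exactly the set of capability objects it holds.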
For these reasons, capability-based security was widely regarded as technically superior to traditional access control models. Several experimental operating systems adopted capability architectures, demonstrating strong security guarantees and elegant design principles. The seL4 microkernel (Klein et al., 2009) later demonstrated that capability-based designs could be formally verified — proving that the implementation correctly enforces its security policy.
Yet despite their advantages, capability systems did not become the dominant organising model for mainstream enterprise infrastructure. The reasons for that failure are instructive, because any architecture that governs autonomous action must overcome the same obstacles.
Why Capability Systems Were Marginalised
The marginalisation of capability systems was not caused by theoretical flaws. Instead, it resulted from practical challenges that became increasingly difficult to manage as computing systems grew more complex and distributed.
The most significant challenge was revocation. Once a capability is granted, it may be copied and passed through multiple layers of a system. Revoking that authority requires tracking every instance of the capability wherever it may have propagated. In large systems with complex interactions, this becomes extremely difficult.
Another challenge arose from the rise of distributed computing. Capability systems work best when the entire system is designed around capability semantics. However, modern software environments involve multiple processes, machines, services, and organisations. Passing capabilities safely across these boundaries introduces additional complexity and coordination requirements.
At the same time, enterprise computing environments increasingly favoured identity-based governance models. Access control lists, role-based access control, and identity management systems aligned well with organisational structures and compliance frameworks. These systems allowed administrators to manage permissions through centralised identity records, even though they lacked some of the security advantages of capability-based designs.
As a result, the industry standardised on identity-centric models such as ACLs and RBAC. Capability systems remained active in research, language design, microkernels, and specialised security architectures, but were marginalised from mainstream enterprise infrastructure.
The consequences of that decision now confront us.
The Consequences for Modern Systems
The decision to favour identity-based access control introduced a subtle but important limitation. Access control systems answer the question of who may access a resource. They do not answer the question of whether a specific action should occur at a particular moment.
In many traditional applications, this distinction was manageable. Software components behaved deterministically, and human users initiated most actions. But as systems became more automated and interconnected, authority began to propagate through chains of services, scripts, and workflows. The effective authority of the system became difficult to predict.
Autonomous systems amplify this problem dramatically. Modern agents interact with tools, retrieve information, modify state, and coordinate with other agents. Each of these interactions introduces new paths through which authority can flow. The resulting authority graphs are dynamic, compositional, and frequently unintended — properties that identity-based access control was never designed to govern.
This is not simply a policy failure. It is a structural property of how authority flows through systems composed of autonomous actors. The historical response to this kind of structural problem — from confused deputies to capability systems to distributed control planes — offers both a diagnosis and a direction.
Why Existing Infrastructure Governance Fails
A distributed systems engineer encountering these governance challenges may reasonably ask: why can’t existing infrastructure solve this? Modern infrastructure already includes policy enforcement at multiple layers, and each of these mechanisms solves real problems within its own domain. However, none of them addresses the specific problem of execution governance for autonomous systems.
Service meshes govern transport. A service mesh can enforce mutual TLS, rate limiting, and routing policies between services — and does so effectively. But it operates at the network layer: it can determine whether service A is permitted to call service B, not whether the action that service A is performing on behalf of an autonomous agent should occur. Transport policy cannot evaluate intent.
Policy-as-code engines such as OPA and Cedar evaluate attributes against declarative rules. They answer questions like “does this principal have permission to access this resource?” and can do so with considerable sophistication. A policy engine is a decision component — and a valuable one. However, a decision component is not a governance architecture. Execution governance requires an architecture around the decision: canonicalisation of the proposed action, validation of delegation chains, resolution of temporal authority, deterministic evaluation, and production of tamper-evident evidence. Policy engines do not by themselves establish this architecture.
Role-based access control answers the question of who may access a resource. It does not answer the question of whether this action, now, by this delegate, under these constraints, should be permitted. RBAC maps identities to static permission sets. It cannot express time-bounded delegation, scope narrowing across delegation chains, or the requirement that every decision produce verifiable evidence.
Guardrails — input and output classifiers embedded in the execution path of AI systems — are probabilistic filters. They estimate whether a prompt or response falls within acceptable bounds. But probabilistic classification is not governance. A guardrail cannot produce a deterministic, reproducible decision. It cannot generate a tamper-evident record that a specific action was evaluated against a specific policy at a specific time. And because guardrails are embedded within the execution path rather than separated from it, they do not constitute an independent governance plane.
Each of these mechanisms solves a legitimate problem: transport security, attribute-based access, identity management, content safety. The issue is not that they are useless but that they operate at the wrong boundary. None of them produce tamper-evident evidence that a governance decision occurred. A service mesh logs traffic. A policy engine returns allow or deny. A guardrail filters content. But none of them generate a cryptographically chained record linking a proposed action, the policy under which it was evaluated, the delegation context, and the resulting decision — a record that can be independently verified and replayed.
Execution governance for autonomous systems requires all of these properties simultaneously: evaluation of intent, not just transport; dynamic delegation and temporal constraints, not just static attributes; deterministic decisions, not probabilistic classification; and tamper-evident evidence for every decision.
This is a distinct architectural layer, not an extension of existing infrastructure. It is not merely a policy decision point attached to agent tooling; it is the execution-governance control plane that canonicalises action, validates delegation, resolves current authority, and emits replayable evidence before consequence is admitted.
Authority Propagation
Capability researchers recognised that authority propagates along reference paths. If one component holds a capability to another object, and that object holds a capability to a third resource, the first component can potentially reach that resource through the chain of references.
The effective authority of a system therefore depends not only on explicit permissions but also on the topology of relationships between components. This phenomenon was anticipated in early protection models. Lampson’s (1974) protection-graph framework showed that authority emerges from the structure of relationships between subjects and objects, and that as systems grow more complex, unintended authority paths arise from the composition of otherwise legitimate relationships. In complex systems, the total authority available to a component may exceed the authority originally granted to it. This emergent authority arises from the structure of the system itself. In autonomous multi-agent systems, these authority paths are no longer static — they are created dynamically through shared context, tool outputs, event triggers, and runtime composition.
Autonomous systems create particularly rich authority graphs. Agents communicate with each other, invoke tools, read and write shared memory, and respond to events generated by other agents. These interactions create dynamic networks through which authority can propagate in ways that no single permission grant intended.
A concrete example illustrates the operational danger. Consider two agents operating within the same orchestration framework. The first agent holds a database credential scoped to customer records. The second agent has access to an external email API. Individually, each agent’s authority is well-bounded — one can read data but not transmit it externally, the other can send messages but has no access to sensitive data. But if both agents share a workspace — a common pattern in multi-agent architectures — the first agent can retrieve customer records and write them to the shared context. The second agent can then read that context and transmit the data externally. No single delegation authorised this exfiltration path. It emerged from the topology of relationships between agents, tools, and shared state. The authority graph contained a path that no individual permission grant intended to create.
This is not a hypothetical concern. Multi-agent systems routinely share context through message queues, shared memory, tool outputs, and orchestration state. Each shared surface creates potential authority paths. Traditional access control sees two agents with appropriately scoped permissions. An authority-propagation analysis sees an unintended channel from database to external network.
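One way to make the unintended channel visible is to model agents, tools, and shared state as a directed graph of possible data flows and ask whether any path connects a sensitive source to an external sink. The node names below are illustrative; this is an analysis sketch, not a prescribed mechanism.

```python
# Each edge records that data (and hence authority) can flow between nodes:
# agent_a reads the customer DB, both agents share a workspace, and
# agent_b can reach the email API.
edges = {
    "customer_db": {"agent_a"},     # data can flow DB -> agent_a
    "agent_a":     {"workspace"},   # agent_a writes the shared workspace
    "workspace":   {"agent_b"},     # agent_b reads the workspace
    "agent_b":     {"email_api"},   # agent_b can transmit externally
}

def reachable(graph, src, dst, seen=None):
    """Depth-first search: does any authority path connect src to dst?"""
    seen = seen or set()
    if src == dst:
        return True
    seen.add(src)
    return any(reachable(graph, n, dst, seen)
               for n in graph.get(src, ()) if n not in seen)

# No single grant created this path, yet it exists in the composed system:
assert reachable(edges, "customer_db", "email_api")
```

Traditional access control inspects each edge in isolation and finds nothing wrong; the exfiltration channel only appears when the graph is examined as a whole.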
Understanding and controlling this propagation is the central problem that execution governance must solve.
The Distributed Systems Parallel
A remarkably similar problem emerged in distributed systems engineering. As computing infrastructure scaled to thousands of services and machines, engineers discovered that it was no longer feasible to rely on individual components to enforce policies correctly. Systems became too complex, failures too frequent, and interactions too dynamic for local governance to maintain global invariants.
The solution was the introduction of a control plane.
Distributed systems architecture began to distinguish between the control plane and the data plane. The data plane performs operational work — processing requests, forwarding packets, or executing tasks. The control plane determines how the system should behave by distributing policies, scheduling workloads, and coordinating state.
This separation proved essential for managing large-scale systems. Components in the data plane could remain simple and focused on execution, while the control plane maintained global consistency and governance. The insight was architectural: governance at scale requires a structurally independent layer, not better-behaved components.
Examples of this architecture appear throughout modern infrastructure. In networking, routing protocols form the control plane while packet forwarding constitutes the data plane. In container orchestration systems such as Kubernetes, schedulers and controllers define desired system state, while container runtimes execute workloads. Service meshes introduce policy and routing control planes that govern communication between services.
The key insight behind these architectures is that governance must be separated from execution. When the number of components, the complexity of their interactions, and the consequences of their actions exceed what local enforcement can manage, a structurally independent governance layer becomes necessary — not as an optimisation, but as an architectural requirement.
Autonomous systems have now reached this threshold. The solution follows the same pattern: treat every autonomous action the way distributed systems treat infrastructure changes — through a control plane that evaluates each action before it executes. The architecture is easiest to understand as a control loop: canonical action enters the governance plane at the execution boundary; the governance plane renders allow, deny, or escalate; and every outcome produces evidence.
Autonomous Systems Require a Control Plane
Autonomous systems are beginning to exhibit the same structural challenges that distributed systems faced a generation earlier. Agents perform actions across multiple domains: they call APIs, access data stores, trigger workflows, and interact with external environments. In production deployments, thousands of agents may execute millions of actions per day across shared infrastructure, often under heterogeneous policies, credentials, and delegation contexts. Each of these actions carries potential consequences that extend beyond the agent’s local context.
If governance mechanisms remain embedded within individual components, the system cannot reliably enforce global constraints. Agents may act based on incomplete context, inconsistent policies, or outdated delegation state. Authority may propagate through paths that no single component can observe.
The architecture required to address this problem mirrors the control-plane separation found in distributed systems. Instead of allowing agents to execute actions directly, systems introduce a governance layer that evaluates proposed actions before execution occurs.
In this model, agents generate intentions or proposed actions. These proposals are passed to a governance mechanism that determines whether the action is authorised under current policies, delegations, and constraints. Only actions that pass this evaluation are permitted to execute. Actions that fail are denied, and the denial itself becomes part of the evidence record.
The governance layer thus functions as an execution-governance control plane — structurally independent from the systems it governs, evaluating every action at the execution boundary where intention becomes consequence.
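The control loop described above can be sketched as follows. The evaluate function, its field names, and the escalate rule are illustrative assumptions; a real governance plane would also canonicalise the action and emit a tamper-evident receipt for every outcome.

```python
def evaluate(action, delegations, policy):
    """Deterministic evaluation of one proposed action against current
    delegation state and policy. Returns allow, deny, or escalate."""
    grant = delegations.get(action["actor"])
    if grant is None or grant["revoked"]:
        return "deny"                       # no live delegation for this actor
    if action["intent"] not in grant["scope"]:
        return "deny"                       # outside the delegated scope
    if action["intent"] in policy["escalate"]:
        return "escalate"                   # permitted only with review
    return "allow"

def govern_and_execute(action, delegations, policy, execute):
    """Only allowed actions reach the execute callback; every outcome,
    including deny, is returned so it can enter the evidence record."""
    decision = evaluate(action, delegations, policy)
    if decision == "allow":
        execute(action)
    return decision

delegations = {"agent_a": {"revoked": False, "scope": {"db.read", "db.write"}}}
policy = {"escalate": {"db.write"}}
executed = []
decision = govern_and_execute({"actor": "agent_a", "intent": "db.read"},
                              delegations, policy, executed.append)
assert decision == "allow" and executed    # the read was admitted
```

The structural point is that the agent never calls the target system directly: the only route to execution passes through evaluate.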
From Capabilities to Execution Governance
Capability-based security introduced the idea that authority should be carried explicitly rather than inferred from identity. This was the right insight, and it remains foundational. However, capability systems primarily focused on making authority portable: a capability token could be passed between components, and possession was sufficient to act.
Autonomous systems require an additional architectural step. It is not enough to know who holds authority; the system must determine whether a specific action is authorised at the moment it would produce consequences. This is the deepest shift from capabilities to execution governance: capability systems made authority portable, while execution governance makes authority re-validated at the moment consequence would occur.
Instead of granting persistent permissions that remain valid until revoked, the system evaluates authority dynamically at the boundary where actions occur. Each proposed action is examined against current policies, current delegation state, and current constraints — not the state that existed when authority was originally granted, but the state that exists now, at the moment of execution.
This evaluation produces both a decision and verifiable evidence that the decision occurred. Authority is not simply granted once and trusted indefinitely. It is verified every time it is exercised. This shifts revocation from token invalidation to fresh boundary evaluation: updated policies and revoked delegations affect future decisions without requiring the system to trace and retract every propagated token.
The resulting architecture satisfies the three properties of a reference monitor (Anderson, 1972): complete mediation — every consequential action must pass through the governance layer; tamper resistance — the governance plane is structurally independent and its evidence is cryptographically bound; and verifiability — the evaluation is a deterministic, replayable pure function. Classical reference monitors mediate access to resources. Execution governance extends this model to action commitment: mediating not whether a subject may reach an object, but whether a proposed action may produce consequences, under what delegation, at what moment. It further strengthens the model by requiring that every mediation produce cryptographic evidence — a property classical reference monitors did not demand.
The Emerging Governance Architecture
Combining these ideas produces a coherent architecture for governing autonomous systems. The architecture is precise and its control loop is explicit.
An autonomous system proposes an action. The action is canonicalised into a standard representation: actor, intent, target, delegation context, and policy identity. Because actions are canonicalised before evaluation, the governance plane is defined by policy and delegation semantics rather than by any specific model, framework, or runtime. The execution-governance control plane evaluates this canonical action against current policies and delegation state using deterministic logic. The evaluation produces exactly one of three outcomes: allow, deny, or escalate. If allowed, the action executes against the target system. If denied, the action is blocked. Whether allowed or denied, the evaluation produces a tamper-evident governance receipt.
Every receipt binds together four components: the proposed action in canonical form; the policy identity and version under which the action was evaluated; the delegation context, including scope, temporal bounds, and revocation status; and the resulting decision together with the reasons that determined it. This binding is cryptographic — altering any component invalidates the receipt. Each receipt is hash-chained to its predecessor (Schneier & Kelsey, 1999), creating an append-only evidence stream that can be independently verified and replayed without access to the originating system. Concretely: a genesis hash h_0 anchors the chain, defined as the 64-character hexadecimal string of zeros (the null SHA-256 digest representing "no prior record"). Let r_i' be the record obtained by inserting the fields prev_hash = h_{i-1} and index = i into receipt r_i at the top level before canonical serialisation. Each subsequent receipt produces:

h_i = SHA-256(canonical(r_i'))
The chain begins at a fixed genesis value and each subsequent receipt commits to its predecessor. Modification, deletion, or insertion of any receipt breaks the chain — producing a detectable integrity failure.
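A minimal sketch of such a receipt chain, assuming SHA-256 over a canonical JSON serialisation with sorted keys; the field names and helper functions are illustrative, not a specification.

```python
import hashlib
import json

GENESIS = "0" * 64   # null predecessor: 64 hexadecimal zeros

def canonical(record):
    """Deterministic serialisation: sorted keys, fixed separators."""
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode()

def append_receipt(chain, receipt):
    """Bind a receipt to its predecessor and append it to the chain."""
    prev = chain[-1]["hash"] if chain else GENESIS
    bound = dict(receipt, prev_hash=prev, index=len(chain))
    digest = hashlib.sha256(canonical(bound)).hexdigest()
    chain.append({"record": bound, "hash": digest})
    return digest

def verify(chain):
    """Replay the chain: modification, deletion, or insertion breaks it."""
    prev = GENESIS
    for i, entry in enumerate(chain):
        r = entry["record"]
        if r["prev_hash"] != prev or r["index"] != i:
            return False
        if hashlib.sha256(canonical(r)).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append_receipt(chain, {"action": "db.write", "policy": "p1", "decision": "allow"})
append_receipt(chain, {"action": "email.send", "policy": "p1", "decision": "deny"})
assert verify(chain)
chain[0]["record"]["decision"] = "deny"   # tamper with history...
assert not verify(chain)                  # ...and the chain detects it
```

Note that verification requires nothing from the originating system beyond the chain itself, which is the independence property the paper requires.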
This architecture resolves several problems that plagued earlier approaches. Because authority is evaluated at execution time, the revocation problem that plagued capability systems dissolves — stale or revoked delegations are caught at the next evaluation, without requiring the system to trace every propagated token. Because governance occurs at the execution boundary, authority propagation across complex agent networks can be examined and constrained. Because decisions produce verifiable evidence, organisations can audit and replay governance decisions independently.
The architecture is also designed to fail closed under degraded conditions. When revocation state is stale and cannot be verified as current, the system denies. When policy distribution lags and a node cannot confirm that it holds the current policy version, it denies until currency is established. When time sources disagree and the temporal validity of a delegation cannot be verified, the system denies. This is a deliberate inversion of the availability-first defaults that characterise most distributed systems. In execution governance, integrity takes precedence over availability. A system that continues to execute under degraded governance conditions is not governed at all. This creates operational tension with availability. The architecture resolves that tension through governance-plane reliability and redundancy, not by relaxing the invariant.
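The fail-closed gates can be sketched as a currency check that runs before any policy evaluation. The thresholds and field names below are assumed values for illustration only.

```python
STALENESS_LIMIT = 30.0   # max seconds revocation state may lag (assumed value)
MAX_SKEW = 2.0           # max tolerable clock skew in seconds (assumed value)

def inputs_current(state, now):
    """Return True only when every governance input is verifiably current.
    Any failure means the caller must deny, whatever policy would say."""
    if now - state["revocation_synced_at"] > STALENESS_LIMIT:
        return False                        # revocation state is stale
    if state["policy_version"] != state["latest_policy_version"]:
        return False                        # policy distribution is lagging
    if abs(state["clock_skew"]) > MAX_SKEW:
        return False                        # temporal validity unverifiable
    return True

healthy = {"revocation_synced_at": 95.0, "policy_version": 7,
           "latest_policy_version": 7, "clock_skew": 0.1}
assert inputs_current(healthy, now=100.0)
assert not inputs_current(dict(healthy, revocation_synced_at=10.0), now=100.0)
```

The inversion the paper describes lives in the return values: degraded inputs never fall through to normal evaluation; they short-circuit to deny.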
Most importantly, this architecture separates reasoning from authority. Autonomous systems can generate plans and suggestions, but the governance layer retains final control over what actions are permitted.
This architecture can be stated as a single invariant:
Execution Governance Invariant. No action that produces external consequences may execute unless it has been evaluated at the execution boundary against current policy and validated delegation state, and that evaluation has produced tamper-evident evidence.
The evaluation itself is a pure function. Let a denote the canonical action input, d the validated delegation state, and p the policy identity:

decision = G(a, d, p)

where decision ∈ {allow, deny, escalate}. Given identical inputs, the function produces an identical decision — anywhere, at any time, under any conditions. This purity is what makes governance decisions independently replayable.
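The pure-function property can be sketched directly: a decision function with no I/O, clock reads, or randomness, whose preserved inputs let an auditor replay the decision later and obtain the identical result. The function G and its field names are illustrative assumptions.

```python
import json

def G(action, delegation, policy):
    """Pure evaluation: a function of its inputs alone -- no I/O,
    no clock reads, no randomness."""
    if delegation.get("revoked") or action["intent"] not in delegation["scope"]:
        return "deny"
    if action["intent"] in policy.get("escalate", ()):
        return "escalate"
    return "allow"

# An auditor who preserves the serialised inputs can replay the decision
# elsewhere, later, and must obtain the identical result:
a = {"intent": "db.read"}
d = {"revoked": False, "scope": ["db.read"]}
p = {"escalate": []}
snapshot = json.dumps([a, d, p], sort_keys=True)
replayed = G(*json.loads(snapshot))
assert replayed == G(a, d, p) == "allow"
```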
The architecture addresses five classes of governance failure: reasoning error, in which an autonomous system proposes an unsafe action; capability escalation, in which delegated authority is widened, stale, or unverifiable; policy drift, in which governance rules change without replayable traceability; execution ambiguity, in which no pre-execution record establishes the authority that enabled an action; and evidence tampering or gaps, in which governance records are absent, incomplete, or altered after the fact.
The Commit Boundary
The execution boundary described above corresponds to what distributed systems engineers recognise as a commit boundary — the point where a proposed state change becomes durable and visible to other systems. Before commit, an operation is provisional and reversible. After commit, it is part of the system’s authoritative history. In autonomous systems, every consequential action crosses such a boundary: a database transaction commits, an API request is accepted, a message is transmitted, a workflow is triggered. The execution-governance control plane sits precisely before that moment.
This correspondence clarifies why deterministic evaluation is not merely desirable but necessary. In transactional systems, the commit decision is deterministic — a database’s constraint checker does not probabilistically decide whether to accept a transaction. The same requirement applies to governance of autonomous action: the reasoning that proposes an action may be probabilistic, but the commit decision must be deterministic. A system that probabilistically decides whether to commit an action cannot prove that the action was authorised. Guardrails, which are probabilistic classifiers, are therefore architecturally misplaced at the commit boundary — they approximate where the architecture requires certainty.
The governance receipt chain is structurally equivalent to a write-ahead log. In transactional systems, the write-ahead log records the decision to commit before the state change takes effect, ensuring that the decision is durable even if the system fails during execution. The governance receipt serves the same function: it records the authorisation decision before the action executes, ensuring that evidence exists independently of whether execution succeeds. Both structures are append-only, ordered, and sufficient to reconstruct the decision history from the log alone. The governance chain extends this with cryptographic binding and hash-chaining — properties drawn from secure audit logs (Schneier & Kelsey, 1999) that make the log tamper-evident rather than merely durable.
In transactional systems, the commit validator is never embedded in the application logic that proposes the state change. The database’s constraint checker is independent of the application. The payment network’s authorisation system is independent of the merchant. The same structural requirement applies: the commit gate for autonomous action must be independent of the reasoning system that proposes the action. This independence is not a security preference — it is a structural requirement of any commit boundary. A system that validates its own commits is a system without external accountability.
A Structural Analogy: Event Streams
The execution governance architecture aligns with event sourcing — a pattern in which system state is represented not as a snapshot but as a sequence of events, each recording a change. Every governance decision becomes an event in a controlled stream, recording not only what action was taken but the decision that authorised it. The governance layer functions as a gatekeeper: only actions that pass evaluation are admitted into the system’s event history. Governance receipts do not replace domain events; they accompany them by recording the authorisation decisions that admitted each consequential action into execution.
This structure satisfies a critical requirement: independent verification. Because the governance logic is deterministic and the evaluation inputs are preserved, an auditor can replay the decision stream and verify that every consequential action was properly authorised. The system does not merely assert that governance occurred; it produces a verifiable historical record that proves it.
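Replay verification follows directly from determinism: given the preserved inputs and the stated policy, an auditor can recompute every decision and compare it to the record. A minimal sketch, in which the `evaluate` policy function and the event shape are hypothetical stand-ins rather than any specified format:

```python
def evaluate(policy: dict, action: dict) -> str:
    """Deterministic evaluation: the same policy and the same action
    always yield the same decision (a toy allow-list policy)."""
    allowed = policy.get(action["tool"], set())
    return "permit" if action["op"] in allowed else "deny"

def replay_audit(policy: dict, decision_stream: list) -> list:
    """Re-run the deterministic evaluator over the recorded stream and
    return every event whose recorded decision diverges from the
    recomputed one -- i.e. every improperly authorised action."""
    return [event for event in decision_stream
            if evaluate(policy, event["action"]) != event["decision"]]
```

An empty result from `replay_audit` is the verifiable historical record in miniature: the auditor has not trusted the system's assertion that governance occurred, but has reproduced every decision independently.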
Limitations and Open Questions
This paper describes the architecture’s structure and invariants. Several important questions remain open:
- Governance plane security. The governance plane itself must be secured against compromise. If the evaluation layer is subverted, the invariant is void. The architecture does not yet specify its own protection model.
- Cross-action composition. Individually permitted actions may compose into sequences that collectively violate policy. The architecture evaluates actions at the execution boundary but does not yet address cross-action governance within a composed workflow.
- Fail-closed availability cost. The fail-closed principle, while architecturally correct, imposes engineering cost. Governance-plane reliability must be high enough that availability is not sacrificed in practice.
- Canonicalisation. Two different representations of semantically equivalent actions must produce the same canonical form. The specification of that canonicalisation is a prerequisite for deterministic evaluation.
- Policy correctness. Deterministic evaluation guarantees that every action is assessed consistently under the stated policy, but does not guarantee that the policy itself is correct. A deterministically evaluated bad policy produces reproducible but wrong decisions. The architecture ensures governance is verifiable, not that it is wise.
These are engineering and specification challenges, not architectural flaws, but they must be addressed before the architecture can be considered complete.
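The canonicalisation question, in particular, is concrete enough to sketch. One plausible scheme for JSON-shaped actions, offered here as an assumption rather than a specification, is to sort keys, strip insignificant whitespace, and NFC-normalise strings, so that semantically equivalent representations yield identical bytes and therefore identical hashes:

```python
import json
import unicodedata

def canonicalise(action: dict) -> bytes:
    """One candidate canonical form (illustrative, not a specification):
    keys sorted, no insignificant whitespace, strings NFC-normalised."""
    def norm(value):
        if isinstance(value, str):
            return unicodedata.normalize("NFC", value)
        if isinstance(value, dict):
            return {norm(k): norm(v) for k, v in value.items()}
        if isinstance(value, list):
            return [norm(v) for v in value]
        return value
    return json.dumps(norm(action), sort_keys=True,
                      separators=(",", ":"), ensure_ascii=True).encode()

# Two representations that differ only in key order canonicalise identically.
a = {"tool": "payments", "params": {"amount": 10, "currency": "EUR"}}
b = {"params": {"currency": "EUR", "amount": 10}, "tool": "payments"}
assert canonicalise(a) == canonicalise(b)
```

Even this small sketch leaves real questions open (number representation, field aliasing, nested binary payloads), which is precisely why the canonicalisation specification is a prerequisite for deterministic evaluation rather than an afterthought.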
Conclusion
The governance challenges of autonomous systems are not new. They are the convergence of longstanding ideas in security and systems design. The confused deputy problem revealed the dangers of ambient authority. Capability-based security made authority explicit and portable. Distributed systems engineering demonstrated that governance at scale requires structural separation from execution.
These ideas converge on a single architectural requirement: an execution-governance control plane that evaluates every proposed action at the execution boundary using deterministic logic, producing tamper-evident evidence for every decision. This architecture transforms governance from a collection of policies and monitoring tools into a structural property of the system itself.
Every previous generation of computing infrastructure encountered this inflection: the moment when the scale and autonomy of the system exceeded the capacity of embedded controls, and governance became an independent architectural layer. Identity infrastructure, network control planes, payment authorisation networks — each emerged when the cost of ungoverned action became structurally unacceptable. Autonomous systems have reached that threshold. The execution-governance control plane is the architectural response.
References
Anderson, J.P. (1972). Computer Security Technology Planning Study. Technical Report ESD-TR-73-51, Air Force Electronic Systems Division.
Dennis, J.B. & Van Horn, E.C. (1966). “Programming Semantics for Multiprogrammed Computations.” Communications of the ACM, 9(3), 143–155.
Hardy, N. (1988). “The Confused Deputy: (or why capabilities might have been invented).” ACM SIGOPS Operating Systems Review, 22(4), 36–38.
Lampson, B.W. (1974). “Protection.” ACM SIGOPS Operating Systems Review, 8(1), 18–24.
Klein, G., Elphinstone, K., Heiser, G., Andronick, J., Cock, D., Derrin, P., Elkaduwe, D., Engelhardt, K., Kolanski, R., Norrish, M., Sewell, T., Tuch, H. & Winwood, S. (2009). "seL4: Formal Verification of an OS Kernel." Proceedings of the 22nd ACM Symposium on Operating Systems Principles (SOSP '09), 207–220.
Miller, M.S. (2006). Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control. PhD dissertation, Johns Hopkins University.
Saltzer, J.H. & Schroeder, M.D. (1975). “The Protection of Information in Computer Systems.” Proceedings of the IEEE, 63(9), 1278–1308.
Schneier, B. & Kelsey, J. (1999). “Secure Audit Logs to Support Computer Forensics.” ACM Transactions on Information and System Security (TISSEC), 2(2), 159–176.