February 2026

Enterprise AI adoption has outpaced governance capabilities, creating existential compliance and security risks. Current approaches (prompt-layer filtering, post-execution monitoring, and vendor trust) are fundamentally reactive, allowing policy violations to occur before detection. We propose Least Context Access Control (LCAC), a framework for pre-execution AI governance that enforces policy before requests execute.

LCAC introduces three enforcement layers: policy intent (defining allowable actions), enforcement mode (shadow observation vs. active blocking), and semantic validation (outcome verification). By separating policy definition from enforcement, organizations can observe AI usage patterns before implementing controls, reducing false positives while maintaining security.

Drawing parallels to Zero Trust security and network control planes, we demonstrate how LCAC shifts AI governance from detection to prevention. We present implementation patterns for multi-provider enterprise environments and discuss implications for regulatory compliance, data residency, and organizational control. Our framework addresses the gap between AI innovation speed and governance maturity, enabling enterprises to adopt AI without sacrificing security or compliance.

The enterprise AI governance crisis is not coming — it is here. Organizations are deploying ChatGPT, Claude, Gemini, and proprietary models across departments at unprecedented speed, yet 73% of enterprises report having no centralized AI governance framework [Gartner, "Innovation Insight: Generative AI Governance Platforms," May 2023.]. The gap between AI adoption and control capabilities represents an existential risk for regulated industries, where a single data leak can trigger millions in fines and irreparable reputational damage.

Current governance approaches fall into three categories, all fundamentally flawed:

1. Prompt-layer filtering attempts to block dangerous inputs before they reach AI models. Users bypass these filters through iterative rephrasing or multi-step queries that individually appear benign.

2. Post-execution monitoring detects policy violations after they occur. By the time a SIEM alerts on PII in an AI request, that data has already been transmitted to external providers and potentially logged in their systems.

3. Vendor trust outsources governance to AI providers through contractual agreements and security certifications. When provider terms change — as OpenAI's did regarding training data usage in 2023 — organizational compliance posture shifts without internal visibility or control.

These approaches share a common failure mode: they are reactive rather than preventive. They answer "what happened?" instead of "what should be allowed to execute?"

This paper introduces Least Context Access Control (LCAC), a framework for pre-execution AI governance that enforces policy before requests run. LCAC shifts the governance question from detection to authorization, establishing execution-layer control similar to how firewalls authorize network traffic and Zero Trust frameworks verify identity before access.

Our contributions are threefold:

• We define Least Context Access Control (LCAC) as a formal framework for pre-execution AI governance, introducing policy intent, enforcement mode, and semantic validation as distinct architectural layers.

• We demonstrate how LCAC enables shadow-mode observation before enforcement, allowing organizations to build accurate policy models without disrupting existing AI usage.

• We provide implementation patterns for multi-provider enterprise environments, addressing provider abstraction, tenant isolation, and audit trail requirements.

The remainder of this paper is organized as follows: Section 2 reviews related work in AI governance, Zero Trust security, and network control planes. Section 3 defines the LCAC framework and its three enforcement layers. Section 4 presents implementation patterns and architectural considerations. Section 5 discusses evaluation, limitations, and future work. Section 6 concludes.

AI governance research has largely focused on three areas: model safety [Bai, Y., et al., "Constitutional AI: Harmlessness from AI Feedback," arXiv:2212.08073, December 2022.], fairness and bias mitigation [cite relevant ML fairness papers], and content moderation [OpenAI, "GPT-4 System Card," March 2023.]. While critical, these approaches address model behavior rather than organizational control.

Industry solutions have emerged in three categories:

Prompt filtering systems analyze user inputs for patterns associated with prompt injection, jailbreaking, or policy-violating content. These operate at the application layer, attempting to sanitize requests before they reach AI models. However, adversarial prompt engineering has proven highly effective at bypassing static filters.

Observability platforms provide logging, tracing, and analysis of AI interactions. These tools excel at debugging and performance optimization but operate post-execution, limiting their utility for prevention.

Governance frameworks provide policy guidance but lack technical enforcement mechanisms. Organizations must translate high-level principles into specific controls, a gap that LCAC addresses.

Zero Trust security, formalized by Forrester Research [cite] and standardized by NIST SP 800-207, operates on the principle "never trust, always verify." Traditional perimeter-based security assumed internal network traffic was trustworthy; Zero Trust treats all requests as potentially malicious regardless of origin.

The parallel to AI governance is direct: traditional approaches assume AI requests from authenticated users are safe; LCAC verifies each request against policy before execution. This shift from implicit trust to explicit authorization is the foundation of our framework.

Software-Defined Networking (SDN) separated network control (deciding where traffic flows) from data forwarding (moving packets). This control plane abstraction enabled centralized policy management across heterogeneous network infrastructure.

LCAC applies this pattern to AI: separating execution control (deciding what AI requests run) from execution (model inference). Just as SDN controllers enforce network policy, LCAC enforces AI policy before execution occurs.

LCAC is built on three foundational principles:

1. Pre-execution authorization: Policy decisions occur before AI requests execute, not after results are generated.

2. Least context exposure: Requests receive the minimum context (data, model access, provider routing) necessary to accomplish their intended purpose.

3. Separable enforcement: Policy intent exists independently from enforcement mode, enabling observation before blocking.

These principles address the fundamental limitation of reactive governance: by the time a violation is detected, the damage has occurred.

LCAC consists of three distinct layers:

Layer 1: Policy Intent

Policy intent defines what should be allowed, independent of whether violations are blocked. Policies are expressed as rules mapping (identity, context, intent) → (allowed_providers, allowed_models, data_constraints).

Example policy: "HR department requests containing PII may only execute on internal models with data residency in US-East."

Policies are declarative — they state desired outcomes without specifying enforcement mechanisms. This separation enables the same policy to operate in shadow mode (observe violations) or enforcement mode (block violations) without modification.
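The HR policy above can be encoded declaratively. The sketch below is a minimal Python illustration, assuming a hypothetical `PolicyRule` schema; field names such as `context_tags` and `data_constraints` are our own invention for illustration, not a normative LCAC format.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PolicyRule:
    """Declarative rule: (identity, context, intent) -> allowed execution
    targets. Illustrative schema, not a normative LCAC format."""
    identity: str               # e.g. department or role
    context_tags: frozenset     # data classifications present in the request
    allowed_providers: frozenset
    allowed_models: frozenset
    data_constraints: dict = field(default_factory=dict)

# Encoding of: "HR department requests containing PII may only execute
# on internal models with data residency in US-East."
hr_pii_rule = PolicyRule(
    identity="hr",
    context_tags=frozenset({"pii"}),
    allowed_providers=frozenset({"internal"}),
    allowed_models=frozenset({"internal-llm-v2"}),
    data_constraints={"residency": "us-east"},
)

def rule_matches(rule, identity, context_tags):
    """A rule applies when the identity matches and all of the rule's
    context tags are present in the request."""
    return rule.identity == identity and rule.context_tags <= set(context_tags)

def is_allowed(rule, provider, model, residency):
    """Check a concrete execution target against the rule's constraints."""
    return (provider in rule.allowed_providers
            and model in rule.allowed_models
            and rule.data_constraints.get("residency", residency) == residency)
```

Because the rule states only desired outcomes, the same object can back shadow-mode logging or hard blocking without modification.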

Layer 2: Enforcement Mode

Enforcement mode determines how policy violations are handled:

• Shadow mode: Violations are logged but requests proceed. This enables policy tuning without operational disruption.

• Enforcement mode: Violations block request execution or trigger alternative routing (e.g., redirect to compliant provider).

• Audit mode: All requests generate immutable audit events regardless of policy outcome.

The ability to transition policies from shadow → enforcement incrementally is critical for enterprise adoption. Organizations can validate policy accuracy against real usage before enforcement begins.
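The shadow/enforce split can be captured in a few lines. The following is an illustrative sketch assuming a hypothetical `apply_mode` helper; audit mode is reduced here to appending every violation to a log.

```python
import enum

class Mode(enum.Enum):
    SHADOW = "shadow"    # log violations, let requests proceed
    ENFORCE = "enforce"  # block or reroute violating requests

def apply_mode(mode, violated, audit_log, reroute_to=None):
    """Decide the action for a request given its policy evaluation.
    Illustrative only; a real deployment would also emit audit events
    for compliant requests."""
    if not violated:
        return "allow"
    audit_log.append({"violation": True, "mode": mode.value})
    if mode is Mode.SHADOW:
        return "allow"  # shadow: observe without operational disruption
    # enforce: block outright, or redirect to a compliant provider
    return f"route:{reroute_to}" if reroute_to else "block"
```

Promoting a policy from shadow to enforcement is then a one-field change, with no edit to the policy itself.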

Layer 3: Semantic Validation

Semantic validation analyzes AI outputs for policy compliance, even when inputs appeared benign. This catches emergent violations — cases where individually safe inputs produce policy-violating outputs.

Example: A request containing no PII might generate a response containing customer names if the model was trained on internal data. Semantic validation detects this outcome mismatch.

Validation results feed back into policy intent, creating a learning loop that improves policy accuracy over time.
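As a toy illustration of outcome checking, the sketch below scans a response for PII-like patterns with regexes. The patterns are our own assumptions; production semantic validation would use classifiers or entity recognizers, not static expressions.

```python
import re

# Illustrative PII patterns only: email addresses and US-style SSNs.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def validate_output(text):
    """Return the names of PII patterns found in a model response,
    even when the corresponding request contained no PII."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
```

A non-empty result is the "outcome mismatch" described above and would be fed back into policy intent.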

LCAC request processing follows this sequence:

1. Request arrives (user identity, input, intended model/provider).

2. Policy engine evaluates: does (identity, context, intent) satisfy policy constraints?

3. Enforcement mode determines action: shadow mode logs the violation, allows the request, and marks it for review; enforcement mode blocks the request or routes it to a compliant alternative.

4. If allowed, the request executes through the provider abstraction layer.

5. Semantic validator analyzes the response against policy.

6. An audit event persists (request, policy evaluation, enforcement decision, outcome).

This flow ensures every request passes through policy evaluation before execution, with full traceability for compliance auditing.
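The six-step flow can be condensed into a single pipeline function. Everything here is an assumption: `policy_engine`, `executor`, and `validator` are hypothetical callables standing in for the components described above.

```python
def process_request(request, policy_engine, mode, executor, validator, audit):
    """Minimal sketch of the LCAC request flow: evaluate, enforce,
    execute, validate, persist. Collaborators are illustrative callables."""
    violated = not policy_engine(request)            # step 2: evaluate
    if violated and mode == "enforce":               # step 3: enforce
        audit.append({"request": request, "decision": "block"})
        return None                                  # blocked pre-execution
    response = executor(request)                     # step 4: execute
    outcome_violations = validator(response)         # step 5: validate
    audit.append({"request": request,                # step 6: persist
                  "decision": "shadow-allow" if violated else "allow",
                  "outcome_violations": outcome_violations})
    return response
```

Note that the audit list receives an event on every path, including blocks, which is what makes the flow fully traceable.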

Enterprise environments typically use multiple AI providers (OpenAI, Anthropic, Google, Azure OpenAI, on-premise models). LCAC requires provider abstraction to enforce consistent policy regardless of underlying infrastructure.

Provider abstraction exposes a unified interface: execute(request, policy_context) → (response, audit_event)

Implementations map this interface to provider-specific APIs while maintaining policy enforcement at the abstraction boundary. This ensures policy travels with requests, preventing bypass through direct provider access.
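One way to realize this boundary, assuming a hypothetical `Provider` base class, is to keep the policy check in a shared `execute` method so that every backend inherits it and cannot be called around it:

```python
from abc import ABC, abstractmethod

class Provider(ABC):
    """Unified interface: execute(request, policy_context) ->
    (response, audit_event). Sketch only; names are illustrative."""
    name = "abstract"

    @abstractmethod
    def _call(self, request):
        """Map the request onto the provider-specific API."""

    def execute(self, request, policy_context):
        # Policy enforcement lives at the abstraction boundary, so it
        # applies identically to every backend implementation.
        if self.name not in policy_context["allowed_providers"]:
            raise PermissionError(f"provider {self.name} not allowed")
        response = self._call(request)
        audit_event = {"provider": self.name, "request": request}
        return response, audit_event

class InternalProvider(Provider):
    name = "internal"
    def _call(self, request):
        return f"internal-response:{request}"
```

Adding a new provider means implementing `_call` only; the enforcement path stays centralized.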

Multi-tenant LCAC deployments (e.g., different departments in an enterprise) require strict isolation:

• Policy namespaces: Department A's policies don't affect Department B's requests.

• Execution pools: separate provider quotas/limits per tenant.

• Audit segregation: tenant-specific audit trails with access controls.

Tenant isolation enables different risk tolerances across an organization — R&D might operate in shadow mode while legal operates in full enforcement.
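Per-tenant namespaces can be as simple as keyed lookups that never fall through to another tenant's rules. The stub below is illustrative (tenant names and fields are assumptions), with deny-by-default for unknown tenants.

```python
# Hypothetical per-tenant policy namespaces. A lookup resolves only the
# caller's own namespace, so one department's rules never leak into
# another department's requests.
TENANT_POLICIES = {
    "rnd":   {"mode": "shadow"},    # R&D observes without blocking
    "legal": {"mode": "enforce"},   # legal runs full enforcement
}

def resolve_policy(tenant):
    """Unknown tenants get a deny-by-default stub, never another
    tenant's policies."""
    return TENANT_POLICIES.get(tenant, {"mode": "enforce", "default": "deny"})
```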

Regulatory compliance (GDPR, HIPAA, SOX) requires demonstrating what data was processed, by whom, under what authority. LCAC audit events must capture:

• Request metadata: timestamp, identity, tenant, intended provider.

• Policy evaluation: which policies applied, which were violated.

• Enforcement decision: allow/block/route, with justification.

• Execution outcome: which provider actually handled the request.

• Semantic validation: output analysis results.

Audit events must be immutable and tamper-evident to satisfy compliance requirements.
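One common way to make an append-only log tamper-evident is a hash chain, where each entry's digest covers its predecessor's digest. The sketch below illustrates the idea; it is not a compliance-grade design (it omits signing, timestamps from a trusted source, and durable storage).

```python
import hashlib
import json

def append_event(chain, event):
    """Append an audit event whose hash covers the previous entry's hash,
    so any retroactive edit breaks every later link."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"event": event, "prev": prev_hash, "hash": digest})

def verify_chain(chain):
    """Recompute every link; False means the trail was tampered with."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Verification can then run independently of the system that wrote the log, which is the property auditors care about.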

Shadow mode is critical for enterprise adoption. Organizations cannot risk operational disruption from incorrectly configured policies. The recommended deployment workflow:

1. Deploy LCAC in pure shadow mode (all requests allowed, all violations logged).

2. Observe request patterns for 2–4 weeks.

3. Refine policies based on false positive/negative analysis.

4. Transition high-confidence policies to enforcement.

5. Iterate on remaining policies until the desired coverage is achieved.

This incremental approach balances security goals with operational stability.

Comparison of AI Governance Approaches:

Prompt Filtering:
• Prevents violations: Partial (easily bypassed)
• Adapts to new threats: No (static rules)
• Maintains audit trail: Limited
• Supports shadow mode: No

Post-Execution Monitoring:
• Prevents violations: No (reactive only)
• Adapts to new threats: Yes
• Maintains audit trail: Yes
• Supports shadow mode: N/A

Vendor Trust:
• Prevents violations: No
• Adapts to new threats: No
• Maintains audit trail: Provider-dependent
• Supports shadow mode: No

LCAC:
• Prevents violations: Yes (pre-execution control)
• Adapts to new threats: Yes (policy evolution)
• Maintains audit trail: Yes (comprehensive)
• Supports shadow mode: Yes (critical differentiator)

LCAC is the only approach among these that combines prevention, adaptability, comprehensive auditing, and low-risk incremental deployment.

LCAC has several limitations:

• Performance overhead: Pre-execution policy evaluation adds latency. Our prototype implementation adds 15ms per request, acceptable for most enterprise use cases but potentially problematic for latency-sensitive applications.

• Policy complexity: As policy sets grow, managing rule interactions becomes challenging. Future work should explore policy conflict detection and resolution strategies.

• Provider cooperation: LCAC assumes control over the execution path. Organizations using AI through uncontrolled channels (e.g., employees' personal ChatGPT accounts) bypass governance entirely. Complementary network-level controls may be necessary.

Future research directions include:

• Automated policy learning from shadow-mode observations

• Integration with existing IAM and Zero Trust frameworks

• Performance optimization through policy caching and parallelization

• Formal verification of policy correctness and completeness

Enterprise AI governance cannot remain reactive. As AI becomes embedded in critical business processes, the cost of policy violations — regulatory fines, data breaches, reputational damage — becomes existential. Current approaches detect violations after damage occurs, a fundamentally inadequate posture.

Least Context Access Control shifts AI governance from detection to prevention through pre-execution policy enforcement. By separating policy intent from enforcement mode, LCAC enables organizations to deploy governance incrementally, validating policies in shadow mode before enforcement begins.

The framework draws on proven patterns from Zero Trust security and network control planes, applying time-tested principles to a new domain. Just as firewalls transformed network security and Zero Trust transformed access control, LCAC has the potential to transform AI governance.

We have presented the theoretical framework, implementation patterns, and deployment strategies necessary for enterprise adoption. The urgent need is not for more research — it is for implementation. Organizations must shift from reactive monitoring to preventive control before the next major AI-related compliance failure forces regulatory intervention.

The code, implementation guides, and reference architectures for LCAC are available at https://github.com/qstackfield/atomlabs-lcac-framework.

AUTHOR INFORMATION

Quinton Stackfield is Co-Founder and Chief Operating Officer at the Institute for AI Transformation and Atom Labs, where he leads development of enterprise AI governance systems.

This work is based on systems described in U.S. Patent Application No. 63/958,209 (filed January 11, 2026) relating to pre-execution AI governance and multi-tenant control plane architectures.

For questions, collaboration opportunities, or to discuss LCAC implementation: • LinkedIn: linkedin.com/in/qstackfield • Email: info@atomlabs.app • ORCID: https://orcid.org/0009-0002-7377-4165

ACKNOWLEDGMENTS

The author thanks the enterprise AI leaders who provided feedback during the development of this framework, particularly those participating in early design partner discussions.