Strategic Intelligence Series

Runtime Authority Framework

DAL-X

Kevin Moore  ·  Jochanni Labs

Most enterprise AI deployments have a problem: AI produces output, someone acts on it, and nothing in between confirms the decision was authorized. DAL-X is the framework being built to close that gap.

Decision Authority Layer  ·  Runtime Execution Control

01  ·  Definition

What DAL-X Means

DAL-X stands for Decision Authority Layer. The X marks the execution point — the moment at which authority must be active, or the decision is ungoverned.

DAL-X is not a product category, a compliance framework, or a monitoring dashboard. It is the foundational work for a discipline most enterprises do not yet have a name for: the authority layer that sits between AI output and the decisions that follow.

Kevin Moore and Jochanni Labs are developing DAL-X as the primary research initiative defining this discipline — the terms, the architecture, and the requirements — before regulatory pressure or an organizational incident makes the work unavoidable.

D · Decision: The governed object — not the model
A · Authority: Named, mapped, enforced before action
L · Layer: Active infrastructure, not policy document
X · Execution Point: The moment governance must be present

02  ·  Origin

Why DAL-X Exists

AI systems are already participating in consequential organizational decisions. They write contracts, classify risk, route customers, recommend exceptions, and approve actions — increasingly at machine speed, inside workflows that were designed for humans. The authority structures governing those decisions have not kept up.

Every enterprise has formal authority requirements for its most consequential decisions: who can approve what, at what level, with what documentation. These requirements exist in policy documents and audit protocols. What most enterprises do not have is the active layer that enforces those requirements at the moment AI-influenced work is about to move.

The problem is not a lack of policies. Policies without active enforcement are intent, not governance. DAL-X names the absent layer and defines what it takes to close the gap.

Without Runtime Authority

AI Output  →  authority gap  →  Human Action  →  Consequence

AI output moves directly toward action. Nothing in the chain confirms the decision was authorized before it happened.

With DAL-X

AI Output  →  Runtime Authority  →  Authorized Action

Authority is active before the decision moves. Scope, alignment, and sign-off must all be confirmed first.

03  ·  Architecture

The Runtime Authority Problem

The runtime authority problem is straightforward. AI output enters an organization's workflow. From there it moves toward action — through human review, automated routing, manager approval, or direct execution. At every step, the question of whether the AI's involvement was authorized goes unasked and unanswered.

Runtime authority does not mean slowing every decision. It means that where AI participates, the organization has defined and is actively enforcing the authority requirements at the moment a decision takes effect. Scope: was AI authorized to be here? Alignment: does this decision follow the rules in place? Sign-Off: has the right person accepted responsibility?

AI Output  →  Trigger Logic  →  Authority Routing  →  Execution Gate  →  Decision Record  →  Authorized Action

AI Output: Recommendation · Classification · Action proposal
Trigger Logic: Consequence level · AI participation flag · Risk threshold
Authority Routing: Reviewer mapping · Level match · Escalation path
Execution Gate: Scope · Alignment · Sign-Off
Decision Record: Before action · Evidence capture · Authority stamp
Authorized Action: Governed · Traceable · Defensible

The DAL-X authority chain — AI output through runtime governance to authorized action. Each node is a required layer; removing any one breaks the chain.

04  ·  Mechanism

Trigger Logic

Trigger Condition Evaluation

AI participation detected in decision path  →  FIRES
Consequence level at or above defined threshold  →  FIRES
Risk classification requires supervisory review  →  FIRES
Authority level requirement present  →  FIRES
No AI participation · Below consequence threshold  →  PASS

Governance Active — Authority Routing Engaged

Trigger logic is the mechanism that flags AI participation so governance engages. Without it, AI output enters the workflow silently — no different from any other input, carrying no signal that authority requirements apply.

A trigger fires when a defined condition is met: the stakes exceed a threshold, AI involvement is detected, the risk level requires escalation, or the decision falls into a domain with specific authority requirements. When the trigger fires, governance is no longer a policy on paper — it becomes an active requirement before the decision can proceed.

Trigger logic is the boundary between AI governance as policy and AI governance as enforcement.
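The firing conditions above reduce to a single predicate over the decision context. A minimal sketch in Python, where the dataclass shape, the field names, and the threshold value are all illustrative assumptions rather than anything specified by DAL-X:

```python
from dataclasses import dataclass

# Illustrative assumption: the tier at or above which governance engages.
CONSEQUENCE_THRESHOLD = 2

@dataclass
class DecisionContext:
    ai_participation: bool             # AI detected in the decision path
    consequence_level: int             # organization-defined stakes tier
    supervisory_review_required: bool  # risk classification demands escalation
    authority_level_required: int      # 0 = no explicit authority requirement

def trigger_fires(ctx: DecisionContext) -> bool:
    """True when any firing condition is met; False only on the pass path."""
    return (
        ctx.ai_participation
        or ctx.consequence_level >= CONSEQUENCE_THRESHOLD
        or ctx.supervisory_review_required
        or ctx.authority_level_required > 0
    )
```

Note that the only pass path mirrors the table: no AI participation, stakes below threshold, and no other requirement present.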

05  ·  Mechanism

Authority Routing

Authority routing maps each triggered decision to the right reviewer — the person or body with the actual authority level required to own that decision at its specific stakes.

The routing is specific. A decision requiring senior compliance review cannot go to a department manager. A decision at regulatory exposure levels cannot be approved by someone whose authority does not reach that high. The routing matches the decision to the person, not the other way around.

Authority routing is what turns human oversight into an actual control. Most human-in-the-loop implementations do not satisfy this requirement. They have a human. They do not have the right human, at the right level, for the right decision.

Authority Level Map

Executive Authority: High consequence · Regulatory exposure
Supervisory Authority: Elevated consequence · Policy exception
Operational Authority: Standard consequence · Defined scope
Automated Path: Below threshold · Pre-authorized scope

Consequence level determines routing tier. Misrouting is a governance failure — not a process error.
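The Authority Level Map can be read as a pure function from consequence tier to routing destination. A sketch under assumed integer tiers (the tier numbers and boundaries below are illustrative, not part of the framework):

```python
# Tier thresholds are assumed for illustration; the names mirror the map above.
ROUTING_TIERS = [
    (3, "Executive Authority"),    # high consequence / regulatory exposure
    (2, "Supervisory Authority"),  # elevated consequence / policy exception
    (1, "Operational Authority"),  # standard consequence / defined scope
]

def route_authority(consequence_level: int) -> str:
    """Map a consequence tier to the reviewer level required to own it."""
    for threshold, tier in ROUTING_TIERS:
        if consequence_level >= threshold:
            return tier
    return "Automated Path"  # below threshold / pre-authorized scope
```

The function is total: every consequence level resolves to exactly one tier, so a triggered decision can never be left unroutable.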

06  ·  Mechanism

Execution Gating

Execution Gate — Three Concurrent Requirements

Scope

The AI was authorized to be involved in this type of decision. It did not exceed what it was permitted to do.

Alignment

The decision follows the rules the organization has established — the policies, thresholds, and review requirements currently in force.

Sign-Off

The right person — with the authority level this decision requires — has reviewed it and taken ownership before it moves.

Gate Open — Execution Authorized

The execution gate is the point at which all three requirements must be satisfied at the same time before a decision moves. Scope is confirmed, alignment is validated, and sign-off is recorded from the right person.

The gate is not a sequential checklist. All three requirements must be active at the same moment. A decision that clears scope and alignment but lacks sign-off has not passed the gate. A decision where the person who signed off lacked the authority for the stakes involved has not passed the gate either.

Execution gating is what converts policy into enforcement. The gate does not close because a policy exists. It closes only when the specific requirements for this specific decision have been satisfied.
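Because the gate is a conjunction evaluated at one moment rather than a sequential checklist, it can be expressed as a single boolean expression. A hedged sketch; the `SignOff` shape and the numeric level comparison are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignOff:
    reviewer: str
    authority_level: int  # the level this reviewer actually holds

def gate_open(scope_confirmed: bool,
              alignment_validated: bool,
              sign_off: Optional[SignOff],
              required_level: int) -> bool:
    """All three requirements must hold together; any missing one keeps the gate closed."""
    return (
        scope_confirmed
        and alignment_validated
        and sign_off is not None
        and sign_off.authority_level >= required_level  # right person, right level
    )
```

A sign-off from a reviewer below the required level fails the gate exactly as the text describes: a human was present, but not the right human for the stakes.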

07  ·  Mechanism

Decision Recording

Decision recording creates the authority record before execution moves. The record is not a log created after the fact. It is proof — documenting what authority was in place, who accepted responsibility, what requirements were met, and when all of that was confirmed, before the decision moved.

There is a real difference between recording authority before a decision moves and logging what happened after. A record made after the fact can explain events. It cannot prove the decision was authorized while there was still a chance to stop it. Decision recording makes that proof available because it happens first.

The record proves governance was active before — not assembled after a challenge. When a decision is questioned by an auditor, a customer, a regulator, or a court, the record is what shows it was authorized when it counted.

Pre-Execution Decision Record

Decision Path ID:       DG-2026-04871
AI Participation:       Flagged · Classification model
Consequence Level:      Tier 2 — Supervisory review required
Scope Status:           Confirmed within boundaries
Alignment Status:       Rules and policies satisfied
Authorized Sign-Off:    Supervisory authority — confirmed
Record Captured:        Before execution
Authorization Status:   AUTHORIZED
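The before/after distinction is mechanically enforceable: construct and timestamp the record first, and make the executor refuse to run without an authorized record. A minimal sketch whose field names echo the sample record above; everything else is illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_path_id: str
    ai_participation: str
    consequence_level: str
    scope_confirmed: bool
    alignment_satisfied: bool
    sign_off_authority: str  # empty string = no sign-off recorded
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )  # stamped at record creation, necessarily before execution

    @property
    def authorized(self) -> bool:
        return (self.scope_confirmed
                and self.alignment_satisfied
                and bool(self.sign_off_authority))

def execute(record: DecisionRecord, action):
    """Refuse to act unless an authorized record already exists."""
    if not record.authorized:
        raise PermissionError(f"{record.decision_path_id}: not authorized, execution blocked")
    return action()  # the record's timestamp precedes this call by construction
```

The ordering guarantee comes from control flow, not from logging discipline: there is no code path that reaches `action()` before the record exists.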

08  ·  Mechanism

Audit Evidence

Audit evidence is the record that proves governance was working before anything went wrong. It answers the questions any challenge will produce: Was AI involved in this decision? Was that involvement authorized? Who reviewed it and under what authority? Were the requirements satisfied? Was all of this documented before the decision moved — or assembled afterward?

There is a real difference between instrumentation and evidence. Instrumentation records what happened. Evidence proves what was authorized. An organization with monitoring tools but no authority record created before decisions move knows more about what went wrong than it can prove about what was permitted.

01 · AI Participation Record: which system participated, at which step, with what output
02 · Trigger Event Log: what condition fired, at what threshold, with what timestamp
03 · Sign-Off Authority Record: who reviewed, at what authority level, with what mandate
04 · Gate Clearance Record: scope, alignment, and sign-off — all confirmed before the decision moved
05 · Pre-Execution Timestamp: the proof point — evidence that the record precedes the action
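The five record types can be treated as a completeness contract: an evidence bundle is defensible only when every item is present and the pre-execution timestamp precedes the action. A sketch with assumed key names:

```python
from datetime import datetime, timezone

# Assumed key names; one per evidence type in the list above.
REQUIRED_EVIDENCE = {
    "ai_participation_record",
    "trigger_event_log",
    "sign_off_authority_record",
    "gate_clearance_record",
    "pre_execution_timestamp",
}

def evidence_complete(bundle: dict, executed_at: datetime) -> bool:
    """Defensible only if all five items exist and the record precedes the action."""
    if not REQUIRED_EVIDENCE <= bundle.keys():
        return False  # instrumentation without the full authority record
    return bundle["pre_execution_timestamp"] < executed_at
```

A bundle that fails this check is instrumentation, not evidence: it may describe what happened, but it cannot prove what was authorized beforehand.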

09  ·  Application

AI-Influenced Workflows

DAL-X applies wherever AI participates in a decision that carries real consequences — whether AI is acting directly or shaping a human decision. Consequences attach to the decision, not to whether a human was technically in the process.

Two workflow types come up in practice. When AI is acting autonomously — executing tasks directly — it needs a gate that stops it before any consequential action without a confirmed authority structure in place. When a human is in the lead but AI is shaping the inputs, the human reviewer needs to hold the authority level that the AI's involvement creates — not just the level required for the same decision made without AI.

Autonomous AI Workflow

AI takes direct action within defined scope

1 AI receives task within assigned scope
2 Trigger evaluates the stakes of the proposed action
3 Authority routing activates if threshold is met
4 Execution gate requires scope, alignment, and sign-off
5 Authority record captured — action authorized to proceed

Human-Led AI-Assisted Workflow

AI shapes the inputs; a person acts on them

1 AI produces recommendation, classification, or exception path
2 Trigger flags AI involvement and the stakes involved
3 Authority routing maps to reviewer with matching authority level
4 Reviewer evaluates with required context, information, and clear mandate
5 Sign-off recorded — decision authorized before action is taken
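Both workflows share the same spine: trigger, route, gate, record, then act. A self-contained end-to-end sketch under the same kind of illustrative assumptions as the mechanism descriptions (nothing here is a published DAL-X API):

```python
from datetime import datetime, timezone

CONSEQUENCE_THRESHOLD = 2  # assumed org-defined tier at which governance engages

def run_governed(decision: dict, reviewer_level: int, action):
    """Trigger → route → gate → record → act; any failed step blocks execution."""
    # 1. Trigger: AI involvement or stakes at/above threshold engage governance.
    fires = (decision["ai_participation"]
             or decision["consequence_level"] >= CONSEQUENCE_THRESHOLD)
    if not fires:
        return {"status": "pass-through", "result": action()}  # pre-authorized scope
    # 2. Route: the decision's tier sets the authority level its reviewer must hold.
    required_level = decision["consequence_level"]
    # 3. Gate: scope, alignment, and an adequate sign-off must hold together.
    if not (decision["scope_confirmed"]
            and decision["alignment_validated"]
            and reviewer_level >= required_level):
        return {"status": "blocked"}
    # 4. Record: authority is captured before the action runs.
    record = {"authorized_at": datetime.now(timezone.utc),
              "reviewer_level": reviewer_level}
    # 5. Act: execution happens only after the record exists.
    return {"status": "authorized", "record": record, "result": action()}
```

The same function serves both workflow types: for autonomous AI the `action` is the system's own execution step, while for human-led work it is the human's downstream action on the AI-shaped input.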

"The runtime authority problem is not a technology problem. It is an organizational design problem — the gap that opens when enterprises deploy AI without building the layer that determines what those systems are allowed to do."

— DAL-X Framework  ·  Jochanni Labs