01 · The Work
What This Work Is
Kevin Moore's work is the development of Decision Governance as an organizational discipline — the runtime authority layer that governs AI-influenced decisions before they produce real consequences. The work is not commentary on AI. It is not technology strategy or deployment advice. It is the definition of a discipline that does not yet have a name in most enterprises.
The question the work is built to answer is specific: when AI participates in a decision, who says that decision was authorized to move forward? Most enterprise AI governance frameworks today cannot answer this. The frameworks that exist were built to manage AI systems — to inventory them, classify their risk, and document applicable policies. None of that answers the authority question at the moment it matters, which is before the decision moves.
Decision Governance closes that gap. Moore is developing the framework — its terms, its architecture, its operational mechanics — through Jochanni Labs and the Decision Governance Strategic Intelligence Series. The work is published openly so any organization that needs it can engage with it directly.
The work covers the specific operational requirements that runtime authority demands:
- Trigger logic that flags AI participation so governance engages at the right moment
- Authority routing that maps each decision to the reviewer with the right authority level
- Execution gating that enforces scope, alignment, and sign-off simultaneously before action
- Decision recording that captures the authority record before the decision moves
- Audit evidence that proves governance was active, not assembled after the fact
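These requirements can be sketched as a single runtime gate. The sketch below is purely illustrative: the names (`Decision`, `gate`, `AUTHORITY_ROUTES`) are assumptions for this example, not terms or APIs from the Series or any DAL-X specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Decision:
    decision_type: str
    ai_involved: bool             # trigger logic: flags AI participation
    within_scope: bool            # scope: AI authorized for this decision type
    policy_aligned: bool          # alignment: follows the organization's rules
    signed_off_by: Optional[str]  # sign-off: accountable reviewer, or None
    record: dict = field(default_factory=dict)

# Hypothetical routing table: decision type -> reviewer role it routes to.
AUTHORITY_ROUTES = {"credit_limit_change": "risk_officer"}

def gate(d: Decision) -> bool:
    """Return True only if the decision is authorized to move."""
    if not d.ai_involved:
        return True  # trigger not fired; the normal approval path applies
    authorized = (
        d.within_scope
        and d.policy_aligned
        and d.signed_off_by is not None
    )
    # Decision recording: capture the authority record BEFORE the
    # decision moves, whether it passes or fails the gate.
    d.record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "required_reviewer": AUTHORITY_ROUTES.get(d.decision_type),
        "scope": d.within_scope,
        "alignment": d.policy_aligned,
        "sign_off": d.signed_off_by,
        "authorized": authorized,
    }
    return authorized
```

The record is written pass or fail, at decision time: that is what makes the audit evidence proof that governance was active, rather than a narrative assembled afterward.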
These are not theoretical requirements. They are the operational gaps inside every enterprise AI deployment that lacks a runtime authority layer. The sectors with the highest AI deployment rates — financial services, healthcare, legal operations, insurance, government — are also the sectors with the clearest legal and institutional consequences when ungoverned AI-influenced work produces harm, error, or challenge.
The work is being done now because the moment for defining this category is before regulatory pressure, organizational incident, or competitive disadvantage forces the issue. Categories defined in crisis are defined poorly. This work is an attempt to define this one correctly.
02 · Research Thesis
The Execution Gap
The Execution Gap is the space that opens when enterprises deploy AI without building the authority layer that determines what those systems are allowed to do. It is not a technology failure. It is an organizational design failure.
Every serious organizational decision carries authority requirements. These requirements exist in policy, in control frameworks, in approval hierarchies, in escalation protocols. They exist because organizations have learned, at cost, that consequential work cannot depend on informal confidence. The same lesson has not been applied to AI-influenced decisions — not because the lesson is wrong, but because the authority structure required to apply it at machine speed does not yet exist in most enterprises.
The Execution Gap is the name Moore gave this void. AI systems produce output. Organizations act on that output. Between production and action, no active governing structure determines whether the decision was authorized to proceed. An audit after the fact can explain what happened. It cannot authorize what already moved.
Closing the Execution Gap requires building the authority structure before AI-influenced work reaches the execution point — not documenting policies that apply in theory, but creating the active layer that enforces authority requirements in practice. That is what Decision Governance addresses, and what DAL-X is the emerging framework for.
What the gap is not
A technology failure. A model performance problem. A risk classification gap. A policy documentation shortage. These are adjacent problems. They are not the Execution Gap.
What the gap is
The absence of a runtime authority structure at the point where AI-influenced work is about to move. The organizational design failure of governing AI at the policy level rather than at the execution point.
03 · The Discipline
Decision Governance
Decision Governance is the discipline of establishing and enforcing authority over AI-influenced decisions before they produce real consequences. Moore's work defines it as a distinct organizational category — separate from model governance, risk management, compliance preparation, and AI ethics.
The distinction is structural. Model governance addresses the AI system. Risk management addresses the threat landscape. Compliance preparation addresses regulatory alignment. Decision Governance addresses the authority question at the execution point: was this decision authorized to move forward, under whose authority, with what evidence, and with a record that proves governance was active before it happened?
Moore frames Decision Governance around three requirements that must be in place before a decision moves. Scope: the AI was authorized to be involved in this type of decision. Alignment: the decision follows the rules the organization has established. Sign-Off: the right person — with the authority level this decision requires — has reviewed it and taken ownership. All three must be active simultaneously. Two of three is not partial governance — it produces the appearance of control without the substance.
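The all-three-or-nothing requirement is a conjunction, not a score or a majority vote. A minimal sketch (the function name is illustrative, not from the Series) makes the point by enumeration:

```python
from itertools import product

def authorized(scope: bool, alignment: bool, sign_off: bool) -> bool:
    """A decision moves only when all three controls hold simultaneously."""
    return scope and alignment and sign_off

# Enumerate every combination of the three controls: any single
# missing control blocks execution, so 7 of 8 combinations fail.
blocked = [c for c in product([True, False], repeat=3) if not authorized(*c)]
print(len(blocked))  # 7
```

Two of three passing is indistinguishable from zero of three at the gate, which is exactly the "appearance of control without the substance" the framework warns against.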
Decision Governance
The discipline of establishing and enforcing authority over AI-influenced decisions before they produce real consequences.
Distinguished From
- Model Governance: governs the AI system, not the decision
- Risk Management: addresses the threat landscape, not execution authority
- Compliance Preparation: aligns to regulation; does not enforce runtime authority
- AI Ethics: addresses values, not operational authority enforcement
04 · Publication
Decision Governance Strategic Intelligence Series
The Decision Governance Strategic Intelligence Series is the primary publication through which Moore is developing and releasing the Decision Governance framework. Twenty briefings, organized across four phases, build the complete framework from first principles through deployment reality and category defense. Each briefing is a substantive piece of framework development — not commentary, not opinion, not news.
The Series is published by Jochanni Labs. It is open access — available to any organization that needs it. The decision to publish the framework openly is itself an architectural one: category definition requires that the vocabulary, the concepts, and the operational requirements be available before the market attempts to fill the category with insufficient substitutes.
20 Briefings · 4 Phases · ~80k Words · 1 Discipline Defined
Phase 1
Category Establishment
Briefings 1 – 5
Establishes Decision Governance as a distinct organizational discipline — defining the Execution Gap, the structural void left by absent governance, and the role of the runtime authority principal.
Phase 2
Operational Control Mechanics
Briefings 6 – 12
Specifies the operational architecture of Decision Governance: the three control points, scope containment, alignment verification, sign-off layers, governance velocity, and the failure modes that break these systems under load.
Phase 3
Deployment Reality
Briefings 13 – 17
Addresses the conditions organizations encounter when deploying Decision Governance — organizational resistance, principal hierarchy conflicts, sector-specific constraints, vendor opacity, and scale degradation.
Phase 4
Category Defense and Inevitability
Briefings 18 – 20
Positions runtime authority as structurally inevitable — driven by regulatory convergence, competitive pressure, and the compounding organizational risk created by AI systems operating without governance infrastructure.
05 · Institution
Jochanni Labs
Moore founded Jochanni Labs as the research institution through which the framework for Decision Governance is developed. The Labs is not a consultancy. It is not a product company. It is the research organization that holds and publishes the definitive work on the runtime authority discipline.
The institutional form of Jochanni Labs — a research lab rather than a vendor, a consultancy, or a startup — is deliberate. Category-defining work requires intellectual credibility and independence. The work needs to be seen as framework development, not as sales infrastructure for a product that happens to use the same vocabulary.
The Decision Governance Strategic Intelligence Series is the primary output of Jochanni Labs. Additional research, framework extensions, and sector-specific applications of Decision Governance will be published through the same institutional channel as the discipline develops.
06 · Category Formation
Why This Work Is Being Done Now
Categories in organizational management and technology governance are typically defined in one of two ways: ahead of necessity, through deliberate intellectual development, or in reaction to crisis, when the absence of a category has produced consequences severe enough to force naming. The former produces better categories. The latter produces categories shaped by the incidents that forced them into existence — which means the vocabulary reflects the failure mode rather than the structural requirement.
Decision Governance is being defined ahead of the crisis that would otherwise define it. The incidents that will force enterprises to confront the Execution Gap have not yet occurred at scale — though the conditions for them are being created by every AI system deployed without a runtime authority layer. When those incidents occur, the organizations that have engaged with Decision Governance as a framework will be better positioned to respond. The organizations that have not will be defining governance in the aftermath.
Moore's work is positioned explicitly as category-originating research. The Decision Governance Strategic Intelligence Series is not responding to an existing category — it is building one. The terms, the operational requirements, the architectural concepts, and the governance disciplines the Series defines are intended to be the reference framework for Decision Governance as the discipline develops and as organizations that need it begin building the authority structures it requires.
"The organizations that engage with this framework now are building the governance foundation that the next phase of AI-assisted enterprise operations will require. Runtime authority is not optional infrastructure. It is the layer that makes AI-assisted execution governable."
— Decision Governance Strategic Intelligence Series, Briefing 07