Core Position
Model drift focuses on whether the AI system changes. Authority drift focuses on whether the decision path changes faster than the enterprise control model can govern it.
Enterprise AI risk is often discussed through model drift. The concept is useful, but it is incomplete. Model drift asks whether model behavior, performance, inputs, or outputs have shifted over time. Authority drift asks a different question: has the AI-influenced decision path started moving outside the authority structure the enterprise believes still governs it?
This is the next gap in enterprise AI control. A model can continue producing acceptable outputs while the decision process around those outputs weakens. Review can become casual. Escalation can be skipped. Approval rights can be stretched. Overrides can become routine. Teams can start relying on AI-generated classifications, summaries, rankings, or recommendations in ways the original governance model did not authorize.
Model drift may be visible in metrics. Authority drift often appears in behavior before it appears in metrics.
NIST’s AI Risk Management Framework describes AI systems as sociotechnical systems and recognizes that AI risks can emerge from technical factors combined with how a system is used, who operates it, and the social context where it is deployed. This framing confirms a practical control reality: AI risk does not live only inside the model. It also lives inside the operating environment around the model.1
Authority drift sits inside that operating environment. It begins when the institution’s formal authority model and the actual decision behavior around AI start separating. The policy may still say a specific manager approves a decision. The workflow may still show a review step. The governance record may still name an owner. The live decision path may already be operating differently because AI output has become trusted, accelerated, normalized, or quietly treated as the real decision anchor.
A credit team may still require human approval for a lending decision, while the human reviewer begins treating the AI-generated risk tier as the default answer. A hiring process may still require recruiter judgment, while the AI ranking becomes the practical screen. A claims process may still preserve supervisory review, while the AI severity score dictates how the file moves. The control model may look intact from a policy view while authority has already migrated toward the machine-influenced path.
This is why drift monitoring cannot remain limited to model behavior. The enterprise also needs to monitor the decision path. Who is relying on the output? Which approval rights changed in practice? Which review steps are being compressed? Which escalation conditions are being ignored? Which overrides are repeated? Which teams are treating AI output as more authoritative than the approved control model allows?
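These questions can be made operational. The sketch below is a minimal illustration, assuming a hypothetical decision log with reviewer, review-time, override, and escalation fields; the field names, structure, and thresholds are assumptions for illustration, not part of any named framework or product. It computes decision-path signals that sit alongside, rather than replace, conventional model metrics.

```python
# Illustrative sketch only: the log format and field names are hypothetical
# assumptions, not a prescribed standard or vendor API.
from dataclasses import dataclass
from statistics import median
from typing import Iterable

@dataclass
class DecisionEvent:
    reviewer_id: str           # who relied on the AI output
    review_seconds: float      # time spent before approving
    overrode_ai: bool          # reviewer departed from the AI recommendation
    escalated: bool            # case was actually escalated
    escalation_required: bool  # policy said escalation should have happened

def decision_path_signals(events: Iterable[DecisionEvent]) -> dict:
    """Summarize decision-path behavior independently of model metrics."""
    events = list(events)
    n = len(events)
    required = [e for e in events if e.escalation_required]
    return {
        "median_review_seconds": median(e.review_seconds for e in events) if n else 0.0,
        "override_rate": sum(e.overrode_ai for e in events) / n if n else 0.0,
        "escalation_skip_rate": (
            sum(1 for e in required if not e.escalated) / len(required)
            if required else 0.0
        ),
        "distinct_reviewers": len({e.reviewer_id for e in events}),
    }
```

Comparing these signals across reporting periods is the point: falling review time, a near-zero override rate, and rising escalation skips can indicate authority drift even when model performance stays within tolerance.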
The EU AI Act requires post-market monitoring for high-risk AI systems under Article 72: providers must establish a system to collect, document, and analyze relevant data on performance throughout the lifetime of the system. The requirement targets high-risk systems, but the operating principle is broader for enterprise control: AI does not become safe just because it passed an initial review. Control has to continue after deployment because use patterns, decision reliance, and institutional behavior can change.2
Authority drift can happen without a formal system change. No new model release is required. No new workflow screen is required. No executive memo is required. A team only has to begin treating AI output as the practical decision authority while the governance record still claims human authority remains intact. This gap preserves the appearance of control while changing the substance of control.
Decision Intelligence gives this issue a stronger frame. A decision is not just a recommendation or output. It carries context, judgment, consequence, execution, feedback, and accountability. If AI begins changing the judgment path, the escalation path, or the approval path, the decision has changed even if the model has not. A mature governance program has to see that movement.
The OECD AI Principles connect accountability to traceability across datasets, processes, and decisions made during the AI system lifecycle. This traceability language is relevant because authority drift is not visible unless the enterprise can trace how AI participation affected the decision path. The organization needs to know which AI-influenced input was used, who relied on it, what authority condition applied, which escalation path was available, and why the action was allowed to proceed.3
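One way to picture that traceability is a decision trace record. The sketch below assumes hypothetical field names and example values; it is not an OECD-defined schema, only one possible shape for the evidence described above.

```python
# Illustrative sketch: one possible decision trace record. Field names and
# example values are assumptions, not an OECD- or regulator-defined schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    decision_id: str
    ai_inputs_used: list[str]   # which AI-influenced inputs were used
    relied_on_by: str           # who relied on them
    authority_condition: str    # approval right or policy clause that applied
    escalation_path: str        # escalation route available at the time
    rationale: str              # why the action was allowed to proceed
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def to_record(self) -> dict:
        """Serialize for an audit log or governance evidence store."""
        record = asdict(self)
        record["decided_at"] = self.decided_at.isoformat()
        return record

# Hypothetical example of a single traced decision.
trace = DecisionTrace(
    decision_id="loan-2024-00123",
    ai_inputs_used=["risk_tier_model_v3"],
    relied_on_by="credit_officer_184",
    authority_condition="policy 4.2: human approval required above tier B",
    escalation_path="credit_committee",
    rationale="Reviewer confirmed the tier independently against bureau data.",
)
```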
Authority drift also explains why dashboards alone can give false comfort. A dashboard may show model performance within tolerance while decision behavior is weakening around the edges. A risk score may stay stable while reviewers stop challenging the output. A workflow may show approvals while the approver no longer exercises independent judgment. A compliance report may show completion while authority has shifted from accountable human judgment to machine shaped momentum.
This is not an argument against AI. Strong AI systems can improve speed, consistency, and operational awareness. The point is that enterprise authority cannot be allowed to migrate silently. If AI output starts carrying decision weight, the institution has to decide whether that weight is permitted, under which conditions, with whose approval, and with what record.
A Decision Governance program should treat authority drift as a first-class control risk. The enterprise should monitor whether AI participation is changing who decides, how decisions move, which controls fire, which reviewers intervene, which overrides repeat, and which evidence proves the decision path remained authorized. Model governance remains necessary. Decision Governance closes the gap that model governance does not fully reach.
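A minimal authority-drift check can compare the lived decision path against the authorized control model. The sketch below assumes hypothetical step names and escalation triggers; a production control would draw both from the enterprise's own governance record rather than hard-coded values.

```python
# Illustrative sketch: compare an observed decision path against the
# authorized control model. Step names and triggers are hypothetical.

AUTHORIZED_PATH = ["ai_recommendation", "independent_review", "approval"]
ESCALATION_TRIGGERS = {"high_severity", "policy_exception"}

def authority_drift_findings(observed_path: list[str], triggers: set[str]) -> list[str]:
    """Return findings where the observed path departs from the authorized model."""
    findings = []
    for step in AUTHORIZED_PATH:
        if step not in observed_path:
            findings.append(f"authorized step skipped: {step}")
    if triggers & ESCALATION_TRIGGERS and "escalation" not in observed_path:
        findings.append("escalation condition met but no escalation recorded")
    return findings

# Example: an approval recorded without an independent review step.
print(authority_drift_findings(
    observed_path=["ai_recommendation", "approval"],
    triggers={"high_severity"},
))
# -> ['authorized step skipped: independent_review',
#     'escalation condition met but no escalation recorded']
```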
The next control frontier is not only detecting whether the model changed. It is detecting whether authority moved without permission.
Source Notes
1. NIST AI Risk Management Framework. NIST describes AI systems as sociotechnical systems and notes that AI risks can emerge from technical aspects combined with how a system is used, who operates it, and the social context where it is deployed. Source: https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
2. EU AI Act, Article 72. Article 72 requires providers of high-risk AI systems to establish and document a post-market monitoring system that collects, documents, and analyzes relevant data on performance throughout the system lifetime. Source: https://artificialintelligenceact.eu/article/72/
3. OECD AI Principles. Accountability includes traceability in relation to datasets, processes, and decisions made during the AI system lifecycle. Source: https://www.oecd.org/en/topics/sub-issues/ai-principles.html
Prepared for: Kevin Moore, Founder, Jochanni Labs
Publication series: Decision Governance Strategic Intelligence Series