Operating AI

The Governance of AI Adoption: When Mandates Meet Mental Models

Thiago Victorino
8 min read

Google now factors AI use into software engineers’ performance reviews. Meta tracks lines of code written with AI assistance. Amazon Web Services gives managers a dashboard showing how much each engineer uses AI tools and considers usage in promotion decisions. Microsoft asks about AI use in performance discussions. Salesforce built an “AI fluency progress tracker” and routes PTO requests exclusively through an AI agent.

These are not experiments. They are enforcement mechanisms.

A Section survey from October 2025 found that 42% of tech workers say their manager expects AI use, up from 32% eight months earlier. Nearly half of tech and telecom companies report positive ROI on generative AI, according to a joint Wharton and GBK study. The business case exists. The executive mandate followed. The question now is whether mandates produce the outcomes executives expect.

The Carrots-and-Sticks Era

Conductor, a 300-person startup, built an AI competency score, rated 1 through 5, into its performance review system. CEO Seth Besmertnik describes the approach plainly: “We are using carrots and sticks.” Autodesk CEO Andrew Anagnost went further, saying AI holdouts “probably won’t survive long term.”

This language tells you something important about where the industry is. Adoption has moved from encouragement to coercion. The framing shifted from “this could help you” to “this will determine your career trajectory.”

Salesforce’s Joe Inzerillo claims “basically 100%” of its employees use AI. But usage is not proficiency. Compliance is not capability. And the distance between “I opened the tool” and “I changed how I work” is enormous.

Why Mandates Break Down

Kosar Moghanian published a useful framework in February 2026 that explains the cognitive side of this problem. She identifies four layers where human-AI collaboration can fail:

Technology comfort and task stakes. People are more willing to use AI for low-stakes work. The first adoption attempts should target what Moghanian calls “outsource-safe candidates,” tasks where failure is cheap. Mandating AI use across all tasks simultaneously ignores this gradient.

The human’s mental model of the task. Before someone can delegate to an AI, they need to understand their own process well enough to evaluate whether the AI’s output is adequate. Forcing AI on someone who lacks a clear mental model of their own workflow produces confusion, not efficiency.

The human’s mental model of the AI. Traditional software has visible affordances: buttons, menus, dropdowns. Conversational AI has almost none. The user has to guess what the system can do. This is why adoption is uneven even inside the same team with the same tools.

Cognitive capacity versus AI speed. AI generates faster than humans can evaluate. Jakob Nielsen’s principle applies here: “Don’t make me think faster.” When the tool outruns the user’s ability to assess its output, the user either accepts uncritically or abandons the tool. Neither outcome is what the mandate intended.

Research supports this framing. Santos et al. (2015) found that shared mental models between collaborators foster creativity and performance. Walsh et al. (2024) extended this to human-AI teams, finding that mutual mental models are necessary for effective collaboration. The mandate assumes the mental model will develop through exposure. The research says it needs to be built deliberately.

The Taste Problem

Eno Reyes, CEO of Factory (an AI coding agent company, which is relevant commercial context for his claims), frames a related problem: “Humans have great taste in bursts. AI can be designed to have decent taste, constantly.” His argument is that human judgment breaks down under chaos and fatigue, while AI maintains baseline consistency.

There is a kernel of truth here that matters for the mandate discussion. Organizations mandate AI use because they want consistency at scale. They want every engineer, every marketer, every analyst producing at a steady baseline. Human performance varies. AI does not.

But “decent taste, constantly” is not the same as “good judgment.” AI provides consistency without understanding. A human who uses AI well brings judgment to bear on AI’s consistent output. A human who uses AI under mandate, without understanding why or how, just adds a step to their workflow. The consistency gains are real only when paired with the cognitive alignment Moghanian describes.

What Governance Looks Like

Brian Elliott, a future-of-work adviser quoted in the WSJ piece, makes a critical observation: companies built these AI tools, so they need to demonstrate the ROI, not just demand adoption. He is right. But the deeper problem is structural.

Mandates answer the question “will people use AI?” They do not answer “will people use AI well?” The second question is a governance problem.

Here is what organizations enforcing AI adoption should be building instead of (or alongside) usage dashboards:

Skill-matched adoption paths. Not everyone starts at the same place. A senior engineer with deep domain knowledge needs a different onboarding than a junior analyst. The competency score matters less than the progression plan. Conductor’s 1-5 scale measures state. It does not measure trajectory.

Task-appropriate deployment. Some tasks benefit from AI immediately. Others require significant workflow redesign first. Forcing AI into a process that was not designed for it produces friction, not efficiency. Map the workflow before mandating the tool.

Evaluation capacity that matches generation capacity. This connects directly to the verification infrastructure argument. If your team generates 3x more output with AI, but your review process stays the same, you have created a quality bottleneck; the sketch after this list makes the arithmetic concrete. The mandate increased volume. Nobody planned for the verification load.

Feedback loops, not just metrics. Salesforce tracks AI fluency. Amazon tracks AI usage. Google factors AI into reviews. These are all lagging indicators. They tell you what happened. They do not tell you whether the AI use improved outcomes, introduced errors, or just added overhead. The governance system needs outcome metrics, not activity metrics.
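
To make the third point above concrete, here is a minimal back-of-the-envelope model in Python. Every number in it (20 items per engineer per week, a 3x generation multiplier, a review capacity of 25) is an illustrative assumption for the sketch, not a figure from any company named in this essay.

```python
# Toy model of the verification bottleneck. All numbers are
# illustrative assumptions, not measurements from any real team.

ITEMS_PER_WEEK_BEFORE = 20      # output per engineer per week, pre-AI
GENERATION_MULTIPLIER = 3       # the "3x more output" from the mandate
REVIEW_CAPACITY_PER_WEEK = 25   # what the unchanged review process absorbs

backlog = 0  # unreviewed items carried over from prior weeks
for week in range(1, 9):
    produced = ITEMS_PER_WEEK_BEFORE * GENERATION_MULTIPLIER
    reviewed = min(REVIEW_CAPACITY_PER_WEEK, backlog + produced)
    backlog += produced - reviewed
    print(f"week {week}: produced {produced}, reviewed {reviewed}, backlog {backlog}")
```

The exact figures do not matter. Whenever generation outruns review capacity, the unreviewed backlog grows linearly and without bound (here, by 35 items every week) until the review process scales or quality checks quietly get skipped.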

The Uncomfortable Middle

The enforcement era is here. It is not going away. Companies have invested billions in AI tooling and they expect returns. The executives issuing mandates are not wrong to want adoption.

They are wrong to assume adoption equals value.

The organizations that will extract real value from AI adoption mandates are the ones that treat the mandate as a starting point, not an end state. Usage is table stakes. The hard work is building the cognitive infrastructure (the mental models, the evaluation skills, the workflow redesign) that turns usage into capability.

Telling your engineers to use AI is easy. Building the system that makes AI use productive is governance. Most companies are doing the first and skipping the second.


This essay draws on the Wall Street Journal’s “Tech Firms Aren’t Just Encouraging Their Workers to Use AI. They’re Enforcing It” (February 24, 2026), with cognitive framework elements from Kosar Moghanian’s “Your AI Feature Works. So Why Don’t Users Care?” (February 22, 2026) and supporting research from Santos et al. (2015) and Walsh et al. (2024). Eno Reyes’s observations are noted with the disclosure that he is CEO of Factory, an AI coding agent company with direct commercial interest in expanded AI adoption.

Victorino Group helps organizations build AI adoption programs that produce capability, not just compliance. If your mandate is generating usage numbers but not outcomes, the problem is governance. Let’s talk about it.
