
Domain Expertise Still Wanted: The AI Trust Gap Is Not Closing

Thiago Victorino

Stack Overflow surveyed nearly 900 developers in February 2026, in partnership with OpenAI, about how they learn and work with AI. The headline finding is that AI adoption continues to accelerate: 64% now use AI to learn, up from 44% in 2025 and 37% in 2024. Daily AI use at work rose from 47% to 58% in one year.

The finding that matters is the one buried in the cross-referencing data: only 1% of developers use AI alone. The other 99% check its output against something else. 58% cross-reference AI with technical documentation. 54% use other online resources. 50% verify against Stack Overflow. Developers have not replaced their existing knowledge infrastructure. They have added a verification layer on top of it.

This is the AI tax applied to learning. Not a line item. Not a budget category. A behavioral adaptation that developers built for themselves because their organizations did not build it for them.

The Experience Gradient

The survey reveals a gradient between experience level and trust behavior that organizations should find uncomfortable.

  • Early-career developers: 68% use AI daily; 36% turn to AI first.
  • Mid-career developers: 59% use AI daily; 39% turn to AI first.
  • Experienced developers: 56% use AI daily; near-parity between starting points, with 29% going to AI first and 30% going to technical documentation first.

The pattern sharpens with seniority. As developers accumulate domain expertise, they become less willing to trust AI as a starting point. Not because they use it less — experienced developers still use AI daily at majority rates. Because they have learned, through repeated exposure, where AI output fails.

This is the same seniority asymmetry we documented in The AI Verification Debt. Senior developers spend 4.3 minutes reviewing each AI suggestion. Junior developers spend 1.2 minutes. The Stack Overflow data adds context: senior developers are not just reviewing more carefully. They are structurally less willing to start from AI output in the first place. They route around the trust problem by choosing a more reliable starting point.

The organizational implication is direct. Your most experienced engineers — the ones whose judgment you depend on for architectural decisions, security reviews, and production reliability — are the ones least willing to accept AI output at face value. They are telling you something about the output quality. The question is whether your organization is listening.

Trust Is Declining, Not Improving

Stack Overflow reports that trust in AI declined between their 2024 and 2025 Developer Surveys. The 2026 pulse survey shows the pattern holding: 38% of respondents cite “lack of trust in the results” as the primary barrier to using AI for learning. Among weekly users, the number rises to 47%.

This contradicts the prevailing narrative that trust improves with familiarity. Daily users do report somewhat higher trust (49% in 2025 data, versus 30% for weekly users). But the direction of the aggregate trend is downward, not upward. More people are using AI. Fewer people trust it. The gap between adoption and confidence is widening.

The trust gap matters because it is not irrational. As we explored in The Verification Tax, the Foxit study found that executives report saving 4.6 hours per week with AI while spending 4.3 hours verifying the output, a net gain of barely 18 minutes. Workers report a net loss of 14 minutes per week. The developers in Stack Overflow’s survey are exhibiting the same rational response: use the tool, but verify everything it produces.

Jessica Talisman, an information architect cited in the Stack Overflow analysis, identifies the structural issue precisely: LLMs “mimic the documentary chain of citations and footnotes without satisfying its duty in maintaining provenance.” The output looks authoritative. It lacks the verification chain that makes authority earned rather than performed.

The Consolidation That Is Not a Replacement

One data point in the Stack Overflow survey deserves more attention than it will receive.

In 2024, 49% of developers used eight or more learning resources. In 2025, that dropped to 9%. In 2026, 7%. Developers are consolidating their tool stacks. Fewer resources. More focus. But — and this is the critical detail — the consolidation is not an AI-driven replacement. It is happening among both AI users and non-AI users at similar rates.

What is happening is not “AI replacing documentation.” It is developers simplifying their workflows while maintaining verification loops. AI becomes one input. Documentation becomes the check. Stack Overflow becomes the tie-breaker. The ecosystem is not collapsing into AI. It is reorganizing around AI with human-curated knowledge as the validation layer.

This is what the “AI tax” looks like when developers design it themselves. Not a governance framework mandated by management. Not a verification checklist from a compliance team. A bottom-up behavioral pattern where every developer independently concluded that AI output requires cross-referencing before it can be trusted.

The problem with bottom-up solutions is that they are invisible to the organization. No dashboard tracks cross-referencing behavior. No sprint plan budgets time for “verifying what the AI told me.” No productivity metric accounts for the 99% of developers who do not trust AI output enough to use it alone.

The Agentic Skepticism Signal

The Stack Overflow survey includes a section on agentic AI that reads as a warning.

When asked whether they would let an AI agent represent them in a job search, 27.6% said “definitely not,” and another 23.8% said they would allow it only if specific conditions were met. The top conditions: human intervention available at every step (46%) and transparent data usage (44%).

This is the trust gap extending beyond output quality into operational autonomy. Developers who use AI daily, who have integrated it into their learning workflows, who acknowledge it is getting better — these same developers refuse to grant AI agents unsupervised authority over their professional interests.

The implication for organizations deploying agentic AI is clear. The people closest to AI tools are the most cautious about expanding AI autonomy. As we argued in The Architecture of Agent Trust, trustworthy agents require structural boundaries, not just better models. The Stack Overflow data shows that the people who understand AI best agree with that assessment.

What the Trust Gap Demands

The persistence of the trust gap across three years of Stack Overflow data, despite massive adoption growth, establishes something important: the gap is not a transition state. It is a structural feature of how AI produces output.

AI generates plausible text without provenance. Developers verify plausible text against authoritative sources. This loop does not get faster with better models. It gets faster with better verification infrastructure.

Three implications for organizations.

Budget for the tax. If 99% of your developers cross-reference AI output, that cross-referencing time is a real cost. Measure it. Include it in project plans. Stop celebrating AI adoption metrics that ignore the verification cost that follows.
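What “measure it” looks like in practice can be simple. Here is a minimal back-of-the-envelope sketch in Python that turns per-suggestion review times into a weekly line item. The review times are the figures cited above from The AI Verification Debt; the headcounts and suggestions-per-day are hypothetical placeholders to replace with your own telemetry.

```python
# Back-of-the-envelope estimate of a team's weekly AI verification tax.
# Review minutes per suggestion are the figures cited in this article;
# headcounts and suggestions/day are hypothetical; substitute real data.

REVIEW_MINUTES = {"senior": 4.3, "junior": 1.2}  # minutes per AI suggestion reviewed

team = {
    # role: (headcount, AI suggestions reviewed per developer per day)
    "senior": (6, 15),
    "junior": (14, 30),
}

WORKDAYS_PER_WEEK = 5

total_minutes = sum(
    headcount * suggestions_per_day * WORKDAYS_PER_WEEK * REVIEW_MINUTES[role]
    for role, (headcount, suggestions_per_day) in team.items()
)

hours_per_week = total_minutes / 60
print(f"Weekly verification tax: {hours_per_week:.0f} hours "
      f"(~{hours_per_week / 40:.1f} full-time engineers)")
```

On these placeholder numbers, a 20-person team pays roughly 74 hours per week, nearly two full-time engineers, in verification alone. The point is not the specific figure; it is that the figure exists and never appears on a dashboard.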

Invest in the check, not just the generation. The 58% of developers using technical documentation alongside AI are telling you what they need: fast access to authoritative, curated, provenance-backed knowledge. Every dollar spent making documentation faster, more searchable, and more current reduces the AI tax more effectively than a better model.

Respect the experience gradient. Your senior developers’ reluctance to trust AI output is not technophobia. It is calibrated judgment built on years of pattern recognition. Design your AI governance around their standards, not around the enthusiasm of developers who have not yet learned where the failures hide.

The Stack Overflow data is a census of developers adapting to AI on their own terms. They adopted the tools. They did not adopt the trust. Until organizations build the verification infrastructure that matches what developers already do informally, the AI trust gap will persist — not as a problem to solve, but as a tax to pay.


Sources

  • Stack Overflow. “Domain Expertise Still Wanted: The Latest Trends in AI.” stackoverflow.blog, March 16, 2026.
  • Stack Overflow. “2025 Developer Survey.” stackoverflow.co/survey/2025.
  • Stack Overflow. “2024 Developer Survey.” stackoverflow.co/survey/2024.
  • Foxit. “State of Document Intelligence.” March 2026.
  • Jessica Talisman. “Where Provenance Ends, Knowledge Decays.” Substack.

Victorino Group helps engineering organizations build verification infrastructure that turns informal developer workarounds into systematic governance. The trust gap is real. The tax is measurable. Let’s measure it.
