The AI Control Problem

The Identity Problem: Why Developers Resist AI Tools and What It Actually Means

Thiago Victorino
9 min read

In February 2026, Dave Gauer published an essay called “A Programmer’s Loss of Identity.” It is a grief letter. Not for a job — Gauer still works as a programmer. The grief is for a culture. The culture of people who cared about type systems, language design, and the aesthetics of abstraction. Who built identities around the craft of making computers do things well, not just making them do things.

Gauer applies Samuel Bagg’s social identity theory to explain why the AI boom feels different from previous technological shifts. His argument: programming was never just a skill. It was a social identity, with its own epistemic network — shared standards for what counts as knowledge, who counts as an authority, and how to evaluate competing claims. The AI wave did not just change the tools. It fractured the identity itself.

The tech industry’s response has been to treat developer resistance to AI tools as a training problem. Learn the tools or get left behind. This framing is wrong in a way that has material consequences for organizations.

What Social Identity Theory Actually Predicts

Bagg’s theory, which Gauer adapts for programmers, makes a specific claim: social identities do not just shape preferences. They shape epistemics — what people believe, who they trust, and how they evaluate evidence.

For decades, the “computer programmer” identity was coherent enough to function as an epistemic community. Programmers disagreed on languages, paradigms, and methodologies — sometimes violently — but they shared a common frame: the value of deterministic reasoning about code. Whether you preferred Haskell or Python, static types or dynamic, object-oriented or functional, the shared assumption was that understanding the machine mattered. That abstractions should be principled. That good code had qualities beyond “it works.”

AI tools did not extend this frame. They replaced it with a different one: output matters, process does not.

When a junior developer uses an LLM to generate a working function without understanding how it works, they are operating in a different epistemic frame from the senior developer who would have built the same function through deliberate reasoning about types, edge cases, and invariants. Both produce working code. But they disagree on what “knowing how to program” means.

This is not a generational divide about tool preferences. It is a fracture in the epistemic network that determines how an entire professional community evaluates competence, trust, and truth.

The Data Says the Fracture Is Real

The Stack Overflow 2025 Developer Survey — 49,000 developers — provides the quantitative evidence for what Gauer describes qualitatively.

In 2023, 70% of developers expressed positive sentiment toward AI coding tools. By 2025, that number dropped to 60%. Trust in AI-generated code accuracy fell from 40% to 29%. Two-thirds of developers — 66% — cited the “almost right” problem: AI output that looks correct but is not.

The adoption numbers tell the other half of the story. 84% of developers report using AI tools. So adoption is near-universal. Trust is declining. These two facts together describe something more complex than resistance. They describe a community that is using tools it does not trust.

METR’s 2025 randomized controlled trial adds a sharper edge. Experienced open-source developers — people with deep context on specific codebases — were 19% slower with AI tools. They reported believing they were approximately 20% faster. The perception gap is striking. The tool actively slowed down the developers with the deepest craft knowledge, and they did not notice.

This is precisely what identity theory predicts. When your professional identity is built around craft mastery, and the dominant narrative insists that the new tools make everyone more productive, you integrate the narrative even when your experience contradicts it. The social pressure to adopt is so strong that it overrides direct empirical evidence.

Deterministic vs. Stochastic: A Categorical Difference

There is a technical argument buried inside the identity crisis that organizations routinely miss.

Traditional programming abstractions — compilers, type systems, formal verification, static analysis — are deterministic. Given the same input, they produce the same output. Their behavior can be reasoned about, proved, audited. When a compiler transforms your source code into machine code, you can inspect the transformation. When a type system rejects your program, you can read the error message and understand why.

LLM-generated code is stochastic. Given the same prompt, the model may produce different output. The reasoning is opaque. The “transformation” from intent to code passes through a statistical process that no human — and no audit tool — can fully inspect.
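The contrast can be made concrete with a toy sketch. This is not a real LLM call; `stochastic_generate` is a hypothetical stand-in that samples from templates the way a model samples tokens, while `content_hash` behaves like a deterministic abstraction:

```python
import hashlib
import random

# Deterministic abstraction: same input always yields the same output,
# so the transformation can be audited, replayed, and reasoned about.
def content_hash(source: str) -> str:
    return hashlib.sha256(source.encode()).hexdigest()

# Hypothetical stand-in for a stochastic generator: the same "prompt"
# can yield different outputs across runs, because the result depends
# on a sampling process, not only on the input.
def stochastic_generate(prompt: str, rng: random.Random) -> str:
    templates = ["for x in {}: ...", "while {}: ...", "[f(x) for x in {}]"]
    return rng.choice(templates).format(prompt)

# The deterministic property holds unconditionally:
assert content_hash("def f(): pass") == content_hash("def f(): pass")
```

The deterministic function supports a guarantee you can test once and rely on forever. The stochastic one supports, at best, a statistical claim about the distribution of its outputs.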

This is not an aesthetic preference. It is a categorical difference in governability.

When craft-oriented developers object to AI-generated code, they are often making this point in the language of taste: “it’s ugly,” “it’s not how I would write it,” “it doesn’t feel right.” Organizations hear aesthetics and dismiss the concern. But the underlying objection is epistemological. Deterministic abstractions are auditable. Stochastic output is not. The developer who insists on understanding the code is not being precious. They are maintaining the only reliable mechanism for quality assurance that software engineering has ever had: a human who understands what the code does and why.
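The "almost right" problem is easiest to see in a concrete, hypothetical case. The first function below is typical of plausible generated code: it passes the quick spot checks a hurried reviewer would try, and fails on a rule that only domain knowledge surfaces:

```python
# "Almost right": passes 2024, 2020, 1999 -- the cases a quick
# glance would test -- but is wrong for century years like 1900.
def is_leap_year_almost_right(year: int) -> bool:
    return year % 4 == 0

# The full Gregorian rule a knowledgeable reviewer would insist on:
# divisible by 4, except century years, except those divisible by 400.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

Both versions "work" on the obvious inputs. Only a human who understands the domain knows which inputs are not obvious.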

Seventy-two percent of developers in the Stack Overflow survey said “vibe coding” — writing code by feel, using AI to generate implementations without deep understanding — is not part of their professional work. This is not conservatism. It is a professional community drawing a line around what counts as engineering.

The Withdrawal Problem

Gauer coins the term “heirloom programmer” for developers who value craft itself — the people who care about programming languages the way an oenologist cares about terroir. The term maps to the broader “artisanal coding” movement visible in communities like the Handmade Network and the Zig programming language community.

But Gauer’s more important observation is about what happens when these developers lose their professional identity. They do not fight. They withdraw.

This withdrawal is the data point that organizations consistently miss. When your most experienced engineers go quiet — stop contributing to architecture discussions, stop mentoring juniors, stop pushing back on technical decisions — it does not look like resistance. It looks like agreement. The silence is misread as adoption.

Stanford’s labor data provides the structural context. Employment for junior developers aged 22-25 has fallen approximately 20% since 2022. The entry-level pipeline is contracting. At the same time, the experienced developers who could mentor the remaining juniors are disengaging from the craft culture that made mentorship meaningful.

The compounding effect is severe. Junior developers enter the profession with AI tools as their primary interface to code. Senior developers who understand the systems those tools operate on withdraw from the communities that would transfer that knowledge. The institutional knowledge that makes software engineering possible — not the syntax, but the judgment — erodes from both ends.

Fear-Based Adoption Is Governance Failure

Deloitte’s Tech Trends 2026 report contains a statistic that should alarm every CTO: 93% of organizational AI investment goes to technology. Seven percent goes to people.

This ratio reveals the assumption behind most enterprise AI adoption strategies. The technology is the value. The people are the cost. Train them to use the tools, or replace them with people who already know how.

The fear-based adoption playbook — “learn AI or become obsolete” — produces a specific organizational outcome. It creates compliance without commitment. Developers use the tools because they are told to. They do not trust the output because the output is not trustworthy — they have the data to prove it. They stop raising quality concerns because the organizational culture has defined those concerns as resistance. They continue to produce work, but they stop investing their judgment in it.

This is not hypothetical. Ninety-two percent of engineering leaders already say that AI increases the blast radius of bad code. The tools are generating more code, faster, with less human review. The humans who would have caught the errors are either gone (junior attrition), disengaged (senior withdrawal), or silent (fear-based compliance).

The result is an organization that is shipping faster and understanding less. The velocity metrics improve. The defect rates climb. And the most experienced people — the ones who could explain why — have already left the conversation.

What This Means for Governance

The standard framing of developer AI resistance treats it as a change management problem. Provide training. Show productivity gains. Address concerns through communication. Wait for the holdouts to come around.

This framing fails because it misdiagnoses the problem. The resistance is not about tools. It is about identity. And identity crises do not respond to training programs.

What they respond to is institutional recognition that the identity has value.

Governance is the mechanism for that recognition. Not governance as compliance theater — annual reviews, checkbox audits, policy documents that no one reads. Governance as infrastructure. The systems that determine what code gets reviewed, how deeply, by whom, and against what standards.

When an organization builds governance infrastructure that requires human review of AI-generated code, it is making an institutional statement: the judgment of experienced engineers matters. When it creates quality gates that catch the “almost right” problem before it reaches production, it is validating the craft-oriented concern that stochastic output requires deterministic verification. When it measures engineering effectiveness by outcomes rather than velocity, it creates space for the kind of deliberate work that craft-oriented developers do best.

This is not about being nice to grumpy senior engineers. It is about preserving the institutional capability that makes software work.

The Craft-Governance Synthesis

Kent Beck — the inventor of Extreme Programming and test-driven development — published an essay called “Pinhole View” in February 2026. His argument: AI changes the economics of programming but does not eliminate the need for craft. The analogy is photography. When cameras became cheap, they did not eliminate the need for good photographers. They eliminated the need for everyone to understand photographic chemistry.

Beck’s framing is useful because it suggests a resolution to the identity crisis that is neither nostalgia nor capitulation. Craft and AI tools are not in opposition. Craft is what makes AI tools governable.

A developer who understands type systems can evaluate whether AI-generated code violates type invariants. A developer who understands distributed systems can identify when an AI-generated microservice architecture will fail under load. A developer who understands security can catch the vulnerabilities that Veracode’s 2025 study found in 40-48% of AI-generated code across 100+ LLMs.
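A hypothetical sketch of the type-system case: the first function below quietly violates its own declared contract in a way a static checker, or a reviewer who reads type signatures, would flag. The function names and scenario are invented for illustration:

```python
from statistics import mean

# Declared contract: always returns a float. This plausible generated
# version quietly returns None on empty input, violating the annotation
# and pushing a latent TypeError onto every caller that does arithmetic
# with the result.
def average_latency_generated(samples: list[float]) -> float:
    if not samples:
        return None  # type violation: None is not a float
    return mean(samples)

# The reviewed version keeps the contract honest: empty input is an
# explicit, loud failure at the boundary, not a silent None downstream.
def average_latency_reviewed(samples: list[float]) -> float:
    if not samples:
        raise ValueError("no samples to average")
    return mean(samples)
```

The generated version runs fine in the happy path; the invariant violation only surfaces later, somewhere else. Craft knowledge is what moves that discovery from production back to review.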

The identity crisis resolves when organizations stop defining developer value as productivity — lines of code, tickets closed, velocity points — and start defining it as judgment. Judgment about what the code should do. Judgment about whether it does what it claims. Judgment about what happens when it fails.

This requires a deliberate restructuring of how developer identity is defined within teams. Not “developer who uses AI tools.” Not “developer who resists AI tools.” But “developer whose craft knowledge makes AI output trustworthy.” Craft plus governance, not craft versus automation.

What a Thoughtful Organization Does Now

Stop treating resistance as a training gap. If your most experienced developers are skeptical of AI tools, ask what they see that the productivity metrics do not capture. Their skepticism is signal, not noise.

Audit your verification infrastructure. 84% of developers use AI tools. 29% trust the output. The gap between those numbers is your verification problem. Every piece of AI-generated code that ships without human review is a bet that the 71% distrust rate is wrong.

Invest in people, not just technology. The 93/7 split between technology and people investment is a governance failure. Governance infrastructure is built by people who understand both the technology and the organizational context. You cannot automate that understanding.

Redefine what you measure. If your engineering metrics reward velocity over judgment, you are systematically selecting against the developers who would catch AI-generated errors. Measure what matters: defect rates, incident frequency, code review depth, architectural coherence.

Build identity, not just capability. The developers who stay, who mentor, who invest their judgment in your codebase — they do so because they identify with the work. When you reduce their role to “AI tool operator,” you destroy the identity that makes their contribution valuable.

The Real Stakes

The AI tools are not going away. They should not go away. They are useful, and they will get more useful.

But usefulness without governance is liability. And governance without the people who understand what they are governing is theater.

Developer resistance to AI tools is not a problem to be solved. It is a diagnostic signal. It tells you that the people who understand your systems most deeply do not trust the output of the tools you are asking them to rely on. That information is worth more than any productivity benchmark.

The organizations that treat developer identity as an asset — that build governance infrastructure which gives craft-oriented developers a meaningful role in the AI-augmented workflow — will retain their best people and ship reliable software.

The organizations that treat it as an obstacle will ship faster, understand less, and eventually discover what the developers were trying to tell them.


Sources

  • Dave Gauer. “A Programmer’s Loss of Identity.” ratfactor.com, February 13, 2026.
  • Samuel Bagg. Social identity theory, as applied by Gauer.
  • Stack Overflow. “2025 Developer Survey.” 49,000+ respondents.
  • METR. “Randomized Controlled Trial: AI Tools and Developer Productivity.” 2025.
  • Stanford Digital Economy Lab. Developer employment data, 2022-2025.
  • Kent Beck. “Pinhole View.” Tidy First (Substack), February 2026.
  • Veracode. “State of AI-Generated Code Security.” 2025. 100+ LLMs tested.
  • Deloitte. “Tech Trends 2026.” 93/7 technology-to-people investment ratio.

At Victorino Group, we help organizations build governance infrastructure that treats developer judgment as an asset, not an obstacle. If your AI adoption strategy is creating compliance without commitment, let’s talk.
