The Mexican Standoff: What Happens When Everyone Can Do Everyone's Job
Marc Andreessen described the situation in terms borrowed from crime cinema: “Every engineer now thinks they can be a PM and a designer. Every PM thinks they can code and design. Every designer thinks they can do the other two.”
He meant it as an observation about the state of AI-enabled teams. It is also, accidentally, a precise description of a governance crisis.
In a Mexican standoff, everyone has a weapon pointed at everyone else. Nobody shoots because the result would be mutual destruction. The tension holds. But nothing productive happens, either.
That is where a growing number of product organizations find themselves right now. Not because AI took anyone's job, but because AI dissolved the boundaries between everyone's jobs.
The Numbers That Contradict the Narrative
Anthropic published research on March 5, 2026 that complicates the dominant story about AI and employment. Researchers Maxim Massenkoff and Peter McCrory introduced a metric called “observed exposure,” which combines theoretical LLM capability with actual real-world usage data from Anthropic’s own systems.
The headline number: Computer Programmers sit at 75% coverage. Highest of any occupation measured. Data Entry Keyers follow at 67%, then Customer Service Representatives.
The surprising finding: “No systematic increase in unemployment for highly exposed workers since late 2022.”
Read that again. The occupation with the highest AI exposure in the entire economy shows no measurable unemployment increase over three years of rapid AI deployment. This is not a short window of data. Three years is enough time for at least early displacement effects to register. They did not.
There is one exception worth noting. Young workers aged 22 to 25 in exposed occupations showed a 14% drop in job-finding rates. The researchers flagged it as “just barely statistically significant.” Entry-level hiring is tightening, even as overall employment in these roles holds steady.
Thirty percent of workers have zero AI exposure at all. The ones who do are disproportionately female (16 percentage points more likely), higher-earning (47% more on average), and better-educated (17.4% hold graduate degrees versus 4.5% among unexposed workers). AI exposure is concentrated among knowledge workers, not distributed across the economy.
The Real Story Is Not About Jobs
If AI covers 75% of what programmers do and zero programmers are losing their jobs, what exactly is happening?
The answer is visible in any product team that has adopted AI coding tools seriously. The PM opens Claude Code and ships a PR. The designer uses Cursor to prototype a component in production code. The engineer uses an AI agent to draft a product brief. Everyone is reaching into adjacent territory because the cost of doing so dropped to near zero.
Justin Jackson documented this pattern in his March 2026 essay. A software company president told him: “Where you really see the impact on jobs with us is in the people we’re no longer hiring: specialists. In this new era, generalists win.”
Kent Beck framed the personal calculus bluntly: “The value of 90% of my skills just dropped to $0. The leverage of my remaining 10% went up a thousand.”
That remaining 10% is judgment. Ben Werdmuller put it this way: “AI coding shifts the center of gravity from implementation to judgment.” Production skills depreciate. Evaluation skills appreciate. The person who knows what to build becomes more valuable than the person who knows how to build it.
As we explored in The Pinhole View of AI Value, fixating on headcount reduction misses three of the four value levers AI provides. The standoff dynamic is a concrete example. Nobody is getting eliminated. But everyone’s contribution is being redefined.
Why This Breaks Product Teams
Traditional product teams are organized around production specialties. The PM produces specs. The designer produces mockups. The engineer produces code. The QA engineer produces test results. Each role is defined by its output artifact.
When AI can produce passable versions of all these artifacts, the org chart stops mapping to reality. The PM who ships a working prototype has bypassed the engineer’s production function. The engineer who generates a product brief has bypassed the PM’s production function. Neither has bypassed the other’s judgment function, but nobody notices that distinction in the moment.
We covered this structural shift in Your Product Team Was Designed for a World That No Longer Exists. The argument there was about organizational redesign. The standoff adds a new dimension: even within a team that has not been reorganized, individuals are unilaterally redefining their own scope.
This creates three specific problems.
First, duplication. Two people solve the same problem independently because both can. The PM builds a prototype while the engineer builds the same feature from the spec. Neither knows the other started. The waste is invisible until review time.
Second, quality variance. A PM writing code produces different quality than an engineer writing code, even when both use AI tools. The 75% coverage number from Anthropic’s research is an average. It means AI handles roughly three-quarters of routine programming tasks. The remaining quarter requires genuine expertise. A PM using AI can produce the 75% faster than an engineer working manually. But they hit a wall on the 25% that requires architectural understanding, edge-case reasoning, and systems thinking. That wall is where bugs live.
Third, accountability collapse. When everyone can contribute everywhere, nobody owns anything. A feature breaks in production. The PM wrote the initial prototype. The engineer refactored it. The designer adjusted the CSS. The QA agent flagged a warning that nobody acted on. Who is responsible? In a role-defined org, the answer is clear. In a standoff, every gun is pointed at every other gun.
What Anthropic’s Data Actually Reveals
The observed exposure methodology is worth examining closely. Previous measures of AI risk to jobs relied on theoretical assessments: could an LLM theoretically perform this task? Anthropic’s contribution is combining that theoretical ceiling with actual usage data.
The result is sobering for AI hype and AI panic alike.
Computer and Math occupations have 94% theoretical feasibility: in principle, LLMs could handle 94% of the tasks in these jobs. Actual coverage is 33%, a utilization rate of roughly one-third of the theoretical maximum.
Programmers are the outlier at 75%. Most other occupations sit far below their theoretical ceiling: actual coverage remains a fraction of what is feasible, and AI is far from exhausting its theoretical capability.
This finding reframes the standoff. Everyone feels like they can do everyone else’s job because the AI tools are impressive in demos and for the 75% of tasks they cover well. But production work requires the other 25%. The standoff collapses the moment a non-trivial problem appears and the PM’s prototype falls apart, or the engineer’s product brief misses a market constraint, or the designer’s code introduces a security vulnerability.
The standoff is a perception problem masquerading as a capability problem.
The Young Worker Signal
The one demographic showing measurable impact deserves separate attention.
Workers aged 22 to 25 in AI-exposed occupations face a 14% drop in job-finding rates. This is not mass unemployment. It is a narrowing of entry points. Companies are hiring fewer juniors because AI covers the work that junior roles traditionally performed: boilerplate code, basic customer service scripts, data entry tasks.
As we documented in The AI Workforce Reckoning, the companies making the boldest “AI transformation” claims are often the ones dressing up cost corrections as strategy. The young-worker signal fits this pattern. Organizations are not replacing juniors with AI. They are using AI as justification for not backfilling positions they would have struggled to justify anyway in a tighter capital environment.
The long-term risk is real, though. If entry-level roles compress, the pipeline that produces senior talent narrows. You cannot have experienced engineers without a decade of junior engineers learning the craft. The standoff accelerates this: if everyone can do basic production work with AI, what is the entry point for someone who has not yet developed the judgment that Kent Beck’s “remaining 10%” requires?
Nobody is governing this pipeline. That is the actual threat.
Governance Is the Missing Piece
The standoff resolves in one of two ways.
In the ungovern scenario, teams fragment into individual contributors who each use AI to do a little bit of everything. Quality is inconsistent. Duplication is rampant. The 25% of work that requires genuine expertise gets done poorly or not at all. Organizations spend more time coordinating overlapping efforts than they save from AI productivity.
In the governed scenario, role boundaries evolve from “what you produce” to “what you are accountable for.” The PM still owns product judgment. The engineer still owns technical judgment. The designer still owns user experience judgment. But the production work underneath those judgments is shared, automated, or both. The org chart reflects decision authority, not production specialty.
37signals has operated with two-person teams (one designer, one programmer) for years. Jackson highlights this as a model for the AI era. It works because it is small enough to avoid coordination overhead and clear enough in role division to avoid the standoff. One person owns how it looks and feels. The other owns how it works. Both use whatever tools make them faster.
Scaling this requires explicit governance. Who approves a PR from a PM? Who reviews a product brief from an engineer? What quality bar applies when AI generates 75% of the artifact and a human completes the remaining 25%? These questions do not answer themselves.
The Standoff Is Temporary
Mexican standoffs in films end in one of two ways: someone shoots, or someone puts down their gun. In product organizations, the equivalent choices are destructive competition or deliberate coordination.
The Anthropic data suggests the urgency. AI coverage will keep climbing. The 33% utilization rate for Computer and Math occupations will not stay at 33%. As models improve and tools mature, the theoretical ceiling and the observed floor will converge. More people will be able to do more of everyone else’s work. The standoff intensifies.
Organizations that define role governance now, while the standoff is still manageable, will adapt smoothly as coverage increases. Organizations that wait will find themselves in the destructive version: turf wars, duplication, quality crises, and the kind of organizational chaos that no AI tool can fix.
The Mexican standoff was never about AI capability. It is about organizational clarity. Every team has the weapons. The question is whether anyone is directing where they point.
This analysis synthesizes Labor Market Impacts of AI by Maxim Massenkoff and Peter McCrory (Anthropic, March 2026) and Will Claude Code Ruin Our Team? by Justin Jackson (March 2026).
Victorino Group helps organizations redesign team governance for the age of AI-enabled generalists. Let’s talk.