The AI Workforce Reckoning: When Irreversible Decisions Meet Unproven Capabilities
In February 2026, Block announced it was eliminating roughly 4,000 positions, approximately 40% of its workforce. CEO Jack Dorsey framed the move explicitly around AI: “Intelligence tools paired with smaller, flatter teams enable a new way of working.” The company set a target of $2 million in gross profit per employee, four times its baseline.
The stock surged roughly 23%, from $54.53 to $67.17.
This is the story that made the rounds. A bold CEO bets on AI, slashes headcount, and the market rewards him for it. Efficient. Visionary. The future of work.
There is another version of this story, and it starts with a number nobody wanted to discuss.
The Overhiring Correction Nobody Labeled
Block tripled its headcount between 2019 and 2024. The ZIRP era (zero interest rate policy) made hiring cheap. Growth was the only metric anyone cared about. When rates rose and growth slowed, Block was carrying three times the workforce it needed for its actual revenue trajectory.
Oxford Economics published a study in January 2026 examining 55,000 jobs attributed to “AI layoffs” across the United States. Their finding: these cuts represented only 4.5% of total US job losses during the same period. Economist Ben May was direct about what the data suggested: “We suspect some firms are trying to dress up layoffs as a good news story.”
The distinction matters. A company that triples its headcount during cheap money, then cuts 40% when money gets expensive, is not demonstrating AI transformation. It is demonstrating a correction. The AI framing makes the correction sound strategic rather than reactive, and that framing has real consequences for how other organizations interpret the signal.
WPP told a similar story. The advertising giant announced £500 million in savings through “Elevate28,” a restructuring program positioned around AI and organizational efficiency. The company subsequently fell out of the FTSE 100. CEO Mark Read, in a moment of candor that contradicted his own AI narrative, attributed the company’s struggles to “excessive organisational complexity.” Not AI disruption. Not market disruption. Organizational debt.
The Perception Chasm
If the executive suite and the workforce experienced AI the same way, the debate about AI-driven layoffs would be straightforward. They do not.
A Section survey of 5,000 respondents (reported via the Wall Street Journal) found a striking disconnect. More than 40% of C-suite executives claimed AI saved them 8 or more hours per week. Among non-managers, 67% reported saving fewer than 2 hours.
This is not a minor discrepancy. It is a 4x difference in perceived value between the people making workforce decisions and the people affected by them. When a CEO announces that “AI tools enable a new way of working,” they may genuinely believe it. Their experience of AI may, in fact, be transformative. But that experience does not transfer to the organization. The receptionist, the account manager, the operations coordinator: they live in a different reality.
The pattern we described in The AI Intensity Trap is now playing out at scale. The perception mismatch between felt productivity and measured productivity is not a curiosity. It is the mechanism through which organizations make irreversible workforce decisions based on unreliable data.
The Klarna Warning
Klarna provides the clearest cautionary tale. The fintech company aggressively cut approximately 700 positions, framing the move as an AI-first transformation. CEO Sebastian Siemiatkowski initially celebrated the results. Then quality degraded. Customer service suffered measurably. Siemiatkowski eventually admitted publicly: “we went too far.”
Klarna is not alone. An Orgvue/Forrester survey found that 55% of companies that replaced human roles with AI regretted the decision. More than one in three HR leaders reported losing critical skills. Nearly a third found that rehiring cost more than the savings generated by the cuts.
These are not edge cases. They are the median outcome. The majority of organizations that treated AI as a direct substitute for human labor discovered that the substitution destroyed something they could not easily rebuild.
Kent Beck’s framework, which we examined in The Pinhole View of AI Value, predicted exactly this. Companies that evaluate AI through the single lens of headcount reduction will miss the value and magnify the damage. The pinhole view turns a complex organizational question into arithmetic, and the arithmetic is wrong.
The Market Incentive Problem
Wall Street does not reward AI transformation. It rewards financial guidance.
Block announced 4,000 cuts alongside an earnings report that beat analyst expectations. Stock: up roughly 23%. C3 AI, a company that actually sells AI products, announced layoffs in the same period. But C3 AI missed its earnings expectations. Stock: down 17%.
WPP wrapped its cuts in AI language. Revenue was declining. Stock: down 6%.
The pattern is simple. The market rewards the word “AI” only when the underlying financials support the narrative. When they don’t, the AI framing provides no protection. But the headlines don’t parse it that way. “Block surges after AI restructuring” implies causation that does not exist. The surge came from the EPS beat. The AI language came from the press release.
This creates a dangerous incentive loop. Executives see that AI-framed layoffs generate positive coverage. They frame their own layoffs accordingly. The coverage reinforces the narrative. Other executives adopt the same framing. Within a cycle, “we’re using AI” becomes shorthand for “we’re cutting people,” and the market treats them as synonymous.
The losers in this loop are the organizations that take the narrative literally. They hear “Block cut 40% and the market loved it” and conclude that cutting headcount is the AI strategy. They skip the part where Block was carrying three years of zero-interest-rate bloat.
The Governance Vacuum
Here is the part that should alarm boards. A Harvard Law School Forum paper published in December 2025 found that no legal obligation currently exists for corporate boards to verify that AI can actually perform the work of eliminated positions before approving workforce reductions.
Read that again. A board can approve the elimination of thousands of roles based on an assertion that AI will absorb the work. No verification required. No capability assessment mandated. No fallback plan necessary.
This is not a hypothetical concern. Block’s stated target is $2M gross profit per employee. Achieving that target requires AI tools to absorb the work of 4,000 people. If those tools fall short, the company has two choices: overwork the remaining staff or sacrifice the output. Both options are expensive. Neither is easily reversible.
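The scale of that bet is worth making explicit. Using only the figures quoted above (a $0.5M baseline inferred from the stated “four times its baseline,” and a 10,000-person pre-cut workforce inferred from 4,000 cuts being roughly 40%), a back-of-envelope sketch shows the target demands far more than headcount reduction:

```python
# Back-of-envelope check of Block's stated target, using only the
# figures quoted in this article. The $0.5M baseline gross profit
# per employee is inferred from the "four times its baseline" claim;
# the 10,000 pre-cut headcount from 4,000 cuts being ~40%.
pre_cut_headcount = 10_000
cuts = 4_000
post_cut_headcount = pre_cut_headcount - cuts          # 6,000 remain

baseline_gp_per_employee = 2.0 / 4                     # $M, inferred baseline
baseline_gross_profit = pre_cut_headcount * baseline_gp_per_employee   # $5,000M

target_gp_per_employee = 2.0                           # $M, the stated target
target_gross_profit = post_cut_headcount * target_gp_per_employee      # $12,000M

# Even with 40% fewer people, hitting the per-employee target
# implies gross profit itself must grow 2.4x.
implied_growth = target_gross_profit / baseline_gross_profit
print(f"Implied gross profit growth: {implied_growth:.1f}x")
```

In other words, under these assumptions the cuts alone cannot deliver the target: the remaining 6,000 employees must also generate 2.4 times the prior gross profit, which is exactly the work the AI tools are assumed to absorb.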
The Klarna reversal is the proof case. Cutting is fast. Rebuilding is slow and costs more than the original savings. When 55% of companies regret AI replacement decisions, the governance question is not whether boards should verify AI capabilities before approving cuts. It is why they are not already required to.
Knowledge Loss Is a One-Way Door
Amazon’s Jeff Bezos popularized the distinction between one-way and two-way doors. Two-way door decisions are reversible: you can walk back through if the result is bad. One-way door decisions are not. You make them, you live with them.
Workforce reductions are one-way doors dressed up as two-way doors. The spreadsheet says you can always rehire. The reality is different.
One in three HR leaders reported losing critical institutional knowledge after AI-driven cuts. Nearly a third found that rehiring cost more than the original savings. These numbers quantify what any experienced operator knows intuitively: the person who understands why the system was built that way, who remembers the client’s preferences, who knows which vendor contact actually gets things done, that person does not come back when you post the job listing again.
The knowledge is not in the documentation. It never is. It lives in the judgment of people who accumulated it over years. When you eliminate those people based on the assumption that an AI tool can absorb their contribution, you are betting the tool can replicate not just their output, but their context. No current AI tool can do this. Not at Block’s scale. Not at anyone’s scale.
What Governance Should Look Like
The current approach treats AI workforce decisions as financial decisions. Cut headcount, model the savings, announce the efficiency. The missing layer is verification.
Capability verification before cuts. Before approving the elimination of any role, require documented evidence that AI tools can perform the critical functions of that role at acceptable quality levels. Not a demo. Not a pilot. Production-grade evidence from sustained operation.
Reversibility assessment. For each proposed cut, answer the question: what happens if the AI falls short? If the answer involves rehiring at a premium or degraded service quality, the decision is a one-way door and should be governed accordingly. Board review. Risk analysis. Contingency planning.
Perception audit. Before using C-suite productivity data to justify workforce changes, validate it against ground-level reality. The Section survey’s 4x perception difference between executives and workers is not an anomaly. It is a structural bias that corrupts decision-making.
12-month capability window. Instead of cutting first and hoping AI catches up, establish a verification period. Deploy the tools. Measure actual absorption of work. Cut only the roles where AI demonstrably handles the load. This is slower. It is also cheaper than the Klarna path of cutting, regretting, and rebuilding.
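To make the verification gate concrete, here is a minimal sketch of what such a check could look like. Every role name, threshold, and metric below is an illustrative assumption, not anything Block, the surveys, or the Harvard Law Forum prescribe:

```python
from dataclasses import dataclass

@dataclass
class RoleVerification:
    """Evidence gathered during the capability window for one role.
    All fields are hypothetical metrics, chosen for illustration."""
    role: str
    tasks_absorbed_pct: float   # share of the role's critical tasks AI handled in production
    months_in_production: int   # sustained operation, not a demo or a pilot
    quality_ok: bool            # output met a pre-agreed quality bar

def approve_cut(v: RoleVerification,
                min_absorption: float = 0.9,
                min_months: int = 12) -> bool:
    """Approve elimination only with production-grade evidence:
    a 12-month window, ~90% task absorption, acceptable quality."""
    return (v.months_in_production >= min_months
            and v.tasks_absorbed_pct >= min_absorption
            and v.quality_ok)

candidates = [
    RoleVerification("invoice triage", 0.95, 14, True),
    RoleVerification("customer support tier 1", 0.70, 14, True),   # Klarna-style quality gap
    RoleVerification("vendor management", 0.92, 3, True),          # still a pilot, not production
]
approved = [v.role for v in candidates if approve_cut(v)]
```

Under these assumed thresholds, only one of the three roles clears the gate; the other two are exactly the cases where cutting first and verifying later produces the regret the surveys document.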
Board accountability. The Harvard Law Forum identified the absence of legal obligation. Organizations do not need to wait for regulators. Boards can voluntarily adopt AI workforce governance standards. The ones that do will avoid the one-way door problem. The ones that don’t are betting that their CEO’s conviction about AI capabilities is more reliable than the 55% regret rate suggests.
The Reckoning
Block may prove right. AI may absorb the work of 4,000 people and the company may hit $2M gross profit per employee. If it does, Dorsey will deserve credit for a bold call made ahead of the evidence.
But the evidence so far points in the other direction. The majority of organizations that cut aggressively for AI regret it. The perception data that executives rely on is systematically biased. The market incentive rewards the announcement, not the outcome. And no governance mechanism exists to verify the bet before it is placed.
The reckoning is not about whether AI can transform work. It can. The reckoning is about the distance between what AI can do today and the irreversible decisions organizations are making based on what they believe it will do tomorrow.
That distance is where governance belongs. And right now, almost nobody is there.
This analysis synthesizes Oxford Economics AI Layoffs Study (January 2026), Block Q4 2025 Earnings and Restructuring (February 2026), WPP Elevate28 Filing (February 2026), Klarna CEO Reversal (2025), Orgvue/Forrester AI Replacement Regret Survey (2025), Section AI Perception Survey via WSJ (2025), Harvard Law School Forum on AI Board Oversight (December 2025), and C3 AI Earnings (February 2026).
Victorino Group helps organizations build governance frameworks before making irreversible workforce decisions. Let’s talk.