The Dead Companies Walking: What Steve Yegge Sees Coming
Steve Yegge has been writing code for over forty years and has shipped production systems at Amazon, Google, and Grab. When he makes a call about where the industry is heading, he has the scar tissue to back it up.
In a recent interview with Gergely Orosz for The Pragmatic Engineer, Yegge laid out a picture of software engineering’s near future that is uncomfortable, specific, and — based on everything we see working with enterprise clients — largely correct. But not for the reasons most people will focus on.
The headlines will grab the “50% staff reduction” prediction and the “big companies are dead” quote. Those are attention-worthy. But the deeper insight is structural: the bottleneck in AI-augmented engineering is not the intelligence of the tools. It is the absorptive capacity of the organization.
This distinction changes everything about how you should respond.
The Acceleration Is Real
Yegge describes a compression in the half-life between major model releases — from four months to two months. Each new generation absorbs the failures of the previous one into its training data, creating a compounding improvement cycle. Erik Meijer, whom Yegge cites, puts it bluntly: “the days of coding by hand are over.”
We have seen this firsthand. In our January essay on the phase shift in software engineering, we documented Andrej Karpathy’s flip from 80% manual code to 80% AI agents in a matter of weeks. Boris Cherny at Anthropic shipped 259 pull requests in 30 days — every line written by Claude Code. These are not marketing claims. They are field reports from engineers working on real codebases.
Yegge frames this as an S-curve: “We’re heading into the steep part this year.” What makes his framing useful is that he is not predicting a plateau. He is predicting acceleration. The models are getting better faster than organizations can adapt to the current generation.
This creates what we have previously called the capability overhang — the growing distance between what AI can do and what organizations actually use. Yegge’s contribution is showing what happens when that overhang becomes unsustainable.
The Eight Levels of AI Adoption
Yegge proposes an eight-level framework for AI adoption in software engineering:
- Levels 1-3: From no AI usage to IDE-integrated coding agents with varying degrees of trust. The engineer is still the primary producer. The agent assists.
- Levels 4-5: The inversion point. Agents do the primary work. Engineers review and direct. Code review shifts from human-written code to agent-generated code.
- Levels 6-7: Multiple agents working in coordination. The engineering challenge shifts from writing code to orchestrating agent interactions.
- Level 8: Custom orchestrator systems — what Yegge calls “Gas Town” — that manage networks of agents with structured coordination.
This framework matters because it gives organizations a way to locate themselves. Most enterprise teams we work with are somewhere between Levels 2 and 4. They are using Copilot or Cursor, getting real value from code completion, but have not crossed the inversion point where the agent becomes the primary producer.
The uncomfortable truth in Yegge’s framework is that engineers at lower levels risk obsolescence — not because they are bad engineers, but because they are competing against engineers who have crossed the inversion point and are operating at 10x or 100x their output.
But here is where Yegge’s framework needs a critical amendment: the levels are not just about individual engineers. They are about organizational capacity. An engineer at Level 7 inside an organization operating at Level 2 is bottlenecked by everything downstream — review processes, deployment pipelines, compliance requirements, and the absorptive capacity of the teams consuming their output.
The Absorption Problem
This is the most important idea in Yegge’s interview, and the one that will get the least attention.
He describes large companies as structurally unable to absorb the output of hyper-productive engineers. The downstream systems — QA, compliance, deployment, product management — are calibrated for human-speed production. When a single engineer suddenly produces at 10x or 100x, the output has nowhere to go.
Yegge’s phrase is memorable: “Big, dead companies. We just don’t know they’re dead yet.”
He compares this to the early days of cloud computing, when incumbent companies dismissed AWS as a toy while small teams used it to build what became the next generation of dominant companies. The parallel is apt. The advantage is not just in having faster engineers. It is in having organizational structures that can metabolize faster engineering.
Small teams have this advantage by default. A three-person team where everyone operates at Level 7 has no absorption bottleneck because the same people producing the code are also reviewing, deploying, and operating it. A 5,000-person engineering organization has layers of process designed for a world where code production was the scarce resource. In an AI-augmented world, code production is abundant. The scarce resource is judgment, review, and coordination.
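The bottleneck logic above can be sketched as a simple pipeline model: an organization ships no faster than its slowest downstream stage, no matter how abundant code production becomes. All stage names and rates here are hypothetical, chosen only to illustrate the argument:

```python
# Illustrative model: effective shipping rate is capped by the slowest
# downstream stage, not by raw code production.
# All stages and numbers are hypothetical, for illustration only.

def effective_throughput(stages: dict) -> tuple:
    """Return the bottleneck stage and its rate (units: changes/week)."""
    bottleneck = min(stages, key=stages.get)
    return bottleneck, stages[bottleneck]

# A large org: AI-augmented production is abundant, but review,
# compliance, and deployment still run at human speed.
large_org = {
    "production": 500.0,
    "review": 60.0,
    "compliance": 40.0,
    "deployment": 80.0,
}

# A small team: the same people produce, review, and deploy,
# so the stages scale together.
small_team = {"production": 50.0, "review": 45.0, "deployment": 50.0}

print(effective_throughput(large_org))   # ('compliance', 40.0)
print(effective_throughput(small_team))  # ('review', 45.0)
```

The large org's 10x production advantage is invisible in its shipped output: the compliance stage, not the engineers, sets the pace. The small team's lower raw production translates almost fully into shipped work.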
This is why Yegge predicts innovation will migrate to small, AI-augmented teams. Not because small teams have better tools — everyone has access to the same models — but because small teams have better organizational throughput.
The 50% Prediction and the Amazon Example
Yegge predicts that large companies will cut approximately 50% of their engineering staff, partly to offset the cost of AI tokens and partly because the remaining engineers, properly equipped, can maintain the same output.
He uses Amazon as a specific example, criticizing its layoff of 16,000 people as lacking an AI strategy. The cuts happened for cost reasons, not as part of a deliberate transition to AI-augmented engineering. That distinction matters. Cutting headcount without restructuring workflows is not an AI strategy. It is a cost reduction that happens to coincide with the AI era.
The organizations that navigate this transition successfully will not be the ones that cut fastest. They will be the ones that restructure their engineering workflows around AI augmentation before cutting — ensuring that the remaining team has the tools, processes, and governance infrastructure to maintain quality at higher throughput.
We have argued this point before: don’t fire your team, govern your AI. The team is how you govern it. Yegge’s prediction adds urgency to this argument but does not change its direction.
The Dracula Effect: A Human Constraint
One of the most original observations in the interview is what Yegge calls the “Dracula Effect” — the physical and cognitive exhaustion that comes from working at full AI-augmented intensity.
He reports that engineers operating at maximum AI-assisted productivity can sustain it for roughly three hours per day. After that, the cognitive load of directing, reviewing, and integrating AI output becomes overwhelming. He describes engineers needing daytime naps to recover, a pattern startup engineers report as well.
This matters strategically because it places a human constraint on the theoretical productivity gains. If an AI-augmented engineer is 100x more productive per hour but can only sustain that intensity for three hours, the net daily output is different from what the per-hour multiplier suggests.
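The gap between the per-hour and per-day multipliers is easy to quantify. In this sketch, the 100x and three-hour figures come from the discussion above; the eight-hour workday and the assumption that non-peak hours run at baseline (1x) productivity are mine:

```python
# Net daily multiplier under the "Dracula Effect" (numbers illustrative):
# an engineer who is `per_hour` times more productive at peak intensity,
# but can only sustain it for `peak_hours` of a `workday_hours` day,
# spending the remaining hours at baseline (1x) productivity.

def daily_multiplier(per_hour: float, peak_hours: float,
                     workday_hours: float = 8.0) -> float:
    """Effective daily output relative to a baseline day at 1x throughout."""
    baseline_day = workday_hours * 1.0
    augmented_day = peak_hours * per_hour + (workday_hours - peak_hours) * 1.0
    return augmented_day / baseline_day

# A claimed 100x per hour, sustainable for 3 hours of an 8-hour day:
print(daily_multiplier(per_hour=100, peak_hours=3))  # 38.125
```

Still a dramatic gain, but roughly 38x rather than 100x: planning headcount or delivery schedules around the per-hour figure overstates capacity by more than half.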
It also creates a management challenge. Organizations that expect eight hours of AI-augmented intensity will get burnout and attrition, not productivity. The smart response is to redesign the workday around bursts of high-intensity AI-directed work, with the remaining time allocated to the activities that AI cannot replace: architectural thinking, stakeholder alignment, strategic planning, and recovery.
This is counterintuitive. The more powerful the tools become, the shorter the effective workday — at peak intensity. The value of the engineer shifts from hours spent to decisions made.
Manual Coding Is No Longer the Differentiator
Yegge makes a claim that will be emotionally difficult for many engineers: knowing how to code manually no longer distinguishes you. Programming language choices “have never mattered less.”
This echoes what we have been documenting. In the phase shift essay, we noted that the differential value of the exceptional engineer is no longer in typing speed or API memorization. It is in the ability to define problems, evaluate trade-offs, and maintain architectural coherence when the agent cannot.
Yegge adds a hopeful counterpoint: software demand continues to grow. The democratization of development — he predicts his non-technical wife could become a meaningful contributor to a software project through AI tools — means the total volume of software built will increase. Employment opportunities remain, but they are fundamentally transformed.
The engineers who thrive will be those who can operate at higher levels of Yegge’s framework: orchestrating agents, designing systems, governing output quality. The engineers who resist the transition — insisting that manual coding skill is the measure of engineering competence — will find themselves increasingly outperformed by colleagues who embraced the new model.
Grief and Renewal
Yegge closes with something unexpectedly human. He acknowledges that for engineers who spent decades building manual coding expertise, watching that skill become commoditized involves a genuine grief process. Skills they invested years in mastering are no longer scarce.
But he also describes building software as “more fun than ever.” The tedious parts — boilerplate, syntax, debugging trivial errors — are increasingly handled by agents. What remains is the creative, architectural, strategic work that attracted most engineers to the field in the first place.
This duality — grief for the old skill set, excitement for the new one — is something we see in every client engagement. The organizations that process this transition well are the ones that name it honestly. Pretending nothing has changed demoralizes the team. Pretending everything has changed terrifies them. The truth is in between: the core of engineering — solving problems with structured thinking — is unchanged. The surface layer — how you express solutions in code — is being automated.
What This Means for Your Organization
Yegge’s interview reinforces three principles we have been advocating:
Governance over speed. The organizations that win are not the ones that adopt AI fastest. They are the ones that build the governance infrastructure to adopt AI safely and sustainably. Yegge’s 50% prediction will tempt leaders into rapid, ungoverned cuts. Resist that temptation. Restructure first, then right-size.
Orchestration is the real capability. Yegge’s Level 8 — custom orchestrator systems managing agent networks — is where the strategic advantage concentrates. This is not a developer productivity tool. It is an organizational operating system. The companies that build this capability will have a structural advantage that is difficult to replicate.
The absorption bottleneck is the binding constraint. It does not matter how fast your engineers can produce code if your organization cannot review, deploy, and operate it at the same speed. Before investing in more AI tooling, invest in the downstream infrastructure that allows you to use what you already have.
The S-curve Yegge describes is real. The steep part is here. The question is not whether your engineering organization will be transformed. It is whether that transformation will be deliberate or chaotic.
Deliberate transformation requires governance. Governance requires infrastructure. Infrastructure requires investment — not in models, but in the organizational systems that make models useful.
That is where the work is. That is where the advantage is built.
Recommended Reading
This essay draws on Steve Yegge’s interview with Gergely Orosz, published February 10, 2026 in The Pragmatic Engineer. The full interview contains significantly more detail on all the topics covered here, including Yegge’s personal journey with AI tools, specific technical observations about agent orchestration, and his predictions for the industry over the next two to five years.
I strongly recommend reading the full piece: Steve Yegge on AI Agents and the Future of Software Engineering by Gergely Orosz, The Pragmatic Engineer.
If you do not already subscribe to The Pragmatic Engineer, it is one of the most consistently valuable publications in the software industry. Gergely combines insider access with rigorous analysis in a way that few other writers achieve.
Sources
- Steve Yegge. Interview with Gergely Orosz. “Steve Yegge on AI Agents and the Future of Software Engineering.” The Pragmatic Engineer, February 10, 2026.
- Andrej Karpathy. “A few random notes from claude coding.” X/Twitter, January 2026.
- Boris Cherny. 259 PRs with Claude Code. X/Twitter, December 2025.
- Erik Meijer. Cited in Yegge interview, The Pragmatic Engineer, February 2026.
- Gene Kim and Steve Yegge. “Vibe Coding” (co-authored work referenced in interview).
At Victorino Group, we help organizations build the governance and orchestration infrastructure that makes AI-augmented engineering sustainable. If your team is navigating this transition — whether you are at Level 2 or Level 7 — we can help you move deliberately. Contact us or visit victorinollc.com.
If this resonates, let's talk
We help companies implement AI without losing control.
Schedule a Conversation