The Speed Illusion Has Three Names Now
In 1900, American factories began installing electric motors. Thirty years later, most of them had finally figured out how to benefit from the change.
The economist Paul David documented this in a 1990 paper that technology optimists would rather forget. Electrification did not produce measurable productivity gains for three decades. Not because the technology was bad. Because factories bolted electric motors onto steam-era layouts, preserving the old architecture while adding a new power source. The gains only materialized when manufacturers redesigned their entire production system around what electricity actually made possible.
This week, three independent analysts published pieces that, read together, describe the same delay happening with AI. Different authors. Different audiences. Different vocabularies. Same diagnosis: speed without structural change produces the illusion of progress, not the substance of it.
Source One: The Build-Build-Build Loop
Alvaro Lorente, writing at The Engineering Tax on March 18, identifies a pattern in software teams adopting AI coding tools. The classic product development cycle is “build, learn, adjust.” Teams write code, observe what happens, and revise their approach based on what they learned.
AI tooling has collapsed that cycle into “build, build, build.”
Lorente’s argument is precise. Teams are shipping more code. Volume is up. But delivery metrics remain flat. Developers report productivity gains of 50% or more in surveys. Controlled experiments on complex tasks tell a different story.
The bottleneck, Lorente argues, has moved. It is no longer in code production and was never primarily there to begin with. It is in understanding. When you generate code faster than you can comprehend what it does, you have not accelerated development. You have accelerated the accumulation of things nobody fully understands. As we explored in The Speed Trap, code writing is roughly 20% of the delivery lifecycle. Optimizing that 20% while the other 80% stays the same does not produce a 20% gain. It produces congestion.
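The congestion claim is just Amdahl's law applied to delivery. Here is a minimal sketch of that arithmetic; the 20% share of the lifecycle is the article's figure, while the per-step speedup factors are illustrative assumptions:

```python
def overall_speedup(fraction_accelerated: float, step_speedup: float) -> float:
    """Amdahl's law: the overall gain when only one fraction of a
    process is accelerated and the remainder is left unchanged."""
    return 1 / ((1 - fraction_accelerated) + fraction_accelerated / step_speedup)

# Code writing is roughly 20% of the delivery lifecycle (per the article).
# Even an infinite speedup on that step caps the overall gain at 1.25x.
print(overall_speedup(0.20, 2))   # 2x faster coding -> ~1.11x overall
print(overall_speedup(0.20, 10))  # 10x faster coding -> ~1.22x overall
```

However fast the coding step gets, the other 80% of the lifecycle bounds the total gain, which is why the volume shows up in the pipeline rather than in the delivery metrics.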
What Lorente adds to the picture is the cognitive dimension. Speed without comprehension is not just operationally wasteful. It degrades the team’s ability to learn from what they build. And learning is the mechanism through which software teams improve over time.
Source Two: The 30-Year Echo
Omar El-Ayat at Euclid VC, writing March 19, takes a wider lens. His piece opens with General Motors’ $40-45 billion investment in factory robotics during the 1980s. GM treated robots as direct replacements for human labor. The robots performed individual tasks faster. The overall system did not improve. Toyota, which invested a fraction of that amount but redesigned its production process around what automation could do, overtook GM.
Toyota never sold a robot. They shipped a better car.
El-Ayat then pulls the frame wider. Three sectors (construction, healthcare, and transport) account for 37% of US work hours but only 24% of output. Productivity growth in those sectors has been below 0.5% annually since 2005. These are not industries starved of technology. They are industries that adopted technology without redesigning how work flows through the organization.
The historical parallel El-Ayat draws is David’s electrification study. The pattern is consistent across technology eras: a powerful new capability enters the system, organizations bolt it onto existing processes, measured productivity stalls or declines, and a long adaptation period follows before structural redesign unlocks the actual value.
El-Ayat also surfaces a trust number worth noting. Only 13% of people surveyed believe AI will do more good than harm. This is not Luddism. It is a rational response to watching organizations deploy a powerful tool without changing the systems around it. People can see that the speed is real and the results are not.
He directly opposes what he calls “Agenticism,” the belief that autonomous AI agents will simply replace human work at scale. His framing: this sells a myth because it assumes the technology is the constraint, when the constraint is organizational design.
Source Three: When Code Stops Being Scarce
Julie Beliao at Mozilla AI, writing March 17, approaches from the regulatory angle. Her observation is compact and uncomfortable: code is no longer scarce.
For decades, the limiting factor in software was the ability to produce it. Skilled developers were expensive and in short supply. Organizations competed for engineering talent because writing code was the bottleneck.
AI dissolved that scarcity. Code generation is now cheap, fast, and accessible to non-specialists. But when production becomes frictionless, the question shifts from “can we build it?” to “should we build it, and who decides?”
Beliao points to the regulatory response already underway. The EU AI Act imposes pre-market obligations on high-risk AI systems. In the United States, over 1,000 AI-related bills were introduced in 2025 alone. Legislators are scrambling to answer the question that organizations should have asked first: who has the standing to slow things down?
The governance vacuum Beliao describes is not hypothetical. It is the structural condition that makes the problems Lorente and El-Ayat identify possible. Teams build without learning because nobody has the authority or the process to mandate the learning step. Organizations bolt AI onto existing workflows because nobody has redesigned the workflow. Code floods the system because nobody governs the flow.
The Convergent Pattern
Three analysts. Three domains (engineering practice, venture capital, technology policy). Zero citations of each other. The same conclusion.
The pattern they describe has a specific shape:
- A new technology dramatically accelerates one step in a complex process.
- Organizations apply it to that step without restructuring the rest.
- Speed metrics improve. Outcome metrics do not.
- A period of frustration follows, during which practitioners blame the technology, skeptics declare it overhyped, and the real problem (organizational redesign) goes unaddressed.
- Eventually, organizations that redesign around the technology’s actual capabilities pull ahead. The rest fall further behind.
David documented this with electricity. El-Ayat documents it with factory robotics. Lorente documents it with AI coding tools. Beliao documents the absence of governance structures that would force the redesign to happen.
This is not a coincidence. It is a structural feature of how organizations absorb general-purpose technologies. The technology arrives faster than the institution can adapt, and the interim period looks like failure even though it is actually a design problem.
What Makes This Time Harder
Every technology era is different, and the differences matter. The electrification delay had a natural forcing function: physical factory redesign required capital investment and downtime. Organizations could not partially electrify forever. Eventually, the old steam infrastructure degraded and forced a rebuild.
AI has no equivalent forcing function. A team can run “build, build, build” indefinitely. The code compiles. The PRs merge. The dashboards show activity. Nothing physically breaks in a way that demands redesign. The degradation is invisible. It shows up in stagnant cycle times, in rising defect rates that nobody correlates with AI adoption, in the slow erosion of institutional knowledge as teams stop learning from what they build.
As Berkeley researchers found (documented in The AI Intensity Trap), AI does not reduce work. It intensifies it. Tasks expand, boundaries blur, multitasking increases. The tool makes everything feel adjacent and possible. Without governance to set boundaries, the intensity compounds.
The AI Verification Debt research adds another layer: 96% of developers distrust AI-generated code, yet only 48% verify it consistently. The trust deficit exists. The verification infrastructure does not. Speed without verification is not productivity. It is inventory with an expiration date nobody tracks.
The Question Nobody Is Asking
Lorente asks: are we building faster or just building more?
El-Ayat asks: are we repeating the GM mistake at a civilizational scale?
Beliao asks: who has the standing to slow things down?
These are three formulations of the same question. And the question nobody is asking is the one that matters most: what does the redesigned organization look like?
Not “how do we add AI to our current process?” Not “how do we govern AI use?” The deeper question: if we were building this organization today, knowing what AI can do, what would the workflow look like? How would decisions flow? Where would humans spend their time? What would the learning loops be?
David found that factories which redesigned around electricity did not just move faster. They organized differently. Single-story layouts replaced multi-story buildings. Machines grouped by workflow replaced machines grouped by power shaft. The gains came from rethinking the entire system, not from plugging in a better motor.
The organizations that will thrive with AI are doing the equivalent work now. Not adding AI to existing processes. Redesigning processes around what AI changes. That redesign requires governance, because governance is how organizations encode decisions about what to build, what to skip, and how fast to move.
Speed is easy. Everyone has speed now. The scarce resource is the institutional capacity to convert speed into outcomes. Three independent analysts, looking at the same week from three different angles, arrived at that conclusion separately.
The 30-year electrification delay was not caused by bad motors. It was caused by good motors installed in bad architectures. We are early in the same cycle. The question is whether it takes another thirty years to learn the lesson, or whether the organizations willing to redesign now can compress the timeline.
The motor works. The factory does not. Yet.
This analysis synthesizes The Illusion of Speed by Alvaro Lorente (March 2026), We Need to Talk About Agents by Omar El-Ayat (March 2026), and When Shipping Becomes Too Easy by Julie Beliao (March 2026).
Victorino Group helps organizations redesign around AI’s actual capabilities, not just deploy it faster. Let’s talk.