Claude Code Makes You Faster. Faster Builds the Wrong Product Faster.

Thiago Victorino
6 min read

Ethan Ding wrote a sentence in May that engineering leaders should pin above their dashboards: Claude Code is not making your product better. It is making your code faster. Those are different statements, and conflating them is now the most expensive measurement error inside engineering organizations.

Coding agents collapse one axis: the time it takes to produce code that compiles, passes tests, and looks like the code a senior engineer would have written. They do not collapse the axis of judgment about what should be built, when to stop building it, or whether the thing being built was worth building in the first place. The two axes are independent. Treating them as one is how teams generate beautiful velocity charts attached to products nobody wanted.

This is a different argument from the one we made in Claude Code Auto Mode and Cognitive Debt. Those posts examined safety and skill atrophy. This one is about a quieter failure mode: shipping the wrong product, faster, with cleaner commits.

The Metric Most Leaders Are Running With

Pull request velocity. Lines of code per developer per week. Story points closed. Tasks shipped per sprint. These metrics existed before AI coding agents. AI made them dramatically easier to inflate.

The pattern is consistent across the teams we work with. A leader pilots Claude Code or a similar tool. Throughput jumps. The first month feels transformative. Six months later, the product roadmap has expanded by 40 percent, the codebase has grown by even more, the on-call burden is up, and the conversion rate of the actual product has not moved.

The throughput metric was real. It was also irrelevant to the question the leader was actually trying to answer, which was whether the product was getting better.

Ding makes the point sharply: senior engineers and early-stage products see the largest raw speed gains, because both groups have the highest ratio of “I know exactly what to type” to “I know what should exist.” When you already know what to build, an agent that types faster is enormous leverage. When you do not, the agent helps you build the wrong thing more efficiently.

The Carrying Cost Nobody Charges Against the Wins

Software has weight. Every feature shipped becomes a feature that must be maintained, monitored, supported, deprecated, and eventually removed. Coding agents reduce the cost of writing that feature. They do not reduce, and may increase, the cost of carrying it.

Three forces compound. The first is surface area. Agents are excellent at adding capability. They are mediocre at removing it. Codebases under heavy agent-assisted development tend to grow faster than codebases under purely human development, because the marginal cost of adding a feature has dropped while the marginal cost of removing one has not.
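
The asymmetry is measurable. As a rough check, the sketch below (assuming Python 3 and a local git checkout; the monthly grouping and the 3:1 flag threshold are illustrative choices, not calibrated benchmarks) tallies lines added versus lines deleted per month from git history:

```python
# Sketch: tally insertions vs. deletions per month from git history.
# Run from the repo root. Grouping and threshold are illustrative.
import subprocess
from collections import defaultdict

log = subprocess.run(
    ["git", "log", "--numstat", "--date=format:%Y-%m", "--pretty=%ad"],
    capture_output=True, text=True, check=True,
).stdout

added, removed = defaultdict(int), defaultdict(int)
month = None
for line in log.splitlines():
    line = line.strip()
    if not line:
        continue
    if "\t" in line:
        add, rem, _ = line.split("\t", 2)
        if add.isdigit() and rem.isdigit():  # skips binary-file "-" entries
            added[month] += int(add)
            removed[month] += int(rem)
    else:
        month = line  # a %ad date line, e.g. "2026-03"

for m in sorted(added):
    ratio = added[m] / max(removed[m], 1)
    flag = "  <- growth outpacing deletion" if ratio > 3 else ""
    print(f"{m}: +{added[m]} / -{removed[m]} (ratio {ratio:.1f}){flag}")
```

A high ratio does not by itself prove bloat, but a ratio that jumps after an agent rollout and never comes back down is exactly the surface-area growth described above.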

The second is internal complexity. Agents produce code that works. They do not always produce code that fits. A function written by Claude Code in one session looks slightly different from a function written by Claude Code in a different session, even when both functions do similar things. Multiply that by a year of agent-assisted development and you get a codebase with the surface area of a senior team and the internal coherence of an outsourced bidding war.

The third is dependency drift. Agents reach for libraries. They reach for the libraries that appeared most often in their training data. Those libraries may not match your team’s standards, your security posture, or your existing patterns. Every reach is a small drift. Drift compounds.
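
Drift is also cheap to audit. A minimal sketch, assuming a Python project with a requirements.txt and a team-maintained allowlist (the APPROVED set and file name here are placeholders; package.json, go.mod, and Cargo.toml admit the same shape of check):

```python
# Sketch: flag declared dependencies missing from the team's allowlist.
# APPROVED and the file name are placeholders; adapt to your stack.
import re
from pathlib import Path

APPROVED = {"requests", "sqlalchemy", "pydantic", "pytest"}  # illustrative

def declared_packages(requirements: Path) -> set[str]:
    names = set()
    for line in requirements.read_text().splitlines():
        line = line.split("#")[0].strip()      # drop comments
        if not line or line.startswith("-"):   # skip flags like -r, -e
            continue
        match = re.match(r"[A-Za-z0-9_.\-]+", line)
        if match:
            names.add(match.group(0).lower())
    return names

for package in sorted(declared_packages(Path("requirements.txt")) - APPROVED):
    print(f"unapproved dependency: {package}")
```

Run something like this in CI and every agent-introduced library becomes a visible, reviewable decision instead of a silent drift.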

None of this shows up on a velocity chart. It shows up two years later, when the team that shipped fast in 2026 cannot move at all in 2028 because the carrying cost of the thing they built has eaten the budget for building anything new.

The Under-Discussed Bottleneck

Ding’s central point is the one most worth sitting with. The bottleneck in product quality was never typing speed. It was taste. It was judgment about what to build. It was the discipline to say no to features that would compile, ship, and degrade the product. Agents do not help with any of that. They help with everything downstream of that decision.

This reframes a common executive question. Leaders ask “how do we get more out of Claude Code?” The better question is “now that Claude Code has removed the typing constraint, what is the actual constraint?” For most teams, the actual constraint is product judgment, which is held by a small number of people, scales poorly, and was already the bottleneck before AI arrived. Removing the typing constraint did not solve that problem. It made it more visible.

The teams getting outsized leverage from coding agents are the teams whose product judgment is strong, whose taste is articulate, whose definition of “done” is precise, and whose willingness to delete code is greater than their willingness to add it. For those teams, agents are a force multiplier. For everyone else, agents are an amplifier of whatever was already true.

If your product was drifting before Claude Code, Claude Code will help it drift faster.

The Source Caveat

This argument synthesizes Ethan Ding’s piece, which is opinion-led, not data-led. Ding does not present a controlled study. He presents a pattern from his own observation and from the observations of leaders he talks to. We agree with the pattern because it matches what we see in client engagements and what our own data on AI productivity has shown across the 60 percent ungoverned cohort. But the strength of the claim is the strength of pattern recognition, not regression analysis. Treat it accordingly.

The pattern still has practical implications.

Do This Now

Audit the AI ROI metric your organization is currently using. If it measures throughput (PRs, lines, tasks, stories, velocity), you are measuring something that AI inflates without measuring whether the product is getting better. Replace it.

The replacement is uncomfortable because it is harder to compute. Outcome metrics like activation rate, retention curve, support ticket volume, conversion to paid, and feature adoption rate are the metrics that tell you whether the team is building the right thing. These metrics do not respond cleanly to throughput. They respond to taste, to judgment, to the willingness to kill features. AI coding agents do not move these metrics directly. They move throughput, which sometimes correlates with outcomes and sometimes inversely correlates with them.

Pick three outcome metrics that matter to your business. Track them monthly against your AI tool spend and your shipped feature count. If outcome metrics are flat or declining while shipped feature count is climbing, you are in the trap. The fix is not more agents. The fix is fewer features, sharper judgment about what to build, and explicit budget for deletion.
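
Concretely, the monthly check can be a few lines. In the sketch below, the figures are placeholders for an analytics export, and the 25 percent growth threshold is a judgment call, not a standard:

```python
# Sketch of the monthly trap check: shipped features climbing while
# outcome metrics stall. All figures are placeholder data.
monthly = [
    # (month, shipped_features, activation_rate, retention_30d)
    ("2026-01", 12, 0.31, 0.42),
    ("2026-02", 19, 0.30, 0.41),
    ("2026-03", 27, 0.31, 0.40),
]

def delta(series: list[float]) -> float:
    return series[-1] - series[0]

features = [row[1] for row in monthly]
outcomes = {
    "activation_rate": [row[2] for row in monthly],
    "retention_30d":   [row[3] for row in monthly],
}

feature_growth = delta(features) / max(features[0], 1)
stalled = [name for name, series in outcomes.items() if delta(series) <= 0]

if feature_growth > 0.25 and stalled:
    print(f"Trap: shipped features up {feature_growth:.0%} "
          f"while flat or declining: {', '.join(stalled)}")
```

The crude first-versus-last delta is deliberate: if the trap is not visible at this resolution, more sophisticated statistics will not rescue the metric.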

The CFO question for AI in 2026 is not “how much faster are we shipping.” It is “are the things we are shipping making the product worth more.” Coding agents help with the first question. They are silent on the second.


This analysis synthesizes Claude Code Is Not Making Your Product Better (Ethan Ding, May 2026).

Victorino Group helps engineering and product leaders measure AI ROI on outcomes, not throughput. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →
