Laziness Is a Virtue. Your LLM Doesn't Have It.
Larry Wall, the creator of Perl, put laziness at the top of his list of programmer virtues. Not the laziness of skipping work. The laziness he meant is, in his own words, “the drive to create powerful abstractions that allow us to do much more, much more easily.”
Good engineers are lazy the way good architects are lazy. They do not want to carry the same weight twice. They invent abstractions so their future self — and everyone after them — can stop repeating the same motion. Elegance is not decoration. It is an energy budget.
Bryan Cantrill, the creator of DTrace, has just reminded us that this virtue has a new enemy.
The virtue LLMs cannot hold
In “The Peril of Laziness Lost,” published this week on The Observation Deck, Cantrill argues that laziness is structurally unavailable to large language models. His line is the kind of sentence that sits in your head for a while: LLMs “lack the virtue of laziness” because “work costs nothing to an LLM.”
Read that again.
A human engineer writes one hundred lines and flinches. The flinch is the feature. The flinch is what drives them to stop, look up, realize the same pattern lives in three other files, and collapse all four into a function. The flinch is expensive. That is the point.
An LLM does not flinch. It can generate the fourth copy as cheaply as the first. There is no friction, no tax, no gravitational pull toward abstraction. So it produces. And produces. And produces.
The output looks like work. It is not, in the Larry Wall sense, engineering.
The anti-metric
Cantrill’s target is specific. He goes directly at Garry Tan’s celebration of generating “37,000 lines of code per day.” That number is being used to prove AI is transforming software. Cantrill is saying the opposite: that number is proof something has gone structurally wrong. It is the shape of laziness-lost, held up as a trophy.
Think about what it would mean for a human engineer to brag about writing 37,000 lines in a day. You would not assume they had shipped value. You would assume they had skipped every abstraction, repeated every pattern, and left a mess for whoever came next. The number itself would embarrass them.
Somehow, when the producer is a model, the same number becomes a headline.
This is a measurement problem dressed up as a progress story. If you score AI-assisted development by volume, you are rewarding precisely the behavior a good engineer has spent a career training themselves out of.
Governance implication
Here is the quiet conclusion, and it is the reason this matters to anyone thinking about AI governance.
The review metrics we inherited — velocity, lines shipped, tickets closed, PR count — were already wobbly for humans. They tolerated being wobbly because humans had an internal counterweight: the flinch. The laziness instinct. The private shame of writing the fourth copy. You could half-measure quantity and trust the engineer’s virtue to handle quality.
Remove the virtue from the producer and the metric collapses.
If your AI review process adds up lines, files, functions, or commits, you are running the exact scoreboard that punishes good engineering and promotes bloat. The direction of the incentive is wrong. A governance system for AI-assisted code has to subtract, not add. It has to measure:
- Abstractions created, not lines produced.
- Duplication removed, not files touched.
- Complexity retired, not complexity shipped.
- Paths through the codebase that got shorter, not longer.
The question is not how much the model wrote this week. The question is whether the codebase got lazier, in the Larry Wall sense, because the model was there.
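As a concrete illustration of scoring by subtraction, here is a minimal sketch that parses `git diff --numstat` output and reports whether a change set shrank the codebase. The function name `laziness_score` and the idea of treating net deletions as the positive signal are assumptions for illustration, not an established metric; real governance would also need to look at duplication and abstraction, which raw line counts cannot see.

```python
# Illustrative sketch: score a change set by what it removes, not what it adds.
# Input is `git diff --numstat` output: "added<TAB>removed<TAB>path" per line.
# `laziness_score` is a hypothetical helper, not a standard tool.

def laziness_score(numstat: str) -> dict:
    added = removed = 0
    for line in numstat.strip().splitlines():
        a, r, _path = line.split("\t")
        if a == "-":  # binary files report "-" for both counts; skip them
            continue
        added += int(a)
        removed += int(r)
    return {
        "added": added,
        "removed": removed,
        "net": added - removed,  # negative is good: the codebase got smaller
    }

sample = "12\t87\tsrc/util.py\n30\t5\tsrc/new_helper.py"
print(laziness_score(sample))  # {'added': 42, 'removed': 92, 'net': -50}
```

A dashboard built on this inverts the 37,000-lines trophy: the number to celebrate is the one below zero.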
What LLMs are for
Cantrill is not anti-LLM. His prescription is sharper than that. He argues LLMs should sit under human judgment that is rooted in laziness. Used on technical debt. Used to hunt duplication. Used to draft the boring scaffolding that a lazy engineer would rather not type twice. Used as a tool by the virtue, not as a replacement for it.
That reframing fits what we already see on the ground. The teams getting compounding value from AI are the ones with strong quality infrastructure and a culture that treats abstraction as a first-class deliverable. The teams in trouble are the ones that let the model produce unsupervised and measured the result by weight.
We wrote about the adjacent failure mode in “Your Codebase Already Has an AI Governance Layer.” That piece is about how linters, types, and tests become the AI governance layer you already owned. This one is about the human virtue those tools were built to protect. Both point at the same uncomfortable truth: AI governance is mostly engineering discipline wearing a new hat.
The short version
Good engineers are lazy. LLMs cannot be. If you measure AI by output, you are measuring the wrong thing, and you will watch your codebase get heavier while your dashboard gets greener.
The fix is not another tool. It is refusing to let volume be the proxy for value. Reward the abstraction. Reward the deletion. Reward the shorter path. Let the model help your engineers be lazier, in the one sense of that word that has ever mattered.
Sources
Bryan Cantrill’s “The Peril of Laziness Lost” (The Observation Deck, April 12, 2026) makes the case that laziness is a structural virtue LLMs cannot hold, and names the 37,000-lines-per-day claim as the anti-pattern that follows when that virtue is absent. Larry Wall’s original framing of laziness as a programmer virtue comes from Programming Perl. The connective tissue is simple: if the producer has no cost function, the reviewer must supply one — and that reviewer is you.
- Bryan Cantrill. “The Peril of Laziness Lost.” The Observation Deck, April 12, 2026. https://bcantrill.dtrace.org/2026/04/12/the-peril-of-laziness-lost/
Victorino Group helps teams measure AI by what it removes, not by what it generates. Let’s talk.