Builders, Shippers, Coasters: How 900 Engineers Actually Pay for AI in 2026

Thiago Victorino
8 min read

The cleanest dataset on engineering AI spend in 2026 did not come from a vendor whitepaper or a McKinsey deck. It came from Gergely Orosz polling more than 900 of his readers (engineers, engineering managers, CTOs) about what they actually pay, what they actually hit limits on, and what their finance teams actually push back against.

The numbers are uncomfortable for almost everyone. They are most uncomfortable for the executives who still believe an AI tool seat is a single line item with a single ROI.

The cost reality, in three uncomfortable numbers

Enterprise max plans now run $100 to $200 per month per engineer. Individual subscriptions hover around $20 a month. Roughly 5% of engineers maintain separate work and personal subscriptions because their employer’s plan does not cover what they actually use.

About 30% of engineers hit their usage limits. They run out of tokens, they run out of requests, they run out of whatever the vendor is metering this quarter. Roughly 20% report deliberately managing around the line by switching tools, upgrading, or moving to API pricing, where the meter at least runs honestly.

UK and EU finance teams push back hard at $30 to $50 per month per engineer. One CEO publicly questioned £25 a month for a 10-person startup. Orosz’s respondents in the US describe their employers as “more comfortable with investing first and measuring impact later.” The geographic split is not subtle. It is a different theory of what engineering tooling is worth.

Three numbers, one story: the price of an AI-equipped engineer is no longer a stable line item, the meter is running on multiple axes, and the people writing the checks are starting to read the meter.

The archetype that the cost data hides

Here is the part of Orosz’s survey that breaks every spend dashboard I have looked at this year. He asked engineers, in their own words, who is benefiting from AI tooling and who is not. Three archetypes emerged with enough consistency to name.

Builders are, in his respondents’ language, “those who care about quality, good architecture, following good coding practices.” They are the engineers your principal engineer recruits to fix the systems nobody else wants to touch. They struggle most with AI code review. Several report something closer to identity loss than productivity gain: when the model writes the code, the part of the job they recognized as theirs gets harder to locate.

Shippers are the ones focused on “outcomes for a product, features, testing, and experimenting with users.” They are the most enthusiastic adopters in the survey. They are also, by their own admission, the ones accumulating tech debt fastest, and the ones most likely to “build the wrong things” because the model removed the friction that used to force a second thought.

Coasters are described as “engineers who are not considered particularly good or great engineers, but they get the work done.” They uplevel the fastest in raw output. They also generate the most slop: code that compiles, ships, and quietly costs the team weeks of cleanup six months later.

Three archetypes. One tool. Three completely different ROI curves.

Why the seat-cost question is the wrong question

If you bought a $200-a-month max plan for a Builder, you bought identity friction and slower review cycles. If you bought it for a Shipper, you bought velocity and a future tech-debt invoice. If you bought it for a Coaster, you bought visible output and an invisible quality problem your QA team is going to inherit.

The seat costs the same $200. The realized value is in three different currencies, on three different timelines, with three different failure modes.

This is what the spend dashboards miss. They aggregate seats. They average usage. They cannot tell you, in your own data, that the Shipper who hits her limit every Tuesday is generating revenue and that the Coaster who hits his limit every Tuesday is generating tickets. Both engineers look identical in the procurement view. They are not identical in the outcome view, and the gap between those two views is where unmanaged AI spend lives.

The figure that 30% of engineers hit usage limits is the survey’s most cited statistic. The more interesting question is which 30%. A Builder hitting limits because she is reviewing every diff carefully is signal. A Shipper hitting limits because he is iterating on the wrong feature is risk. A Coaster hitting limits because the tool is now writing most of his commits is something you need to know before the next performance review.
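To make the gap between the two views concrete, here is a minimal sketch in Python. Every number and field name is hypothetical, invented for illustration rather than drawn from the survey or from any vendor dashboard: two engineers who are indistinguishable in the procurement view and opposites in the outcome view.

from dataclasses import dataclass

@dataclass
class SeatView:
    """What the procurement dashboard shows."""
    seat_cost_usd_monthly: int
    limit_hits_per_month: int

@dataclass
class OutcomeView:
    """What only your own data can show (hypothetical fields)."""
    archetype: str                    # "builder" | "shipper" | "coaster"
    revenue_linked_prs: int           # PRs traceable to shipped, revenue-bearing work
    rework_tickets_trailing_6mo: int  # cleanup tickets traced back to AI-assisted commits

engineers = {
    "shipper_a": (SeatView(200, 4), OutcomeView("shipper", 12, 3)),
    "coaster_b": (SeatView(200, 4), OutcomeView("coaster", 1, 19)),
}

for name, (seat, outcome) in engineers.items():
    # Identical in the procurement view...
    print(f"{name}: ${seat.seat_cost_usd_monthly}/mo, {seat.limit_hits_per_month} limit hits")
    # ...very different in the outcome view.
    print(f"  {outcome.archetype}: {outcome.revenue_linked_prs} revenue-linked PRs, "
          f"{outcome.rework_tickets_trailing_6mo} rework tickets over 6 months")

The specifics are invented; the shape of the problem is not. Until the two views are joined per engineer, the spend report cannot distinguish the Tuesday limit hit that pays for itself from the one that creates tickets.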

The geography is also archetype data

The UK and EU finance push-back at $30 to $50 a month is not stinginess. It is a different theory of evidence. European procurement asks the question that US procurement is still deferring: show me the realized value before I sign the renewal.

US “invest first, measure later” works in a market where the model upgrades every quarter and the productivity narrative is loud enough to drown out the unit economics. It works less well when 30% of your engineers are hitting usage limits, when 5% are paying out of pocket to fill the gap, and when your finance team has started reading the survey data their European peers have been reading for a year.

The repricing pressure is going to arrive in the US too. It always does. The companies that will survive that pressure are the ones who can answer the archetype question before it is asked.

What to measure instead

The recommendation is small and unsexy: stop treating AI spend as a per-seat cost question. Treat it as an archetype-effectiveness question.

For each engineer holding a paid AI seat, you should be able to answer four things in your own data:

1. Which archetype is this engineer operating as, on this codebase, this quarter? A Builder on the platform team is a different investment than a Shipper on a feature squad.

2. What is their realized output pattern with the tool: velocity, quality, rework? Not the vendor’s productivity dashboard. Your data.

3. Where are the limit hits clustering, and what work is being done when they cluster? The 30% number is meaningless until you know what got produced inside it.

4. What is the trailing six-month cost of cleanup behind the work the tool produced? Slop is invisible until you measure it. Then it becomes the largest line item on the page.
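None of this needs a new product to exist. Here is a minimal sketch of what that measurement layer could compute per engineer, per quarter, in Python. All field names, thresholds, and numbers are assumptions for illustration; nothing here is prescribed by the survey or exposed by any vendor’s admin console.

from collections import Counter
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EngineerQuarter:
    engineer: str
    archetype: str              # classified per codebase, per quarter: builder / shipper / coaster
    merged_prs: int             # realized output with the tool
    reverted_prs: int           # quality signal: rework
    limit_hit_work: List[str]   # what was being produced when usage limits were hit
    cleanup_hours_6mo: float    # trailing six-month cleanup cost behind AI-assisted work

def seat_report(q: EngineerQuarter, seat_cost_monthly: float = 200.0) -> Dict:
    """Answers the four questions for one seat, in one row."""
    return {
        "engineer": q.engineer,
        "archetype": q.archetype,                              # question 1
        "rework_rate": q.reverted_prs / max(q.merged_prs, 1),  # question 2
        "limit_hit_clusters": Counter(q.limit_hit_work),       # question 3
        "cleanup_hours_6mo": q.cleanup_hours_6mo,              # question 4
        "quarterly_seat_cost_usd": seat_cost_monthly * 3,
    }

example = EngineerQuarter(
    engineer="eng_42", archetype="shipper",
    merged_prs=38, reverted_prs=6,
    limit_hit_work=["feature_x_iteration", "feature_x_iteration", "spike_prototype"],
    cleanup_hours_6mo=22.5,
)
print(seat_report(example))

The exact fields matter less than the join: the four questions only become answerable when seat cost, archetype, and outcome data sit in the same row, sourced from your own systems rather than the vendor’s.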

These are not vendor metrics. No AI tool ships them in its admin console. They are measurement-layer questions, and they are the only questions that turn $200 a month per engineer from a line item into a decision.

The honest framing for the next renewal cycle

Orosz’s survey contains one sentence the labs do not want anyone to memorize: “engineers in the US are more comfortable with investing first and measuring impact later.” Read it again, slowly. That sentence is the entire procurement strategy of multiple AI vendors right now. It is also the strategy with the shortest remaining shelf life.

The companies that renew confidently in 2027 will be the ones who built the archetype-aware measurement layer in 2026. The companies that cut AI lines to the bone in 2027 will be the ones who kept treating AI spend as a uniform per-seat purchase while the engineers holding those seats lived in three completely different economies.

Builders, Shippers, Coasters. Same tool. Three businesses underneath the invoice. Pick which one you are paying for, on purpose, before the renewal arrives.


This analysis is grounded in The Impact of AI on Software Engineers in 2026: Key Trends (Gergely Orosz / The Pragmatic Engineer, April 2026).

Victorino Group helps engineering leaders measure tool effectiveness by archetype, not just by spend. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →
