The Learning Curve Tax: What Anthropic's Own Data Reveals About AI Productivity
Anthropic just released its third Economic Index report. One million conversations. Every finding at p<0.001. The dataset is unprecedented: first-party usage data from the company building the model, not a survey of what people say they do with AI, but a measurement of what they actually do.
The headline numbers look like progress. Usage is diversifying. The top ten tasks dropped from 24% to 19% of total conversations between November 2025 and February 2026. Personal use grew from 35% to 42%. Business sales and automated trading workflows at least doubled their share.
But two findings buried in the report tell a different story.
The tenure effect
Users who have been on Claude for six months or longer achieve a 10% higher success rate than new users. They also tackle tasks requiring approximately one additional year of education for every year of platform tenure.
Ten percent sounds modest. It is not. In a population of millions of users, a 10% success differential between experienced and inexperienced users represents an enormous amount of wasted compute, wasted time, and wasted organizational investment. Every new employee, every new team, every new department that adopts AI starts at the bottom of that curve.
This is not a technology problem. It is a learning curve problem. And learning curves have a well-understood property: they are expensive to climb, and organizations climb them more slowly than individuals do.
As we documented in The AI Adoption Spectrum, OpenAI’s own data shows a 6x productivity gap between frontier workers and average users. Anthropic’s tenure data explains part of the mechanism. The gap is not just about willingness to adopt. It is about accumulated skill that takes months to develop and that organizations have no systematic way to transfer.
The value decline
Average task value dropped from $49.30 to $47.90 in hourly wage equivalent. The education requirement dropped from 12.2 to 11.9 years. Users are bringing AI to simpler tasks.
This is not necessarily bad. Broader adoption across lower-complexity tasks might signal healthy democratization. But combined with the tenure effect, it reveals a pattern: experienced users move toward harder, higher-value tasks while the growing mass of new users pulls the average down.
The implications for enterprise adoption are uncomfortable. When your organization rolls out AI tools, the aggregate metrics will show declining average value per interaction. Your dashboards will look like AI is getting less useful over time. The reality is more nuanced: a small group of experienced users is extracting increasing value while everyone else is still learning which prompts work.
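The dashboard paradox described above is a mixing effect, and it can be sketched in a few lines. Every number below is invented for illustration, not taken from Anthropic's report:

```python
# Illustrative sketch of the cohort-mixing effect: each cohort improves,
# yet the blended average falls because new users dominate the mix.
# All figures are invented for demonstration.

def blended_value(cohorts):
    """Average task value across cohorts, given (n_users, avg_value) pairs."""
    total = sum(n for n, _ in cohorts)
    return sum(n * v for n, v in cohorts) / total

# Period 1: a small experienced cohort and a larger new-user cohort.
p1 = blended_value([(1_000, 55.0), (4_000, 48.0)])

# Period 2: both existing cohorts improved (55 -> 58, 48 -> 52), but
# adoption added 10,000 fresh users at the bottom of the curve.
p2 = blended_value([(1_000, 58.0), (4_000, 52.0), (10_000, 46.0)])

print(f"Period 1 average: ${p1:.2f}")  # $49.40
print(f"Period 2 average: ${p2:.2f}")  # $48.40 -- lower, though no cohort declined
```

The aggregate drops even though every tracked cohort got better, which is why cohort-level reporting matters.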
This mirrors what the Federal Reserve found and we analyzed in The 2% Problem: aggregate productivity data masks enormous variation between organizations and between individuals within organizations. Anthropic’s data adds temporal granularity. The variation is not static. It evolves as users accumulate experience, and the rate of that accumulation is highly uneven.
The API migration signal
Coding tasks are migrating from Claude.ai to the API: up 14% on the API, down 18% on Claude.ai. Opus accounts for 51% of overall usage. And Opus share climbs faster with task value on the API: 2.8 percentage points per $10 increase in hourly wage equivalent, versus 1.5 on Claude.ai.
Translation: sophisticated users doing high-value work are moving to programmatic access. They are building workflows, integrations, and automated pipelines. They are not chatting with Claude. They are embedding Claude into systems.
This is the tenure effect manifested as infrastructure. Experienced users do not just get better at prompting. They change the interface entirely. They move from conversation to orchestration. And when they do, they disproportionately choose the most capable model and pay more for it.
Organizations that treat AI adoption as “give everyone a chat interface” are optimizing for the bottom of the learning curve. The users who generate the most value have already left that interface behind.
PyPI’s silence
Now layer in the Answer.AI analysis. Alexis Gallagher and Rens Dimmendaal from Jeremy Howard’s research lab analyzed every package on PyPI — roughly 800,000 total — looking for the production surge that should accompany the AI revolution.
They did not find it.
New packages per month have held steady between 5,000 and 15,000, with no inflection point after ChatGPT's launch. The line is flat. AI packages show higher release velocity (a median of 20-26 releases per year for popular packages versus 10 for non-AI), but the creation rate tells the real story. The ratio of AI to non-AI package creation shifted from 6:1 in 2021 to under 2:1 in 2024: relative to the broader ecosystem, AI packages are being created at a declining rate.
The authors propose two hypotheses. The “AI Skill Issue”: building production AI applications is genuinely harder than expected, requiring expertise that most developers lack. And “Money and Hype”: investment capital flooded into AI packages during the boom, and what we are seeing now is normalization after the gold rush.
They favor the latter. I think both are true, and Anthropic’s tenure data explains why.
The convergence
Here is what these two datasets reveal when read together.
Anthropic shows that individual AI skill takes months to develop, produces measurable performance differences, and manifests as fundamentally different usage patterns (API versus chat, Opus versus cheaper models, complex versus simple tasks). The learning curve is real, steep, and consequential.
Answer.AI shows that the expected production output of all this AI investment — measured in the most concrete possible way, actual shipped software packages — has not materialized at scale. The curve from investment to production output is flat.
The gap between these two observations is organizational learning. Individual users are climbing the curve. The PyPI data suggests that organizations, on aggregate, are not converting that individual learning into production output.
This is not a new pattern. As we explored in The Institutional AI Gap, individual productivity gains consistently fail to translate into organizational productivity gains without deliberate governance infrastructure. Anthropic’s data now provides first-party evidence for the mechanism: the learning curve is real, it takes months, and organizations have no systematic way to accelerate it.
The geographic convergence revision
One detail in the Anthropic report deserves attention. The previous Economic Index estimated that US geographic convergence in AI usage patterns would take 2-5 years. The new report revises that to 5-9 years.
That is not a minor correction. It nearly doubles the timeline. And it means the inequality effects of AI adoption — concentration among high-education, high-income, computer-intensive occupations — will persist longer than initially projected. Anthropic frames this explicitly as a governance concern, noting that economic monitoring should be treated as governance infrastructure.
They are right. As we documented in The AI Intensity Trap, Berkeley research shows that AI-driven work intensification falls disproportionately on already-overworked knowledge workers. The extended convergence timeline means this pattern has more time to entrench before broader adoption dilutes the effect.
What this means for enterprises
Three implications.
First, budget for the learning curve. The 10% tenure effect means your AI ROI calculations are wrong if they assume immediate productivity gains. Build six-month ramp periods into your adoption models. Expect the first quarter to show negative returns as users learn what works.
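One way to build the ramp into an ROI model is a simple per-user monthly calculation. The sketch below assumes a linear six-month ramp; the cost and gain figures are invented placeholders, not benchmarks:

```python
# Minimal sketch of a ramp-adjusted ROI model. The license cost and
# full-proficiency gain are assumed placeholders; only the six-month
# ramp length comes from the tenure finding discussed above.

MONTHLY_LICENSE_COST = 100.0   # per user per month, assumed
FULL_MONTHLY_GAIN = 400.0      # value per user at full proficiency, assumed
RAMP_MONTHS = 6                # learning-curve length from the tenure effect

def monthly_gain(month, ramp=RAMP_MONTHS):
    """Linear ramp: 0% of full gain in month 0, 100% from month `ramp` on."""
    return FULL_MONTHLY_GAIN * min(month / ramp, 1.0)

def cumulative_net(months, ramped=True):
    """Cumulative net value per user over `months` (month 0 = first month)."""
    gains = (monthly_gain(m) if ramped else FULL_MONTHLY_GAIN
             for m in range(months))
    return sum(g - MONTHLY_LICENSE_COST for g in gains)

for m in (3, 6, 12):
    print(f"month {m:2d}: naive ${cumulative_net(m, ramped=False):8.2f}  "
          f"ramped ${cumulative_net(m):8.2f}")
```

Under these assumptions the naive model shows a profit in the first quarter while the ramped model shows a loss, which is exactly the gap that makes unadjusted ROI projections look like failure.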
Second, measure cohorts, not averages. Aggregate metrics will show declining task value as adoption broadens. This is a feature, not a bug, but only if you are also tracking power user cohorts separately. The experienced users pulling toward the API are your leading indicators. The average is a lagging indicator that will mislead you.
Third, build transfer mechanisms. The tenure effect implies that AI skill is learnable but not automatically transferable. Organizations that create systematic ways to transfer prompting patterns, workflow designs, and integration architectures from experienced users to new users will climb the curve faster. This is a governance function, not a training function. It requires documentation, review processes, and institutional memory — the same infrastructure you build for any other critical organizational capability.
The learning curve tax is real. Anthropic measured it. PyPI confirmed its consequences. The question is not whether your organization will pay it. The question is whether you will pay it deliberately, with governance infrastructure that accelerates learning, or accidentally, wondering why the productivity gains never materialize.
This analysis synthesizes Anthropic’s Economic Index: Learning Curves (March 2026) and Answer.AI’s So Where Are All the AI Apps? (March 2026).
Victorino Group helps enterprises navigate the AI productivity gap with governance frameworks that account for organizational learning. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.