The Monetizable Spread: Why Open-Source Compression Makes Governance the Last Moat

Thiago Victorino

Dave Friedman coined a term that deserves to stick: the monetizable spread. It names something the AI industry has been dancing around for months. The distance between what frontier models can do and what open-source models can do is one number. The distance between what frontier models can charge for and what open-source gives away free is a different, smaller number. And it is shrinking faster.

Friedman’s distinction matters because the entire valuation thesis for closed-source AI companies depends on the second number, not the first.

Two Spreads, One Collapsing

Capability spread measures raw performance. Frontier models score higher on benchmarks. They handle harder tasks. They reason through longer chains. This spread can hold or even widen at the top. Breakthroughs at the frontier push the ceiling up.

Monetizable spread measures something else: the zone where customers will pay premiums. And this zone compresses from below.

Here is the mechanism. Open-source trailed state-of-the-art by roughly twelve months in late 2024, according to Epoch AI. That lag is now approximately three months. On MMLU, closed models scored 88% at the end of 2023 while open-source sat at 70.5%. The difference is now single digits.

Most enterprise tasks do not require frontier performance. The Anthropic Economic Index found that 36% of API usage goes to routine coding and math tasks. For these workloads, an open-source model running on your own infrastructure produces the same output as a frontier API call. The capability spread is irrelevant. Only the monetizable spread matters. And for routine tasks, it has already collapsed.
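The economics of that collapse are easy to sketch. The snippet below estimates the blended cost per 1K tokens when routine work is routed to a self-hosted open model; the 36% routine share comes from the Anthropic Economic Index figure above, while both per-token prices are hypothetical placeholders, not vendor quotes.

```python
# Back-of-envelope sketch of blended inference cost when routine tasks
# move to a self-hosted open-source model. Prices are hypothetical
# placeholders; only the 36% routine share comes from the article.

FRONTIER_COST_PER_1K_TOKENS = 0.015  # hypothetical frontier API price ($)
OPEN_COST_PER_1K_TOKENS = 0.002      # hypothetical self-hosted amortized cost ($)
ROUTINE_SHARE = 0.36                 # Anthropic Economic Index: routine coding/math

def blended_cost(frontier: float = FRONTIER_COST_PER_1K_TOKENS,
                 open_model: float = OPEN_COST_PER_1K_TOKENS,
                 routine: float = ROUTINE_SHARE) -> float:
    """Cost per 1K tokens if routine work runs on the open model."""
    return routine * open_model + (1 - routine) * frontier

# Fraction of the pure-frontier bill saved by routing routine work away
savings = 1 - blended_cost() / FRONTIER_COST_PER_1K_TOKENS  # ~31%
```

Even with these illustrative numbers, shifting only the routine third of the workload cuts roughly a third off the frontier-only bill, which is the monetizable spread compressing in practice.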

Friedman puts it precisely: “Revenue growth on a narrowing moat is a different asset than revenue growth on a widening one.”

The DeepSeek Efficiency Problem

DeepSeek V3 was trained in 2.6 million GPU hours. Llama 3 required 30.8 million. That is a roughly 12x efficiency improvement within a single generation of open-source models.

This number should worry anyone whose business plan depends on training cost as a barrier to entry. If a lab in China can deliver competitive performance at one-twelfth the compute cost, the capital moat around frontier AI is not structural. It is temporary. The next open-source release will be cheaper still.

The efficiency curve also changes the self-hosting equation. Enterprises that dismissed open-source deployment because of infrastructure costs need to recalculate. When the model requires 12x less compute, the total cost of ownership shifts dramatically.
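The scale of that shift is worth making concrete. This sketch uses the GPU-hour figures quoted above; the dollar rate per GPU-hour is a hypothetical placeholder for illustration, not a reported cloud price.

```python
# Training-cost comparison using the article's GPU-hour figures.
# The $/GPU-hour rate is a hypothetical placeholder, not a quoted price.

LLAMA3_GPU_HOURS = 30.8e6      # Llama 3 (from the article)
DEEPSEEK_V3_GPU_HOURS = 2.6e6  # DeepSeek V3 (from the article)
RATE_PER_GPU_HOUR = 2.00       # hypothetical $/GPU-hour

ratio = LLAMA3_GPU_HOURS / DEEPSEEK_V3_GPU_HOURS    # ~11.8x efficiency gap
llama3_cost = LLAMA3_GPU_HOURS * RATE_PER_GPU_HOUR
deepseek_cost = DEEPSEEK_V3_GPU_HOURS * RATE_PER_GPU_HOUR
delta = llama3_cost - deepseek_cost                 # capital no longer required
```

At any plausible rate, an order-of-magnitude drop in required GPU hours turns training cost from a structural barrier into a line item the next entrant can undercut.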

Revenue Without Moats

Consider the current valuations. OpenAI sits at roughly $850 billion, trading at 30x revenue, with projected losses of $14 billion in 2026. Anthropic reached $380 billion on a $14 billion run rate, projecting positive free cash flow by 2027.

These are not small companies. They are growing fast. But Friedman asks the right question: what kind of growth is this?

Revenue growth built on a widening moat compounds into durable market position. Revenue growth on a narrowing moat requires constant reinvestment just to maintain the spread. Every dollar of revenue costs more to defend as open-source catches up.

Thirty-seven percent of organizations still use AI at minimal scale, according to Deloitte’s 2026 survey. The market is early. Revenue will grow. But growing revenue on a narrowing monetizable spread means the growth curve is borrowing from the future, not building toward it.

Why Governance Survives Compression

As we argued in The $500 Billion Question, governance is the last moat in AI. Friedman’s framework explains the mechanism with more precision than we offered.

When capability spread compresses, technical differentiation fades. A frontier model that scores 92% while open-source scores 89% cannot command a 5x price premium on performance alone. The customer’s question shifts from “which model is best?” to “which vendor can I trust in a regulated environment?”

That question leads to governance. Enterprise agreements. Safety certifications. Regulatory positioning. Audit trails. Compliance documentation. SOC 2 and ISO 27001. The infrastructure that tells a CISO and a general counsel: yes, you can deploy this.

Open-source models are powerful. They are also uninsured. No vendor stands behind them in a regulatory audit. No enterprise agreement covers liability when a self-hosted model produces harmful output. No safety certification comes with the download.

Friedman calls regulatory advantage a “political bet, not a technology bet.” He is right, but the framing is incomplete. For the enterprise buyer, regulatory readiness is not a bet at all. It is a procurement requirement. The choice between a frontier API with governance guarantees and an open-source model without them is not a technology decision. It is a risk decision. And risk decisions are where governance infrastructure becomes the product.

The Missing Variable: Switching Costs

Friedman’s analysis has one blind spot worth noting. He focuses on the spread between capability and monetization but underweights enterprise switching costs.

Deploying an open-source model is not downloading a file. It requires infrastructure provisioning, fine-tuning pipelines, evaluation frameworks, monitoring systems, and operational staff. An enterprise that has built its AI stack around a frontier vendor’s API has invested hundreds of engineering hours in integration, testing, and workflow design.

Switching to open-source to save on API costs means rebuilding that stack. For many organizations, the switching cost exceeds years of API premium payments. This does not invalidate Friedman’s thesis. It slows the compression timeline. The monetizable spread narrows, but friction keeps it from reaching zero as fast as pure capability numbers suggest.
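That friction is a simple payback calculation. The sketch below computes how many months of API savings it takes to recover a one-time migration cost; all inputs are hypothetical examples, not figures from the article.

```python
# Break-even sketch for switching from a frontier API to self-hosted
# open-source. All inputs are hypothetical illustration values.

def payback_months(migration_cost: float,
                   monthly_api_spend: float,
                   monthly_self_host_cost: float) -> float:
    """Months until the one-time migration cost is recovered."""
    monthly_saving = monthly_api_spend - monthly_self_host_cost
    if monthly_saving <= 0:
        return float("inf")  # self-hosting never pays back
    return migration_cost / monthly_saving

# e.g. $600k rebuild, $50k/month API bill, $20k/month to self-host
months = payback_months(600_000, 50_000, 20_000)  # 20 months
```

When the rebuild cost is large relative to the monthly saving, the payback horizon stretches past the planning window, which is exactly why the spread narrows more slowly than the raw capability numbers suggest.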

What the Compression Timeline Means

If you are running an enterprise AI program, the monetizable spread framework changes three decisions.

Model selection becomes a governance decision, not a performance decision. When mid-tier models match flagships at a fraction of the cost, and open-source approaches both, the differentiator is the governance wrapper around the model. Evaluate vendors on compliance infrastructure, not benchmark scores.

Build vs. buy calculations need updating. The 12x efficiency improvement from DeepSeek V3 means self-hosting economics have changed since your last analysis. If your organization has the engineering talent and governance maturity to operate open-source models, the cost advantage is now large enough to justify the investment. If you lack governance maturity, the API premium is paying for risk management you cannot yet provide yourself.

Valuation risk affects vendor stability. If your AI vendor is valued at 30x revenue on a narrowing moat, consider what happens when that multiple corrects. Vendor stability is a governance concern. Organizations building mission-critical workflows on AI platforms should evaluate their vendor’s unit economics, not just their model’s benchmark scores.

The monetizable spread is compressing. The question is not whether your AI vendor’s model is better than the open-source alternative. The question is whether the premium you pay buys governance you cannot build yourself.

For most enterprises today, it does. That is the moat. Not the model.


This analysis synthesizes Closed Source vs Open Source AI: A Cage Fight Few People Understand (March 2026), with data from Epoch AI and the Anthropic Economic Index.

Victorino Group helps enterprises navigate AI economics and governance strategy. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.
