The AI Control Problem

The $3 Trillion Stress Test Nobody Designed

Thiago Victorino
8 min read

On February 23, 2026, an independent research piece titled “The 2028 Global Intelligence Crisis” did something unusual: it moved markets. Software stocks — CrowdStrike, DocuSign, Atlassian, ServiceNow, Workday — cratered within hours. The tweet announcing the research attracted 4.5 million views. Not because it predicted something new, but because it asked a question nobody wanted to hear: What if our AI bullishness continues to be right — and what if that’s actually bearish?

The same day, Tomasz Tunguz at Theory Ventures published his own analysis. SpaceX is targeting a $1.5 trillion valuation. OpenAI is aiming for $1 trillion. Anthropic sits at $380 billion. Combined: roughly $2.9 trillion in new market capitalization looking for a home.

The entire US IPO market raised $469 billion from 2016 through 2025. A decade of capital formation. At standard IPO float percentages, these three companies alone would absorb roughly that much, if not more.

This isn’t a market event. It’s a structural mismatch.

The Float Problem Is a Governance Problem

When Facebook went public in 2012, it offered 15% of shares to the market. Google offered 19%. Alibaba offered 15%. These are standard numbers — enough liquidity for price discovery, enough retention for founder control.

Apply those percentages to the AI trio and you need roughly $432 to $547 billion in new capital. That’s not going to happen. The most likely outcome: minimal floats of 3-8%, creating artificial scarcity that inflates prices, followed by gradual dilution as insiders sell.
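The arithmetic is simple enough to check. A minimal sketch, using the round valuations cited above and the historical float percentages:

```python
# Back-of-the-envelope float math, using the valuations cited above.
valuations = {
    "SpaceX": 1.50e12,     # targeted valuation
    "OpenAI": 1.00e12,
    "Anthropic": 0.38e12,
}
combined = sum(valuations.values())  # ~$2.88 trillion

# Historical IPO float percentages: Facebook and Alibaba ~15%, Google ~19%.
for float_pct in (0.15, 0.19):
    capital_needed = combined * float_pct
    print(f"{float_pct:.0%} float -> ${capital_needed / 1e9:.0f}B in new capital")

# Compare: $469B raised by the entire US IPO market, 2016-2025.
```

Either end of that range lands in the neighborhood of everything the US IPO market raised in ten years.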

This matters beyond the stock market because of one mechanism: passive indexing. Index funds now manage approximately $20 trillion in assets. S&P 500 inclusion requires a minimum public float, at least 10% of shares outstanding under current rules. As these AI companies slowly meet that threshold, passive funds must buy — which means they must also sell existing mega-cap holdings to make room.

This is not speculation. It’s mechanical. The rebalancing math forces selling pressure on Apple ($3.4T), Microsoft ($3.1T), NVIDIA ($2.8T), and others. Not because anything changed about those companies, but because the index composition shifted.
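To make the mechanism concrete, here is a stylized sketch of cap-weighted rebalancing. The passive AUM figure comes from above; the index total is an assumed round number for illustration, not a figure from the article:

```python
# Stylized cap-weighted rebalancing: new entrants dilute every incumbent.
INDEX_CAP = 50e12     # assumed total index market cap pre-inclusion (illustrative)
PASSIVE_AUM = 20e12   # passive assets tracking the market (cited above)
NEW_CAP = 2.88e12     # SpaceX + OpenAI + Anthropic at full inclusion

# In a cap-weighted index, adding new cap shrinks all incumbent weights
# by the same proportional factor.
dilution = NEW_CAP / (INDEX_CAP + NEW_CAP)
forced_rotation = PASSIVE_AUM * dilution

print(f"Incumbent weights shrink by {dilution:.1%}")
print(f"~${forced_rotation / 1e9:.0f}B rotates out of existing holdings")
# Real inclusion is float-adjusted and staged, so the actual number
# would be smaller -- but the direction is mechanical.
```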

The capital market is a governance system. And when a governance system wasn’t designed for the speed and scale of the transition moving through it, it doesn’t fail gracefully. It creates cascading effects that nobody architected and nobody controls.

SaaS as Canary

The Citrini Research scenario zeroes in on a specific inflection point: 2027 enterprise contract renewals. Their thesis is that AI adoption reaches a threshold where customers can credibly threaten to replace SaaS platforms with internally built AI alternatives. Not because the alternatives are better — but because the negotiating leverage shifts.

ServiceNow, Salesforce, Workday — the “systems of record” — face a new conversation. When a customer walks into a renewal meeting and says “we built a prototype that replaces 60% of your platform’s functionality in three months,” the pricing power inverts even if the customer never actually deploys that prototype.

The authors are careful to call this a scenario, not a prediction. That intellectual honesty is worth noting. But the market didn’t treat it as a scenario — it treated it as a signal. Software stocks dropped on a thought experiment.

What made the research compelling enough to move markets was not the specific predictions. It was the structural observation: an industry whose investment thesis is built on AI success hasn’t stress-tested what that success means for the rest of the portfolio.

The Missing Layer

Every conversation about AI governance focuses on the same set of concerns: model safety, data privacy, regulatory compliance, bias mitigation. These are important. They are also insufficient.

The $3 trillion stress test reveals a layer that most governance frameworks ignore entirely: financial governance of the AI transition. Not the AI models themselves, but the economic structures that surround them.

Questions most organizations haven’t asked:

Portfolio concentration risk. How much of your investment exposure — directly or through index funds — is concentrated in companies whose valuations depend on AI delivering transformative returns? What happens to that exposure if AI delivers those returns but the value accrues to different companies than expected? (A minimal look-through sketch follows this list.)

Vendor dependency during transition. If your SaaS vendors face pricing pressure from AI alternatives, what happens to the products you depend on? Do they cut R&D investment? Do they get acquired? Does the platform you built your operations around still exist in three years?

Capital market access. If $3 trillion in AI IPOs absorbs a disproportionate share of available capital, what happens to your own capital needs? Companies planning debt offerings, equity raises, or acquisitions will compete for attention in a market staring at SpaceX and OpenAI.

Forced rebalancing exposure. If you hold index funds — and most institutional investors do — your portfolio will mechanically shift toward AI companies as they enter the S&P 500. You didn’t choose this allocation. It was chosen for you by index construction rules written before AI companies existed at this scale.
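For the first of these questions, a minimal look-through sketch. Every holding, weight, and classification below is hypothetical, standing in for a real portfolio:

```python
# Hypothetical look-through exposure to AI-dependent valuations.
# All holdings, weights, and the classification set are illustrative.
direct_holdings = {"NVDA": 2.0e6, "MSFT": 1.5e6, "JNJ": 3.0e6}  # $ positions
index_fund_position = 10.0e6                                     # $ in an S&P 500 fund
index_weights = {"AAPL": 0.07, "MSFT": 0.06, "NVDA": 0.06, "JNJ": 0.01}

# Your own judgment call: which names' valuations depend on AI returns?
ai_dependent = {"NVDA", "MSFT", "AAPL"}

exposure = sum(v for t, v in direct_holdings.items() if t in ai_dependent)
exposure += sum(index_fund_position * w
                for t, w in index_weights.items() if t in ai_dependent)
total = sum(direct_holdings.values()) + index_fund_position

print(f"AI-dependent exposure: ${exposure / 1e6:.2f}M "
      f"({exposure / total:.0%} of the portfolio)")
```

The uncomfortable step is the classification set: deciding which valuations actually depend on AI returns is a governance judgment, not a data lookup.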

These are not technical questions about AI. They are governance questions about organizational readiness for a market-level transition.

Scenario Planning, Not Prediction

The Citrini piece was careful about this distinction, and we should be too. Left-tail risk scenarios are not forecasts. They are stress tests — ways to probe whether your assumptions hold under conditions you haven’t experienced.

The value of “The 2028 Global Intelligence Crisis” isn’t that it’s right. It’s that 4.5 million people saw it, and most of them had done no scenario planning of their own. The market moved not because the analysis was novel, but because it articulated risks that had been systematically ignored.

Saudi Aramco’s 2019 IPO, which valued the company at $1.7 trillion while raising about $26 billion, provides some comfort — the market can absorb massive single events. These three IPOs won’t happen simultaneously. Secondary markets have already absorbed significant capital at these valuations. The rebalancing will happen gradually, not in a day.

But “it probably won’t be catastrophic” is not a governance position. It’s a hope.

What Financial AI Governance Looks Like

The organizations that will navigate this transition successfully are the ones that treat AI not just as a technology adoption challenge, but as a financial governance challenge. That means:

Stress-testing AI investment assumptions. Not “will AI work?” but “what happens to our operations, our vendors, our capital structure, and our portfolio when AI works at scale for everyone simultaneously?”

Mapping second-order dependencies. Your organization might have perfect AI governance. But if your primary SaaS vendor faces an existential pricing crisis, your governance didn’t anticipate the right threat.

Building scenario libraries. The Citrini piece is a single scenario. Organizations need a set of them — including scenarios where AI delivers exactly what was promised, and that’s the problem.

Governing portfolio exposure. Institutional investors need frameworks for managing the mechanical effects of index rebalancing driven by AI IPOs. This isn’t active stock-picking. It’s governance of passive exposure to a structural transition.

The traditional AI governance stack — ethics, safety, privacy, compliance — addresses the question “is our AI behaving correctly?” The financial governance layer addresses a different question: “is our organization positioned for what happens when everyone’s AI behaves correctly?”

The $3 trillion stress test nobody designed is already running. The question is whether your organization is watching the results.

If this resonates, let's talk

We help companies implement AI without losing control.

Schedule a Conversation