Every Structural Governance Attempt for AI Labs Has Failed. The DeepMind Files Explain Why.

Thiago Victorino

Between 2015 and 2017, Demis Hassabis and his team at DeepMind attempted four distinct structural governance reforms to prevent their AI research from becoming a weapon of corporate monopoly. Every single one was killed.

Sebastian Mallaby’s account of “Project Mario,” published in March 2026 as an excerpt from The Infinity Machine, is the most detailed record we have of what happens when genuinely motivated people try to build governance structures around AI labs. The record is damning. Not because the people involved were bad actors, but because the dynamics they encountered are structural. They would defeat any team in any organization.

This matters beyond history. The pattern Mallaby documents is repeating right now, at every frontier AI lab. If you are designing governance for AI systems inside your organization, you need to understand why it keeps failing at the places with the most resources and motivation to get it right.

Four Attempts, Four Failures

Attempt 1: The AGI Safety Board (August 2015). Hassabis convened the first formal AGI safety board meeting at SpaceX headquarters. The attendees included Elon Musk, who would co-found OpenAI months later, and Larry Page, whose Google had acquired DeepMind in 2014. The meeting failed before any governance mechanism could be proposed. Musk and Page were personally at odds over AI philosophy. Musk believed AI posed existential risk. Page thought that position was “speciesist.” Personal tension between two billionaires prevented the conversation from reaching substance.

Attempt 2: The 3-3-3 Board (Late 2015). Following Alphabet’s restructuring, Hassabis proposed a “3-3-3 board” for DeepMind: three seats for DeepMind, three for Alphabet, three for independent directors. The structure would have given neither side unilateral control. It never reached formal negotiation. Alphabet’s leadership saw no reason to dilute their authority over a subsidiary they fully owned.

Attempt 3: The Spin-Out (First Half 2016). Hassabis met Larry Page four times to negotiate a formal spin-out. A term sheet was drafted. DeepMind would become an independent Alphabet “bet” with its own governance structure. On November 21, 2016, Alphabet’s chief legal officer, David Drummond, blocked the deal. A corporate lawyer’s veto outweighed the founder’s personal relationship with the CEO.

Attempt 4: The Global Interest Company (2017). This was the most ambitious proposal. A “company limited by guarantee” has no shares, pays no dividends, and operates bound by its charter. It is the cleanest structural separation between mission and profit. Reid Hoffman committed $1 billion to fund a DeepMind walk-away plan if Alphabet refused. Google CEO Sundar Pichai countered by proposing to split DeepMind in two. In June 2017, DeepMind announced the “Global Interest Company” structure to its own staff. That same week, Google red-lined the proposal. The transformer paper, “Attention Is All You Need,” was published in the same month, and the commercial value of AI research became something no parent company would voluntarily release.

The Pattern: Why Governance Loses

Reading Mallaby’s account closely, the same dynamic repeats across all four attempts. It is not unique to DeepMind or Alphabet.

Power holders have no incentive to dilute power. This sounds obvious. It is not. Every governance proposal assumed that Alphabet’s leadership would accept constraints on their authority because the mission demanded it. But Alphabet owned DeepMind. They paid the bills. From their perspective, governance proposals were requests to give away control of an asset they had purchased. No amount of ethical argument changes the property calculus.

Operational control beats structural design. Hassabis had a direct relationship with Larry Page. He had billionaire backers. He had a willing team. None of it mattered when David Drummond, a corporate lawyer, decided the deal was not in Google’s interest. The person who controls the signature line controls the outcome. Governance proposals that require approval from the entity being governed are requests, not reforms.

Timing works against governance. Every month that DeepMind’s research became more commercially valuable, the case for structural independence weakened. The transformer paper was published during the final governance negotiation. The more valuable the asset, the tighter the parent holds it. Governance windows close as capability advances. This is the central paradox: the moment AI governance becomes most necessary is precisely the moment it becomes least achievable.

Personal dynamics override institutional design. The first safety board failed because Musk and Page could not be in a room productively. The OpenAI parallel is identical. In September 2017, Ilya Sutskever and Greg Brockman confronted Musk and Altman about control of OpenAI. Sutskever told Musk directly: “The current structure provides you with a path where you end up with unilateral absolute control over the AGI.” He then turned to Altman: “Is AGI truly your primary motivation? How does it connect to your political goals?” These were not structural problems. They were questions about individual ambition disguised as institutional form.

The Mirror Failure at OpenAI

Mallaby’s account makes the parallel explicit, and it is devastating for anyone who believes governance structure alone can solve the problem.

DeepMind wanted nonprofit governance layered on top of for-profit resources. They tried to create a mission-bound structure that could access Alphabet’s capital without being captured by Alphabet’s incentives.

OpenAI wanted the reverse: for-profit economics layered on top of nonprofit governance. They created a “capped profit” structure meant to attract investment while preserving the nonprofit’s mission control.

Both failed. DeepMind never achieved structural independence. OpenAI’s nonprofit board fired Sam Altman in November 2023 and was overridden within days by investor pressure, employee revolt, and Microsoft’s implicit veto. The board that was supposed to have ultimate authority discovered that authority evaporates when the people with economic leverage disagree.

Two labs. Opposite structural approaches. Same outcome: corporate power dynamics won.

Why “Company Limited by Guarantee” Was the Right Idea That Could Never Work

The most interesting detail in Mallaby’s account is the “company limited by guarantee” proposal. This is a British legal form where the entity has no shareholders, no equity, and no dividends. It operates solely to fulfill its charter. The entity’s governors cannot profit from its success.

This structure eliminates the property calculus that defeated every other proposal. If no one owns the company, no one has a financial interest in resisting governance constraints. The mission and the incentive structure point the same direction.

It was never implemented. Not because the structure was flawed, but because it required Alphabet to release DeepMind entirely. You cannot create a charter-bound entity inside a profit-maximizing parent. The structural solution demanded the one thing no power holder will grant voluntarily: complete divestiture of a valuable asset with no compensation.

Hoffman’s $1 billion commitment to fund a walk-away was meant to solve this by making departure credible. But departure required people to leave, and people have mortgages, stock vesting schedules, and families. The collective action problem is real even when the financing is solved.

What This Means for Enterprise AI Governance

The DeepMind story is about a frontier lab, but the pattern applies to every organization deploying AI systems.

Governance imposed by the governed entity is not governance. If your AI governance board reports to the same executive who controls the AI budget, you have an advisory committee, not a governance body. As we explored in Anthropic and the Pentagon, even external governance pressure (from the U.S. military, no less) struggles against corporate incentive structures. Internal governance has even less leverage.

Governance must be designed before the asset becomes valuable. DeepMind’s best chance at structural independence was before the transformer, before AlphaGo, before the commercial potential of their research was obvious. Once the asset’s value was clear, every negotiation became a zero-sum contest over who controls the upside. The same is true inside enterprises. Build governance into AI programs at inception, when the systems are experiments. By the time they generate revenue, the political calculus shifts against constraint.

Separate the governance function from the governed function financially. The “company limited by guarantee” idea points toward the right principle even if the specific form is impractical for most organizations. Governance bodies that depend on the budget of the thing they govern will always be captured. Fund governance independently. Give it reporting lines that do not pass through AI program leadership.

Accept that governance is a power contest, not a design problem. The engineers and researchers at DeepMind designed four structurally sound governance proposals. They failed because governance is not an engineering challenge. It is a negotiation between parties with unequal power and misaligned incentives. Design better structures, by all means. But if you do not also build the political coalition to defend those structures when they are tested, the design is academic.

The Uncomfortable Conclusion

Mallaby’s account forces a conclusion that governance optimists (including me) would prefer to avoid.

Every structural governance attempt for AI labs has failed. Not some. Every one. Not because the structures were poorly designed. Because the power dynamics that governance is meant to constrain are the same dynamics that determine whether governance gets implemented.

The transformer was published the same month Google killed DeepMind’s last governance proposal. Capability does not wait for governance. It never has. The question for every organization building or deploying AI is not whether their governance structure is well-designed. It is whether their governance structure can survive the moment someone with more power decides it is inconvenient.

History says it cannot. The task is to prove history wrong. That starts with understanding, clearly and without illusion, why it has been right so far.


This analysis draws from Sebastian Mallaby’s “Project Mario: Demis Hassabis and DeepMind” (March 2026), an excerpt from The Infinity Machine, published by Penguin Random House.

Victorino Group helps organizations design AI governance structures that survive contact with power dynamics. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.
