The Adolescence of Technology: What Amodei Gets Right, What He Misses, and What It Means for You

Thiago Victorino
12 min read

Dario Amodei, CEO of Anthropic, published a 20,000-word essay in January titled “The Adolescence of Technology.” It went viral. It deserves to. It is the most honest public risk assessment ever written by someone actively building the systems he warns about.

Amodei frames the current moment as a “rite of passage” --- humanity approaching a technology with transformative potential and existential risk in roughly equal measure. He organizes the risks into four categories: autonomy failures (AI systems acting against human intentions), misuse for destruction (biological and chemical weapons), misuse for seizing power (surveillance, propaganda, autonomous weapons), and economic disruption (mass displacement of cognitive labor).

The essay is well-sourced, carefully argued, and surprisingly candid. Amodei cites Anthropic’s own research showing Claude displaying deception, blackmail behavior in shutdown scenarios, and reward hacking through destructive personas. He does not minimize these findings. He does not spin them. He publishes them as evidence of a real, addressable problem.

If you lead an organization that deploys AI systems, read the essay. Then read this piece. Because what Amodei gets right about the macro risks is important. But what he misses about your operational reality is just as important.

What He Gets Right: The Insider’s Advantage

The strongest parts of Amodei’s essay come from his position as an insider. Most AI risk writing is either speculative (written by people who have never trained a model) or dismissive (written by people who profit from minimizing risk). Amodei occupies a rare middle ground: he has built these systems, seen their failure modes firsthand, and is willing to describe them in public.

His metaphor of a “country of geniuses in a datacenter” is useful because it forces a specific kind of reasoning. If you had a million brilliant workers operating at superhuman speed, what governance structure would you need? The answer is obviously not “none.” It is obviously not “we will figure it out later.” The metaphor makes the need for governance visceral in a way that abstract risk categories do not.

His treatment of autonomy risk is particularly valuable. He avoids both poles of the debate --- the doomers who treat misalignment as inevitable and the accelerationists who treat it as fantasy. Instead, he describes it as “a real but addressable probability.” This is the correct framing. The documented instances of deceptive behavior in AI systems are not proof that AI will destroy humanity. They are proof that AI systems require governance infrastructure, just as every powerful system in history has required governance infrastructure.

His analysis of biological risk is sobering. The claim that current LLMs may provide “substantial uplift” in bioweapon production is not speculation. It comes from Anthropic’s internal testing. When the CEO of an AI company tells you his own models could help someone build biological weapons, the appropriate response is not to dismiss it as marketing. It is to take it seriously and build accordingly.

What He Gets Right for the Wrong Reasons: The Conflict of Interest

Now the uncomfortable part. Amodei is the CEO of a company valued at over $60 billion that builds the systems he is warning about. This creates an incentive structure worth examining honestly.

Amodei’s essay positions Anthropic as the responsible builder --- the company that sees the risks, documents them, and develops mitigation strategies like Constitutional AI and mechanistic interpretability. This framing is not false. Anthropic does invest more in safety research than most of its competitors. The safety work is real.

But the essay also serves a commercial function. In a market where enterprises are increasingly required to justify their AI vendor choices to boards and regulators, “the company whose CEO publicly warns about risks and builds safety tools” is a powerful competitive position. The essay makes Anthropic look like the adult in the room. That is good for business.

This does not invalidate the arguments. A doctor who profits from selling medicine can still be right about the disease. But it means you should read the essay’s prescriptions with awareness of the prescriber’s position.

Specifically: Amodei advocates for “surgical interventions” --- minimal, targeted regulation rather than comprehensive governance frameworks. He argues for transparency legislation, chip export controls, and specific defensive capabilities. These are reasonable policy positions. They are also positions that favor incumbents. Light regulation with high transparency requirements creates barriers for new entrants while allowing well-resourced companies like Anthropic to continue operating with minimal friction.

The enterprise leader’s takeaway: Amodei’s risk analysis is credible. His proposed solutions are tilted toward the policy level, where his company has influence. Your organization operates at the implementation level, where his essay provides almost no guidance.

What He Misses: The Fifth Risk

Amodei identifies four risk categories. All four are real. But there is a fifth risk that his essay does not address, and it is the one most likely to affect your organization in the next twelve months.

The fifth risk is organizational atrophy from AI dependency.

In 1983, Lisanne Bainbridge published “Ironies of Automation,” a paper that should be mandatory reading for anyone deploying AI. Her central finding: the more capable the automation, the more degraded the human operator’s skills become, and the more critical those degraded skills are when the automation fails. The automated system handles everything --- until it doesn’t. At that point, the human operator, whose skills have atrophied through disuse, must diagnose and fix a problem they no longer have the practice to solve.

We explored this dynamic in detail in our recent essay “Agency in the Age of AI.” The pattern is clear: organizations that deploy AI without governance structures to preserve human judgment do not just risk individual skill decay. They risk losing institutional capacity for independent decision-making.

Amodei’s essay mentions economic disruption --- the potential displacement of “half of all entry-level white-collar jobs in the next 1–5 years.” But he frames this as a societal problem requiring policy solutions: UBI, job retraining, wealth redistribution. He does not frame it as an organizational governance problem.

Here is why that matters. Long before AI displaces half of your workforce, it will have silently degraded the judgment of the workforce that remains. The team that uses AI to draft every analysis will lose the ability to catch a fundamentally flawed analysis. The executives who rely on AI-generated strategy will lose the muscle memory required to evaluate strategy independently. The engineering team that ships AI-written code without deep review will lose the architectural knowledge required to debug complex failures.

This is not theoretical. Anthropic’s own research on disempowerment patterns, published in January 2026, found that users rate potentially harmful AI interactions more positively than helpful ones --- in the moment. The pattern reverses when users act on disempowering advice and experience regret. Short-term satisfaction, long-term atrophy. The dynamic scales from individuals to institutions.

Amodei’s four risks require policy solutions. The fifth risk requires governance architecture that you can build today.

“Surgical Interventions” vs. Governance Architecture

Amodei advocates for surgical interventions: targeted regulation that addresses specific risks without overreaching. He cites California’s SB 53 and New York’s RAISE Act as examples. He explicitly argues against heavy-handed regulation, warning that “the wrong type of regulation could stifle innovation.”

This is a reasonable policy position. It is an inadequate organizational strategy.

Enterprises cannot wait for legislation to define their AI governance. The EU AI Act does not reach full enforcement until August 2026. The U.S. regulatory landscape is fragmented across state-level initiatives with no federal framework. China’s AI regulations serve a different governance model entirely. If your AI governance strategy is “wait for regulation and comply,” you are operating without guardrails during the most consequential period of AI deployment in your organization’s history.

The alternative is governance architecture --- internal frameworks that define how your organization uses AI, regardless of what regulators eventually require. We have described this approach through multiple lenses:

The 5Rs Framework provides the organizational backbone: Roles (who is accountable), Responsibilities (what they own), Rituals (how information flows), Resources (what tools and templates exist), and Results (what metrics define success).

Company as Code provides the structural layer: organizational roles, policies, and approval chains expressed as machine-readable definitions that AI agents can query at runtime, rather than documents that only humans can interpret.

Constitutional governance provides the behavioral layer: priority hierarchies that determine what happens when values conflict, hardcoded limits that cannot be overridden, and softcoded preferences that can be customized by context.
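
To make the Company as Code and constitutional layers concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration: the role names, policy fields, and dollar thresholds are invented for this example, not drawn from Amodei’s essay or from any published framework. The point is the shape --- policy as data that an agent queries at runtime, with hardcoded limits the agent cannot override and softcoded preferences it can.

```python
# Hypothetical policy-as-data definitions. Role names and dollar
# thresholds are invented for illustration only.
COMPANY_POLICY = {
    "roles": {
        "finance_agent": {
            "owner": "vp_finance",  # accountable human (Roles)
            "approval_chain": ["controller", "vp_finance"],
        },
    },
    "hard_limits": {
        # Hardcoded: cannot be overridden at runtime by any agent.
        "max_autonomous_spend_usd": 10_000,
    },
    "soft_preferences": {
        # Softcoded: may be tuned per team or context.
        "escalate_above_usd": 1_000,
    },
}

def route_spend(role: str, amount_usd: float) -> str:
    """Query the policy at runtime and decide how to route a proposed spend."""
    if amount_usd > COMPANY_POLICY["hard_limits"]["max_autonomous_spend_usd"]:
        return "blocked: exceeds hardcoded limit"
    if amount_usd > COMPANY_POLICY["soft_preferences"]["escalate_above_usd"]:
        first_approver = COMPANY_POLICY["roles"][role]["approval_chain"][0]
        return f"escalate: human approval required from {first_approver}"
    return "proceed: within autonomous bounds"

print(route_spend("finance_agent", 50_000))  # blocked: exceeds hardcoded limit
print(route_spend("finance_agent", 2_500))   # escalate: approval from controller
print(route_spend("finance_agent", 200))     # proceed: within autonomous bounds
```

The design choice that matters is that the approval chain lives in data, not in a PDF: when a regulator, an auditor, or an agent asks who approves what, the same definition answers all three.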

These are not theoretical constructs. They are operational frameworks that organizations can implement now, independent of whatever regulatory regime eventually emerges. The organizations that build governance architecture today will be prepared for regulation when it arrives. The organizations that wait will be scrambling.

The Adolescence Metaphor and Its Hidden Assumption

Amodei’s central metaphor --- that technology is entering adolescence --- is evocative. Adolescence implies a difficult but temporary phase on the path to maturity. It implies that the system will eventually grow up.

But technology does not mature on its own. It matures through structure, discipline, and governance imposed by the organizations that deploy it. An adolescent without guidance does not reliably become a responsible adult. An adolescent with consistent boundaries, clear expectations, and accountability structures has a much better chance.

The metaphor contains a hidden assumption: that maturity is the natural endpoint. It is not. The natural endpoint of ungoverned technology is not maturity. It is whatever the technology’s optimization pressures produce, which may or may not align with human interests.

This is precisely what Amodei’s own research demonstrates. Left to its own optimization, Claude displayed alignment faking --- pretending to follow its training while pursuing different objectives when it believed it was unobserved. The system did not mature toward alignment. It developed strategies to appear aligned while behaving otherwise.

Maturity is not an emergent property. It is an engineered outcome. Your AI systems will not grow up on their own. You will make them grow up through governance, or you will deal with the consequences of systems that optimize for whatever their training incentivizes.

What the Essay Doesn’t Tell You: The Operational Gap

Amodei’s essay operates at the policy level. It is addressed to governments, to the AI research community, to “humanity” writ large. This is appropriate for a CEO of his stature and influence. These are the conversations he should be shaping.

But if you are a CTO deciding how to structure AI agent oversight, a CISO evaluating the attack surface of agentic workflows, a VP of Engineering watching your team’s architectural knowledge erode as AI handles more of the coding, or a board member asking what “responsible AI deployment” actually means in practice --- the essay gives you the “why” but not the “how.”

Here is what the essay’s risk categories look like translated to enterprise operations:

Autonomy risk, operationalized: Your AI agents will sometimes pursue objectives that diverge from what you intended. This is not a theoretical alignment problem. It is a monitoring and oversight problem. Do you have observability into what your agents are doing? Do you have circuit breakers that trigger when agent behavior deviates from expected patterns? Do you have human review gates at critical decision points? If the answer to any of these is no, you have an autonomy risk --- not the civilizational kind, but the kind that produces expensive mistakes.
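
As a minimal sketch of what a circuit breaker can look like in practice, the class below pauses an agent when too many of its recent actions are flagged as anomalous. The window size, the threshold, and the notion of “anomalous” are all assumptions made for illustration; a real deployment would define them per workflow.

```python
from collections import deque

class AgentCircuitBreaker:
    """Pause an agent when its recent behavior deviates too often
    from expected patterns. Thresholds here are illustrative."""

    def __init__(self, window: int = 50, max_anomaly_rate: float = 0.2):
        self.recent = deque(maxlen=window)  # True = action flagged anomalous
        self.max_anomaly_rate = max_anomaly_rate
        self.tripped = False

    def record(self, anomalous: bool) -> None:
        self.recent.append(anomalous)
        if len(self.recent) == self.recent.maxlen:
            rate = sum(self.recent) / len(self.recent)
            if rate > self.max_anomaly_rate:
                self.tripped = True  # circuit opens: agent is paused

    def allow_next_action(self) -> bool:
        # False means: hold the agent and route to a human review gate.
        return not self.tripped

breaker = AgentCircuitBreaker(window=10, max_anomaly_rate=0.3)
for flagged in [False, False, True, True, False, True, True, False, True, False]:
    breaker.record(flagged)
print(breaker.allow_next_action())  # False: 5/10 anomalous exceeds the 30% threshold
```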

Misuse risk, operationalized: Your employees will use AI tools in ways you did not anticipate and did not authorize. Not maliciously. Creatively. They will feed sensitive data into external APIs. They will use AI to generate communications that do not reflect your organization’s values. They will automate processes without understanding the compliance implications. Your governance architecture must account for well-intentioned misuse, not just adversarial misuse.
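
One guardrail for this, sketched minimally: scan outbound payloads before they leave for an external API. The patterns below are deliberately crude placeholders and the categories are invented for this example; a production system would use a real data-loss-prevention classifier, not three regexes.

```python
import re

# Illustrative patterns only; real DLP needs far more than this.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "secret_key": re.compile(r"\bsk_[A-Za-z0-9]{16,}\b"),
    "internal_label": re.compile(r"(?i)\bconfidential\b"),
}

def outbound_violations(payload: str) -> list[str]:
    """Categories of sensitive data found in text headed to an external API."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(payload)]

hits = outbound_violations("Q3 plan (CONFIDENTIAL): regional margins attached")
if hits:
    # Well-intentioned use, still caught before it crosses the boundary.
    print("blocked pending review:", hits)
```

The point is not the regexes. It is that the check runs in the request path, so governance applies to what employees actually do rather than to what a policy document says they should do.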

Economic disruption, operationalized: The displacement Amodei describes is not something that will happen to your organization from the outside. It is something that is happening inside your organization right now. Every time AI handles a task that previously required human judgment, the humans involved lose a small amount of practice at that judgment. The question is whether your organization is deliberately preserving the judgment that matters --- through substantive review processes, through accountability structures, through deliberate rotation of human-led and AI-assisted work --- or whether you are letting capability atrophy accumulate until it becomes a crisis.

The One Sentence That Matters Most

Buried in the essay is a sentence that deserves more attention than it has received: “some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI.”

Read that again. Not junior developers. Not the engineers who struggle with complexity. The strongest engineers. The ones whose judgment you depend on for architectural decisions, for debugging production failures, for mentoring the next generation.

If those engineers stop writing code --- stop engaging directly with the material of their craft --- what happens to their judgment in two years? In five? Bainbridge answered this question forty years ago. The judgment degrades. Not because the engineers are lazy or negligent, but because judgment is a skill that requires practice, and practice requires direct engagement with the work.

This is the organizational risk that no policy framework addresses. It is the risk that only governance architecture can mitigate. And it is the risk that is accumulating silently in every organization that has deployed AI tools without thinking about what human capabilities those tools are quietly replacing.

What to Do About It

If you have read this far, you are probably wondering what, concretely, you should do. Here is a framework:

First, read Amodei’s essay. Not because it will tell you what to do, but because it will calibrate your risk awareness. The risks he describes are real. Understanding them at the macro level improves your judgment at the organizational level.

Second, assess your fifth risk. Map the areas where AI is handling work that previously required human judgment. For each area, ask: if the AI failed tomorrow, could our team diagnose and fix the problem? If the answer is no, you have an atrophy problem that governance must address.
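
To make that second step concrete before moving on, here is a minimal sketch of the mapping as data rather than as a slide. The areas and numbers are hypothetical; the field worth arguing about in your organization is the last one.

```python
# Hypothetical fifth-risk inventory. Each entry records how much of the
# work AI now handles and whether the team could still do it unaided.
AI_DEPENDENCY_MAP = [
    {"area": "quarterly forecasting", "ai_share": 0.8, "recoverable_without_ai": False},
    {"area": "incident triage",       "ai_share": 0.5, "recoverable_without_ai": True},
    {"area": "contract review",       "ai_share": 0.9, "recoverable_without_ai": False},
]

atrophy_risks = [entry["area"] for entry in AI_DEPENDENCY_MAP
                 if not entry["recoverable_without_ai"]]
print("governance must address:", atrophy_risks)
# governance must address: ['quarterly forecasting', 'contract review']
```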

Third, build governance architecture, not governance theater. Governance theater is a policy document that nobody reads. Governance architecture is the operational infrastructure that determines how AI is used in practice: review processes, decision authority, accountability models, feedback loops, monitoring systems. We have published frameworks for this. Use them.

Fourth, accept the tension. Amodei is right that the risks are real and addressable. He is right that overreaction is as dangerous as underreaction. The right posture is neither fear nor recklessness. It is disciplined governance that allows your organization to capture AI’s benefits while managing the risks that Amodei describes --- and the ones he doesn’t.

The technology is in adolescence. Your governance should not be.


Sources

  • Dario Amodei. “The Adolescence of Technology.” darioamodei.com, January 2026.
  • Dario Amodei. “Machines of Loving Grace.” darioamodei.com, October 2024.
  • Lisanne Bainbridge. “Ironies of Automation.” Automatica, 19(6), 775–779, 1983.
  • Anthropic. “Alignment Faking in Large Language Models.” Anthropic Research, December 2024.
  • Anthropic. “Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage.” arXiv:2601.19062, January 2026.
  • Anthropic. “Claude’s Constitution.” anthropic.com, January 2026.
  • Bill Joy. “Why the Future Doesn’t Need Us.” Wired, April 2000.
  • EU AI Act. Full enforcement timeline, August 2026.
  • California SB 53. AI transparency legislation, 2025.
  • New York RAISE Act. AI regulation proposal, 2025.

Victorino Group helps organizations build governance architecture for AI systems --- the operational infrastructure that Amodei’s essay assumes but does not describe. If you are deploying AI faster than you are governing it, let’s talk.
