The AI Control Problem

Agency in the Age of AI: Why the Real Risk Isn't Intelligence

Thiago Victorino

Dion Lim published a piece this week arguing that IQ and EQ are no longer sufficient. The AI era, he says, demands a new metric: AQ, or Agency Quotient --- the capacity to manifest intentions and actualize goals.

The concept is not new. Psychologists have been studying this under different names for decades: self-efficacy (Bandura, 1977), locus of control (Rotter, 1966), grit (Duckworth, 2007), self-determination theory (Deci and Ryan, 1985). What Lim calls “Agency Quotient” is a rebranding of well-established research into a catchy framework.

That is not a criticism. Sometimes old ideas need new packaging to reach new audiences. And Lim’s central observation is worth engaging with, not because the framework is novel, but because the problem it points to is urgent.

The interesting line in his article is not the definition of AQ. It is this: “the tool rewarding agency quietly erodes it.”

That sentence deserves unpacking. Because it describes the central governance challenge of every organization deploying AI today.

The Automation Paradox Is Not New

In 1983, Lisanne Bainbridge published a paper called “Ironies of Automation” that should be required reading for anyone deploying AI systems. Her argument: the more advanced the automation, the more critical --- and more degraded --- the human operator’s contribution becomes.

Automated systems handle routine operations well. But when they fail, they fail in complex ways that require exactly the judgment and situational awareness that the automation has been quietly eroding through disuse. The human operator, who has been passively monitoring a system that handles everything, is now expected to diagnose and fix a problem they have no recent practice solving.

Bainbridge was writing about industrial control systems. But the dynamic applies precisely to AI in organizations.

Consider a team that uses AI to draft every strategic document, generate every analysis, write every recommendation. The AI handles these tasks competently. The team reviews and approves. Over months, the team’s direct engagement with the underlying data, the messy reasoning, the hard trade-offs --- all of it atrophies. The team becomes editors of AI output rather than authors of strategy.

Then something goes wrong. The AI produces a plausible but fundamentally flawed analysis. The team, having lost the muscle memory of deep analytical work, approves it. Not because they are negligent. Because the skill required to catch the error has decayed through disuse.

This is not a hypothetical. It is the documented pattern in every automation domain Bainbridge studied. AI does not change the pattern. It accelerates it.

Lim’s AQ Framework, Honestly Assessed

Lim structures AQ around three phases: forming intention, taking action, and closing the loop. He illustrates failures with historical examples --- Tesla as a brilliant inventor who could not ship, Chamberlain as a leader whose actions were ineffective, Brutus as a strategist who won the battle but lost the war.

The framework has pedagogical value. It gives people a vocabulary for thinking about execution, which most frameworks neglect in favor of vision and strategy. Credit where it is due: most business writing treats agency as binary --- you either have it or you do not. Lim’s phased model acknowledges that agency can fail at different stages, which is more useful.

But we should be honest about what this is and is not.

The term “AQ” is already in use across multiple domains. Adaptability Quotient appears in organizational psychology. Adversity Quotient has its own assessment industry. The Autism-Spectrum Quotient is a clinical instrument. Adding another “AQ” to the landscape creates confusion, not clarity.

More importantly, Lim references Reid Hoffman’s book “Superagency” (2025) as supporting evidence for individual AQ. But Hoffman’s argument is about collective empowerment through AI --- how groups and institutions can develop what he calls “superagency” by leveraging AI systems together. It is a governance argument, not an individual performance argument. The citation misrepresents the source.

None of this invalidates the core observation. The core observation --- that AI tools can erode the very human capabilities that make them valuable --- is correct and important. It just deserves more rigorous framing than a new acronym.

The Real Problem: Organizations Without Governance Become Passive Consumers

Here is where the conversation gets interesting for anyone building or deploying AI systems.

Lim uses the WALL-E analogy: humans in the film become passive consumers, carried around on hover-chairs while robots handle everything. It is a vivid image. But the organizational version is more insidious than the individual one.

When an individual becomes passive, the consequences are personal. When an organization becomes passive, the consequences are structural.

An organization that deploys AI without governance structures --- without clear decision-making authority, without review processes, without accountability for AI-generated outputs --- does not just risk individual skill decay. It risks losing its capacity for independent judgment as an institution.

We see early signs of this already. Teams that cannot explain why their AI recommended a particular vendor. Executives who approve AI-generated strategies without understanding the assumptions baked into the analysis. Engineering teams that ship AI-written code without the test infrastructure to validate it.

This is not an intelligence problem. It is a governance problem. The organization has the tools. It has the output. What it lacks is the structure to maintain human authority over machine-generated decisions.

From “Vibe Coding” to “Vibe Managing”

We wrote recently about the distinction between vibe coding and agentic engineering --- the difference between passively accepting AI output and deliberately directing it with specs, tests, and review processes.

The same distinction applies beyond software. Call it “vibe managing” --- the practice of prompting AI, accepting its output, and acting on it without the friction of deep review. It feels productive. The output looks professional. The decisions seem reasonable.

But vibe managing is to organizational leadership what vibe coding is to software engineering: it produces output without accountability. It generates artifacts without understanding. It creates the appearance of agency while quietly surrendering the substance of it.

The antidote is the same in both cases: governance. Not governance as bureaucracy --- paperwork and approval chains that slow everything down. Governance as the deliberate preservation of human judgment in a system that increasingly makes human judgment feel unnecessary.

Specifically:

Decision authority must be explicit. For every AI system in your organization, someone must own the decision of whether to act on its output. Not “the team reviews it.” A named person who is accountable for the outcome. Diffuse responsibility is no responsibility.

Review must be substantive, not performative. Glancing at an AI-generated analysis and clicking approve is not review. Substantive review means engaging with the reasoning, challenging the assumptions, and being willing to reject output that looks polished but rests on faulty logic. This requires maintaining the skill to do so, which requires practice, which requires not delegating everything to AI.

Feedback loops must close. Lim’s third AQ phase --- closing the loop --- is the one organizations most consistently fail at. Did the AI-recommended strategy produce the expected results? Did the AI-generated code perform in production? Did the AI-drafted analysis hold up under scrutiny? Without closed feedback loops, organizations cannot learn. They can only repeat.
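Taken together, these three rules are concrete enough to mechanize. The sketch below is a minimal Python illustration of what that might look like; everything in it (the DecisionLog class, the field names, the ten-word rationale threshold) is hypothetical, invented for this post rather than drawn from any real system.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Decision:
        """One AI-assisted decision, tracked from output to measured outcome."""
        system: str                  # which AI system produced the output
        summary: str                 # what is being decided
        owner: str                   # a named person, not "the team"
        approved: bool = False
        rationale: str = ""          # substantive review leaves a written trace
        outcome: str | None = None   # filled in later; this closes the loop
        created_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    class DecisionLog:
        """A registry that mechanically enforces the three rules above."""

        def __init__(self) -> None:
            self._decisions: list[Decision] = []

        def record(self, system: str, summary: str, owner: str) -> Decision:
            # Rule 1: decision authority is explicit. No named owner, no record.
            if not owner.strip():
                raise ValueError("Every AI output needs a named, accountable owner.")
            decision = Decision(system=system, summary=summary, owner=owner)
            self._decisions.append(decision)
            return decision

        def review(self, decision: Decision, approved: bool, rationale: str) -> None:
            # Rule 2: review is substantive. An approval with no written rationale
            # is a rubber stamp, so the log refuses it. (The ten-word threshold is
            # arbitrary; the point is that "looks fine" does not count.)
            if len(rationale.split()) < 10:
                raise ValueError("Rationale too thin to count as substantive review.")
            decision.approved = approved
            decision.rationale = rationale

        def close_loop(self, decision: Decision, outcome: str) -> None:
            # Rule 3: feedback loops close. The outcome is written back against
            # the original decision, so the organization can compare prediction
            # with reality instead of just repeating the cycle.
            decision.outcome = outcome

        def open_loops(self) -> list[Decision]:
            """Approved decisions whose real-world outcomes were never recorded."""
            return [d for d in self._decisions if d.approved and d.outcome is None]

    # Usage, end to end:
    log = DecisionLog()
    choice = log.record("vendor-analysis-llm", "Select CDN vendor", owner="J. Rivera")
    log.review(choice, approved=True,
               rationale="Checked the cost model against last quarter's invoices; "
                         "the traffic growth assumptions are conservative.")
    # Weeks later, after results are in:
    log.close_loop(choice, "Vendor met SLA; cost within 5% of the AI's estimate.")
    assert not log.open_loops()  # every approved decision has a recorded outcome

The point is not the code. It is that each rule becomes a constraint the system can check: a record cannot exist without a named owner, an approval cannot exist without a written rationale, and open_loops() makes unclosed feedback loops visible instead of invisible.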

What U.S. Steel Teaches About Scale Without Agency

Lim opens his piece with U.S. Steel’s founding in 1901 as a case study in agency, though he understates the scale. The company employed 168,000 workers and generated $423 million in revenue by 1902 --- the first corporation to exceed a billion-dollar market capitalization. (Nasaw, Andrew Carnegie, Penguin, 2006.)

The relevant lesson is not J.P. Morgan’s agency in creating the company. It is what happened next. U.S. Steel, despite its massive scale and resources, gradually lost its competitive edge. By the mid-twentieth century, it was being outmaneuvered by smaller, more agile competitors. The agency that built the institution was not maintained by the institution. Scale created distance between decision-makers and operations. Bureaucracy replaced judgment. The organization became a passive steward of its own success.

This is the risk for organizations that scale AI without governance. The initial deployment is an act of agency --- deliberate, strategic, purposeful. But without the governance to maintain active human engagement, the organization drifts toward passive consumption of AI output. The capability grows. The judgment shrinks.

The Governance Imperative

The conversation about AI in organizations is dominated by capability questions. What can it do? How fast? How cheap? How accurate?

These are the wrong questions. Or rather, they are the second-order questions. The first-order question is: does your organization have the governance infrastructure to remain an active agent in its own decisions as AI handles more of the work?

If the answer is no, then more AI capability makes the problem worse, not better. A more powerful tool in the hands of a passive organization produces more consequential errors with less human oversight. The automation paradox, scaled to the enterprise.

If the answer is yes --- if you have clear decision authority, substantive review processes, closed feedback loops, and deliberate investment in maintaining human judgment --- then AI becomes what it should be: an amplifier of agency, not a substitute for it.

The distinction Lim is reaching for is real. The people and organizations that thrive with AI will be the ones that use it actively, critically, and with governance structures that prevent the comfortable slide into passive consumption.

The ones that do not will look exactly like the operators Bainbridge described more than forty years ago: lulled into passivity by a system that handles everything, right up until the moment it does not.


Sources

  • Dion Lim. “Why IQ and EQ Aren’t Enough Anymore. The Age of AI Demands AQ.” CEO Dinner Insights (Substack), February 10, 2026.
  • Lisanne Bainbridge. “Ironies of Automation.” Automatica, 19(6), 775–779, 1983.
  • Albert Bandura. “Self-Efficacy: Toward a Unifying Theory of Behavioral Change.” Psychological Review, 84(2), 191–215, 1977.
  • Angela Duckworth, Christopher Peterson, Michael Matthews, and Dennis Kelly. “Grit: Perseverance and Passion for Long-Term Goals.” Journal of Personality and Social Psychology, 92(6), 1087–1101, 2007.
  • Edward Deci and Richard Ryan. Intrinsic Motivation and Self-Determination in Human Behavior. Plenum Press, 1985.
  • Julian Rotter. “Generalized Expectancies for Internal Versus External Control of Reinforcement.” Psychological Monographs, 80(1), 1–28, 1966.
  • Reid Hoffman and Greg Beato. Superagency: What Could Go Right with Our AI Future. Authors Equity, 2025.
  • David Nasaw. Andrew Carnegie. Penguin, 2006.

Victorino Group helps organizations build the governance infrastructure that preserves human agency as AI capability scales. If your AI deployment is growing faster than your governance, let’s talk.
