Three Names You Couldn't Say Last Month. You'll Need Them This Quarter.

Thiago Victorino

Most AI governance arguments fail at the first sentence. Not because the argument is wrong. Because the words are wrong. Boards reach for “risk” when they mean “delegated accountability”. Architects say “hallucination” when they mean “miscalibrated confidence”. Engineers say “human in the loop” when the loop has already closed without them.

Vocabulary is governance infrastructure. If you cannot name the failure mode, you cannot write a policy against it, you cannot test for it, and you cannot tell a regulator how you prevent it.

This week, three independent voices put names to three of the most commonly misnamed problems. The terms are usable today. They belong in board memos, design reviews, RFPs, and incident reports. Together they form a working glossary for AI accountability conversations that, until now, ended in hand-waving.

Cognitive Surrender (Not Cognitive Offloading)

Addy Osmani separates two things our industry has been collapsing into one.

Cognitive offloading is what calculators did to long division and what GPS did to map reading. You delegate a step. You still own the answer. If the calculator says 47, you can sense-check it against the order of magnitude you expected. The shape of the problem stayed in your head.

Cognitive surrender is different. The model produces an artifact (a paragraph, a diff, a board summary, a legal clause), and the artifact arrives shaped like a finished product. There is nothing obviously left to do. You did not delegate a step. You delegated the entire judgment, and the output looks complete enough that verification feels redundant.

The surrender word matters because it names a failure of agency, not a failure of effort. The reviewer is not lazy. The reviewer cannot see what to verify, because the artifact has erased the seams. This is the same failure mode we mapped in the invisible cost of cognitive debt, where the loss is structural rather than visible.

Use this term when someone defends an AI workflow with “they still review it”. Ask: is the reviewer doing cognitive offloading (owning the answer) or cognitive surrender (accepting it because nothing looks broken)? The answer determines whether your governance is real or theater.

Three Inverse Laws (Not Three Laws)

Susam Pal coins a counter-frame to Asimov’s three laws of robotics. Asimov wrote his laws for the robot. Susam writes his for the human. They are guardrails against the most common failure mode in AI deployment: anthropomorphism that quietly transfers accountability from people to machines.

The three inverse laws, in plain terms:

  1. An AI is not a person. Refuse the language and the rituals that imply otherwise. “The model decided”, “the agent chose”, “the system believes” are linguistic delegations of accountability. Replace with “we deployed a model that produced”, “the agent emitted”, “the system returned”.
  2. AI output is not authoritative. A confident answer from a probabilistic system is still a probabilistic answer. The output is evidence, not verdict. Treating it as verdict is what produces the accountability deletion pattern we have been mapping all year.
  3. Delegating to AI is not delegating accountability. A human authorized the deployment, the training data, the prompt, the tool access, and the action surface. That human remains accountable for the outcome. The model is not a co-defendant. There is no co-defendant.

These are blunt. That is the point. Subtle frameworks lose to subtle drift. The inverse laws are formulated to be quotable in a board memo without translation.

Susam’s piece is a personal-domain post, not a peer-reviewed framework. Treat it as a coined-term reference. The value is the framing, which is portable, not the citation weight.

Faithful Uncertainty (Not Hallucination)

Google researchers reframe the hallucination problem this month with a term that resolves a real definitional confusion.

“Hallucination” implies the model fabricated something it did not know. The mechanistic reality is closer to a calibration failure. The model held a probability distribution over possible answers, and when it spoke, it spoke with confidence that did not reflect the underlying distribution. The output was not random. The output was uncertain, and the model failed to express the uncertainty.

Faithful uncertainty is the inverse property. A model expresses faithful uncertainty when its stated confidence matches its actual reliability on the answer. A model that says “I am 90% sure” should be right roughly 90% of the time when it makes that claim. A model that says “I am 60% sure” should be right roughly 60% of the time. The calibration target is alignment between expressed confidence and actual reliability.
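To make that target concrete, here is a minimal sketch of how the alignment could be checked. The function name, bin width, and toy data are illustrative assumptions, not anything from the Google paper: it simply groups predictions by stated confidence and compares each group against observed accuracy.

```python
# Minimal calibration check: does stated confidence track actual accuracy?
# Inputs are illustrative: (stated_confidence, was_correct) pairs from an eval set.
from collections import defaultdict

def reliability_bins(results, bin_width=0.1):
    """Group predictions by stated confidence and compare against observed accuracy."""
    bins = defaultdict(list)
    for confidence, correct in results:
        bins[round(confidence // bin_width * bin_width, 2)].append(correct)
    report = {}
    for lower, outcomes in sorted(bins.items()):
        accuracy = sum(outcomes) / len(outcomes)
        report[f"{lower:.1f}-{lower + bin_width:.1f}"] = {
            "stated": lower + bin_width / 2,   # midpoint of the confidence bin
            "observed": accuracy,              # how often the model was actually right
            "n": len(outcomes),
        }
    return report

# Toy data: a model that claims ~0.9 confidence should land near 0.9 observed accuracy.
sample = [(0.92, True), (0.88, True), (0.91, False), (0.61, True), (0.58, False), (0.63, True)]
for bucket, stats in reliability_bins(sample).items():
    print(bucket, stats)
```

Wherever the stated and observed columns diverge, confidence is not faithful in that range.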

This reframing changes what governance asks for. It is no longer “did the model hallucinate?” (a binary post-hoc judgment). It is “is the model’s confidence faithful?” (a property you can measure, monitor, and require). Vendors can be held to a calibration curve. Procurement can demand it. Audit can verify it.

We mapped the layered defense pattern for hallucination control earlier this year. Faithful uncertainty gives that architecture a measurable target instead of a moving one.

Why a Glossary Is the Move Right Now

Three observations make this an unusually important quarter for vocabulary work.

First, the regulatory clock. The EU AI Act reaches full enforcement in August 2026. Auditors will arrive with their own definitions if you do not arrive with yours. Organizations whose internal vocabulary already maps to “delegated accountability”, “calibration faithfulness”, and “anti-anthropomorphism guardrails” will write defensible policies in days. Organizations still arguing about what “human oversight” means will spend months on the wrong work.

Second, the procurement surface is hardening. Enterprise RFPs for AI systems increasingly ask vendors to describe their governance vocabulary, not just their architecture. A vendor that can answer “what is your policy on cognitive surrender in user workflows?” is positioned differently from one that hears the question as a foreign language.

Third, the knowledge governance terrain is darkening. As more public discourse becomes AI-mediated, the precision of internal vocabulary becomes a competitive moat. Organizations that name failure modes can avoid them. Organizations that cannot, repeat them.

What to Do This Week

Three concrete moves that compound.

Add the three terms to your AI policy glossary, with one-sentence definitions in your own context. Not as a writing exercise. As a forcing function: if cognitive surrender does not have a one-sentence definition that fits your workflow, you do not have a policy on it.

Run one design review against the inverse laws. Pick an AI feature shipping this quarter. Walk through it with the question: “Does this design imply the AI is a person, that its output is authoritative, or that deploying it transfers accountability away from us?” Document the answers. Most teams find at least one of the three.

Ask your model vendors for calibration curves on faithful uncertainty. Not aggregate accuracy. Calibration. If they cannot produce it, that is the answer to a different question you were also going to need to answer.
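A hedged sketch of what that ask can reduce to, assuming the vendor supplies per-prediction (confidence, correctness) pairs from an evaluation set. The function name, threshold, and data below are placeholders, not a standard from any contract:

```python
# Expected calibration error (ECE): the gap between stated confidence and observed
# accuracy, weighted by how often each confidence level is used. Lower is better.
def expected_calibration_error(results, bin_width=0.1):
    bins = {}
    for confidence, correct in results:
        key = min(int(confidence / bin_width), int(1 / bin_width) - 1)
        bins.setdefault(key, []).append((confidence, correct))
    total = len(results)
    ece = 0.0
    for outcomes in bins.values():
        avg_conf = sum(c for c, _ in outcomes) / len(outcomes)
        accuracy = sum(1 for _, ok in outcomes if ok) / len(outcomes)
        ece += (len(outcomes) / total) * abs(avg_conf - accuracy)
    return ece

# Illustrative acceptance gate a contract might encode: reject if ECE exceeds 0.05.
vendor_eval = [(0.9, True), (0.9, True), (0.9, False), (0.6, True), (0.6, False)]
print("ECE:", round(expected_calibration_error(vendor_eval), 3))
```

A single number like this is what makes the requirement contractible: “calibration error below X on our evaluation set” is something procurement can write down and audit can re-run.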

The vocabulary is not the governance. The vocabulary is what makes the governance writable.


This analysis synthesizes Cognitive Surrender (Addy Osmani, May 2026), The Three Inverse Laws of Robotics (Susam Pal, May 2026), and Google Rethinks Hallucinations Through Uncertainty (Google Research, May 2026).

Victorino Group helps boards and architects adopt the vocabulary that distinguishes AI mistakes from AI accountability failures. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →
