AI-Only Is What AI-First Was Supposed to Mean, And Most Boards Don't Know the Difference
Within days of each other in the spring of 2026, two pieces landed that, read together, reframe the entire enterprise AI conversation.
Daniel Schreiber, CEO of Lemonade, published an essay arguing that “AI-First”, the slogan most companies have spent the last two years adopting, was always a transitional concept. The honest endpoint, he argued, is “AI-Only.” His definition is worth reading slowly:
“Extensive workflows in which no human sits inside the operating loop. Humans still set the goals, values, constraints, and the conditions for escalation, but execution runs end-to-end on machines.”
Days later, Ann Miura-Ko (Floodgate) circulated a six-level framework for AI adoption, borrowed in spirit from the SAE autonomy levels used in self-driving cars. Per the framework as summarized in TLDR Founders, most “AI-forward” companies still operate at Level 1 (personal productivity) or Level 2 (functional silos). Level 3, where agents act across CRMs, code, and tickets via MCP and shared skills, is the rare exception. Levels 4 through 6 are mostly aspiration.
These two pieces are not in tension. They are the same diagnostic, viewed from two ends of the telescope.
Schreiber tells us where the road actually leads. Miura-Ko tells us how few cars are anywhere near it. And the gap between them, between the rhetoric of AI-First and the operational reality of AI-Only, is not a model problem. It is an operating-layer problem. Which is what we have been calling, in this series, the governance gap.
The Honesty of AI-Only
“AI-First” was always doing too much work. For some leaders it meant “try AI before defaulting to human work.” For others it meant “every product surface gets a chatbot.” For most it meant nothing operationally specific: a posture, not a target.
Schreiber’s contribution is to remove the ambiguity. AI-Only is not a vibe. It is a structural claim about who sits inside the loop. In an AI-Only workflow, the human is outside the execution path. They define the goal, the policies, the constraints, the escalation conditions, and then they get out of the way. The machine runs the work end-to-end and only surfaces back when one of the human-defined conditions is tripped.
This is what AI-First was supposed to mean. Most companies that say “AI-First” actually mean AI-assisted: a human still drives, the model just helps. That is a real productivity gain, but it is not the destination Schreiber is describing. It is Level 1 or Level 2 on the Miura-Ko ladder. The destination is workflows where the human role moves from operator to specifier.
Lemonade’s self-reported numbers back the framing. Schreiber claims roughly 98% of the company’s code is now AI-written. Headcount is smaller than it was, and revenue tripled over the same period. Whether those exact numbers reproduce in other contexts is a fair question; what matters strategically is that one CEO is willing to publicly stake the claim and define the operating model that produces it.
Most CEOs are not willing to do that. Not because they disagree, but because they cannot see the path from where they are to the operating model Schreiber is describing.
The Ladder Most Companies Are Stuck On
Miura-Ko’s six levels, adapted in spirit from autonomous-vehicle autonomy levels, give that path a shape. The framework is summarized in public commentary rather than a long-form paper, so we hold the level definitions loosely; what matters is the distribution.
The bottom rungs are crowded. Level 1 is personal productivity: individual employees using ChatGPT, Copilot, or Claude for their own tasks. Level 2 is functional silos: marketing has its tools, support has its tools, engineering has its tools, and they do not talk to each other. The vast majority of companies that describe themselves as “AI-forward” live here.
Level 3, where agents act across systems via shared infrastructure like MCP, is where the operational shape changes. Agents at Level 3 are no longer assistants attached to one person’s workflow. They are participants in the company’s workflows, with read and write access to CRMs, code, tickets, and data. This is where the operating layer beneath the models becomes load-bearing.
Levels 4, 5, and 6 ascend further into autonomy and integration. They are the territory Schreiber’s AI-Only definition describes operationally.
The honest read of the framework, even held loosely, is this: the gap between Level 2 and Level 3 is not about better models. It is about the operating layer that lets agents act across systems without breaking them. Most companies do not have that layer. So they stay at Level 2, and call it AI-First.
The Operating Layer Is the Whole Story
We have been circling this idea across this series. The OpenAI 6× productivity gap. The Fortune 500 governance pace. The McKinsey six levels. The Yegge eight levels. The Ably AI-first culture work. The 5Rs framework. Each of those pieces names the same shape from a different angle. Schreiber and Miura-Ko now name it more directly.
The thing in the middle, the thing that is missing in companies stuck at Levels 1 and 2, is what we will call here the operating layer.
The operating layer is the substrate beneath the models. It is what makes an agent’s action safe to take, observable after it is taken, and reversible if it was wrong. Concretely it includes at minimum the following (a code-level sketch follows the list):
- Identity and authorization: agents have credentials of their own, scoped to specific actions on specific systems.
- Action policy: what an agent is allowed to do is declared, not implicit; what is out of scope returns a refusal, not a guess.
- Escalation conditions: the human-in-the-loop is not a constant overhead but a defined trip-wire, Schreiber’s “conditions for escalation.”
- Audit and reconstruction: every agent action is logged in a form a human can read after the fact, so retrospection is possible.
- Reversibility: writes go through paths that can be undone or compensated.
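To make that list concrete, here is a minimal sketch of what such a layer might look like in code. Every name in it (AgentCredential, ActionPolicy, execute_action, and so on) is hypothetical and the logic is deliberately toy-sized; the point is the shape, not an implementation: scoped identity, declared policy, a human-defined trip-wire for escalation, an audit record for every decision, and a compensating undo path.

```python
# Minimal sketch of an operating layer for agent actions.
# All names and structures here are hypothetical illustrations.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass(frozen=True)
class AgentCredential:
    agent_id: str
    allowed_actions: frozenset[str]      # identity scoped to specific actions

@dataclass(frozen=True)
class ActionPolicy:
    action: str
    escalate_if: Callable[[dict], bool]  # human-defined escalation condition

@dataclass
class AuditRecord:
    agent_id: str
    action: str
    params: dict
    outcome: str
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[AuditRecord] = []

def execute_action(cred: AgentCredential,
                   policy: ActionPolicy,
                   params: dict,
                   do: Callable[[dict], str],
                   undo: Callable[[dict], None]) -> str:
    """Run one agent action through the operating layer."""
    # 1. Authorization: out-of-scope actions return a refusal, not a guess.
    if policy.action not in cred.allowed_actions:
        outcome = "refused: action not in credential scope"
    # 2. Escalation: human-defined conditions trip before execution.
    elif policy.escalate_if(params):
        outcome = "escalated: condition tripped, awaiting human decision"
    else:
        # 3. Execution with reversibility: keep the compensating path at hand.
        try:
            outcome = do(params)
        except Exception:
            undo(params)                 # compensate the failed write
            outcome = "reverted: execution failed, compensation applied"
    # 4. Audit: every decision is recorded in a form a human can read later.
    audit_log.append(AuditRecord(cred.agent_id, policy.action, params, outcome))
    return outcome
```

The design choice worth noticing is that refusal, escalation, and reversal are ordinary outcomes of the layer, not exceptions bolted onto the agent afterward.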
None of this is novel software engineering. What is novel is that most companies have never built it for AI agents specifically. They have it for humans (RBAC, audit logs, change management) and they have it for batch systems (idempotency keys, dead-letter queues). They do not have it for an agent that wakes up, reads a CRM, sends an email, and updates a record, because until recently no such actor existed.
The operating layer is the work that has to happen between Level 2 and Level 3. It is the work that has to happen between AI-First rhetoric and AI-Only reality. And it is exactly the work that does not get done when “AI-First” is treated as a posture instead of a target.
What Schreiber Is Not Saying, And What He Is
Schreiber is direct about the social cost of the model he is describing. In his own words:
“growing and permanent unemployment … the social cost will land on workers, families, and communities.”
This is not a slogan from a critic. It is the CEO making the case for AI-Only acknowledging, in the same essay, what that case implies. The honesty is itself notable. Most companies operating somewhere between Level 1 and Level 3 have not done this math out loud, and as a result have not had to defend it.
That avoidance is part of why so many AI strategies stall. A board that has not internalized what AI-Only actually entails, operationally and socially, will not approve the investments needed to build the operating layer. They will fund tools, pilots, and training. They will not fund the substrate. So the company stays at Level 2, generating PowerPoint metrics about AI adoption while the workflows remain human-driven.
The companies that will reach Level 3 and beyond are the ones whose boards understand both ends of Schreiber’s frame: the operating definition and the social cost. You cannot decide to pay one without confronting the other.
What This Means for the Operating Layer
For boards and executive teams reading this in mid-2026, the practical move is narrower than it sounds.
Stop treating AI-First as a destination. It is a posture. The destination is a defined set of workflows where the human moves from operator to specifier, Schreiber’s AI-Only frame applied to the two or three workflows where the math works for your business. Pick those workflows explicitly.
Audit your current level honestly. If your AI program is mostly individuals using assistants in their own work, you are at Level 1. If functions have their own agents that do not coordinate, you are at Level 2. There is no shame in either, but there is no path to Level 3 from a Level 2 self-image. The diagnosis has to be honest before the investment can be sized.
Fund the operating layer, not just the models. Most enterprise AI budgets in 2026 still go to model access, vendor contracts, and individual tooling. The constraint between Level 2 and Level 3 is rarely model quality. It is identity, policy, escalation, audit, and reversibility, the unglamorous substrate that makes agent action safe to authorize. If your CFO has never seen a line item for that substrate, you are not going to Level 3.
Define escalation conditions before you deploy autonomy. Schreiber’s frame is explicit: humans set the conditions for escalation, and execution runs end-to-end until one trips. That sentence is operationally rich. Most companies deploying agents have not enumerated those conditions, which means the agent either escalates everything (Level 1 in disguise) or escalates nothing (the kind of failure that ends programs).
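For concreteness, here is one way that enumeration might look, as a hypothetical sketch; the workflow (customer refunds), the thresholds, and the field names are invented for illustration, not drawn from Schreiber’s essay.

```python
# A hypothetical enumeration of escalation conditions for one workflow.
# The conditions are listed and testable before any autonomy is deployed,
# which rules out both "escalate everything" and "escalate nothing".

ESCALATION_CONDITIONS = {
    "refund_above_limit":   lambda ctx: ctx["refund_amount"] > 500,
    "repeat_complaint":     lambda ctx: ctx["prior_tickets_90d"] >= 3,
    "regulated_geography":  lambda ctx: ctx["region"] in {"EU", "UK"},
    "low_model_confidence": lambda ctx: ctx["confidence"] < 0.80,
}

def should_escalate(ctx: dict) -> list[str]:
    """Return the names of every condition that trips for this action."""
    return [name for name, check in ESCALATION_CONDITIONS.items() if check(ctx)]

# Example: this action runs end-to-end; nothing escalates.
print(should_escalate({
    "refund_amount": 120, "prior_tickets_90d": 0,
    "region": "US", "confidence": 0.93,
}))  # -> []
```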
Hold the social question at board level, not below it. Schreiber put the “growing and permanent unemployment” line in his own essay because the model demands it. A board that delegates this question to HR or comms is not running an AI-Only company. It is running a Level 2 company with AI-Only marketing copy.
The frames converge. AI-First was supposed to mean what AI-Only now actually says. Miura-Ko’s ladder shows the climb. The rung between where most companies sit and where Schreiber’s frame points is the operating layer beneath the models.
That layer is the work. Everything else, model selection, tool licenses, training programs, is downstream of it.
This analysis synthesizes After AI-First Comes AI-Only (Daniel Schreiber / Lemonade, April 2026) and Miura-Ko’s 6-level AI adoption framework (Ann Miura-Ko / Floodgate, May 2026).
Victorino Group helps boards close the gap between AI-First rhetoric and AI-Only reality by building the operating layer beneath the models. Let’s talk.