- You Cannot Govern What You Cannot Simulate
James Cham, a venture capitalist who has been thinking about AI longer than most, put it this way: “Work used to be first person shooter, where you’re directing every movement and every shot. It might become more like Starcraft, where you have to move people and agents around to achieve your objectives.”
The analogy is better than it sounds. In a first-person shooter, you control one character. You see what they see. You pull the trigger. In StarCraft, you command dozens of units across a map you can only partially observe. You issue orders, allocate resources, react to incomplete intelligence. The skill is not aiming. The skill is deciding where to look, what to build, and when to retreat.
Companies deploying AI agents at scale are making this transition right now. And most of them are playing StarCraft with first-person-shooter instincts.
The Question Nobody Can Answer
Ask a VP of operations at a company running thirty AI agents a simple question: “If we change the escalation threshold on our customer service agents from 3 minutes to 5 minutes, what happens to resolution rates, customer satisfaction, and agent cost next quarter?”
They cannot answer it. Not because they lack intelligence or data, but because no system connects those variables in a way that produces a testable prediction. They have dashboards. They have monitoring. They have logs. What they do not have is a model of how their business actually works when agents are part of it.
Rohit Krishnan, writing in Strange Loop Canon, argues this is the central missing piece for agent-heavy organizations. He calls it the “enterprise world model.” The concept borrows from autonomous driving, where companies like Waymo and Tesla build simulation engines that model how the physical world responds to a vehicle’s actions. A world model answers one question: if I do X, what happens?
Krishnan’s argument: orchestration platforms, observability tools, RL environments, enterprise software suites. All of these are “features of the enterprise world model.” None of them, alone or together, answers the question. The enterprise world model is the integration layer that turns data into prediction.
Why Existing Tools Fall Short
Consider what companies already have.
Orchestration tells you what agents are doing right now. It routes tasks, manages queues, handles failures. It is present tense. Ask it “what would happen if” and it has nothing to say.
Observability tells you what agents did. Logs, traces, metrics. It is past tense. You can spot patterns after they emerge. You cannot test interventions before you deploy them.
Analytics tells you what happened to business metrics. Revenue went up. Churn went down. But correlation is not mechanism. Did churn drop because the agents responded faster, or because a competitor raised prices? Analytics shows outcomes without causation.
RL environments let agents learn through trial and error. But they optimize individual agent behavior within fixed parameters. They do not model how changing one agent’s behavior cascades through the rest of the system.
The enterprise world model sits above all of these. It connects them. It asks: given the current state of the business (from observability), the current agent configuration (from orchestration), and the current market conditions (from analytics), what happens if we change variable Y by Z percent?
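To make that concrete, here is a minimal sketch of what such a what-if query might look like. Everything in it is an assumption for illustration: the `BusinessState` fields, the hypothetical `simulate` function, and especially the hard-coded linear response coefficients, which a real world model would have to fit from logged agent behavior rather than invent.

```python
from dataclasses import dataclass

@dataclass
class BusinessState:
    # Snapshots a world model would pull from the existing layers.
    escalation_threshold_min: float   # from orchestration config
    avg_resolution_min: float         # from observability logs
    csat: float                       # from analytics (0 to 1)

def simulate(state: BusinessState, new_threshold_min: float) -> BusinessState:
    """Toy forward model: run the business one step with a changed threshold.

    The linear response coefficients below are placeholders; a real world
    model would estimate them from logged agent behavior, not hard-code them.
    """
    delta = new_threshold_min - state.escalation_threshold_min
    return BusinessState(
        escalation_threshold_min=new_threshold_min,
        # Longer thresholds let agents finish more tickets unaided...
        avg_resolution_min=state.avg_resolution_min + 0.8 * delta,
        # ...but customers wait longer before a human steps in.
        csat=max(0.0, min(1.0, state.csat - 0.01 * delta)),
    )

today = BusinessState(escalation_threshold_min=3, avg_resolution_min=12, csat=0.86)
projected = simulate(today, new_threshold_min=5)
print(projected)  # a testable prediction, not a dashboard readout
```

The point is not the toy arithmetic; it is that the output is a falsifiable prediction you can later score against observability data.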
As we explored in The Agent Operations Paradox, adding more agents creates compounding operational complexity. Three forces interact: agents as team members, infrastructure unreliability, and rising expectations. An enterprise world model would let you simulate these interactions before they surprise you in production.
The Driving Analogy, Extended
The parallel to autonomous driving is instructive, and it runs deeper than borrowed vocabulary.
Waymo does not deploy a self-driving car and hope it performs well. Before any car touches a public road, it runs millions of simulated miles. The simulation engine models physics, traffic patterns, pedestrian behavior, weather, sensor noise. Engineers change a parameter (say, following distance in rain) and observe cascading effects across thousands of scenarios.
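In miniature, that workflow is a parameter sweep over a stochastic simulator. The toy version below is purely illustrative: the scenario model, the Gaussian stopping-distance stand-in for physics and sensor noise, and every number in it are invented, not anything Waymo uses.

```python
import random

def simulate_scenario(following_distance_m: float, rng: random.Random) -> bool:
    """Toy scenario: does the car stop in time when the lead vehicle brakes?

    Stopping distance is drawn from a noisy distribution standing in for
    physics, weather, and sensor noise; real simulators model those directly.
    """
    stopping_distance = rng.gauss(mu=30.0, sigma=8.0)  # metres, invented
    return following_distance_m >= stopping_distance

def sweep(distances_m: list[float], trials: int = 10_000, seed: int = 0) -> dict[float, float]:
    """Estimate the collision-free rate for each candidate following distance."""
    rng = random.Random(seed)
    return {d: sum(simulate_scenario(d, rng) for _ in range(trials)) / trials
            for d in distances_m}

for distance, safe_rate in sweep([25.0, 35.0, 45.0]).items():
    print(f"{distance:>5.1f} m -> {safe_rate:.1%} of scenarios safe")
```

Change one parameter, rerun thousands of scenarios, read off the tradeoff. That loop, not any particular physics engine, is what business operations currently lack.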
Now consider how most companies deploy AI agents. They build the agent. They test it on a handful of scenarios. They deploy it. They monitor dashboards. When something goes wrong, they diagnose it after the fact and patch. There is no simulation step. There is no “what happens if we change the escalation threshold” before the change goes live.
The gap between these two approaches is architectural, not a matter of maturity. The simulation infrastructure for business operations simply does not exist the way it exists for driving. Building it requires connecting financial systems, CRM data, agent behavior logs, customer interaction patterns, and market signals into a coherent model that can run forward in time.
What a World Model Actually Enables
Krishnan offers a concrete example. A real estate company discovers that properties where managers respond to inquiries within 20 minutes convert at twice the rate of those where response takes longer. Useful insight. But it is backward-looking.
An enterprise world model would take that insight and ask: if we deploy an AI agent to handle initial responses and guarantee sub-5-minute reply times, what happens to conversion rates? What happens to manager workload? Does the agent’s response quality affect downstream conversion differently than a human’s? What is the cost curve? At what volume does the agent’s API cost exceed the revenue from incremental conversions?
These are not hypothetical questions. They are the questions that executives need answered before making deployment decisions. Today, those decisions are made on intuition, pilot results from small samples, and vendor promises.
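The last question in that list, where the cost curve crosses, reduces to arithmetic once a world model supplies the parameters. A hedged sketch with invented numbers (the conversion lift, deal margin, per-reply cost, and platform fee below are assumptions for illustration, not figures from Krishnan's example):

```python
def breakeven_inquiries(cost_per_reply: float,
                        conversion_lift: float,
                        margin_per_conversion: float,
                        fixed_monthly_cost: float) -> float:
    """Monthly inquiry volume above which the agent pays for itself.

    Each inquiry yields conversion_lift * margin_per_conversion in expected
    incremental revenue and costs cost_per_reply in API spend; the fixed
    platform fee sets the break-even volume.
    """
    surplus = conversion_lift * margin_per_conversion - cost_per_reply
    if surplus <= 0:
        raise ValueError("agent never pays for itself at these parameters")
    return fixed_monthly_cost / surplus

# Hypothetical numbers: $0.05 API cost per reply, +1.5% conversion lift,
# $400 margin per conversion, $500/month platform fee.
print(breakeven_inquiries(0.05, 0.015, 400.0, 500.0))
```

With these invented numbers the agent needs roughly 84 inquiries a month to cover its fixed fee. The arithmetic is trivial; the hard part is estimating the conversion lift, which is precisely what the world model exists to do.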
The shift from reactive monitoring to predictive simulation changes the nature of management itself. As we examined in From In-the-Loop to On-the-Loop, the companies that succeed with AI agents are building systems around agents rather than reviewing individual outputs. An enterprise world model is the logical endpoint of on-the-loop management: instead of watching what agents do and intervening when things break, you simulate interventions before deploying them and monitor for deviations from predicted outcomes.
Management becomes, as Krishnan puts it, “triage and simulation.” Review the deltas between predicted and actual performance. Score outcomes. Simulate proposed interventions. Deploy the ones that model well. The manager’s job is not directing agents. It is tuning the model.
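That triage step can be sketched in a few lines: compare what the simulation predicted against what observability actually measured, and surface the metrics that drifted. The metric names and the 10% tolerance here are illustrative assumptions, not part of Krishnan's proposal.

```python
def triage(predicted: dict[str, float],
           actual: dict[str, float],
           tolerance: float = 0.10) -> list[str]:
    """Return metrics whose actual value deviates from the prediction
    by more than `tolerance` (relative), ordered worst-first."""
    flagged = []
    for metric, pred in predicted.items():
        act = actual.get(metric)
        if act is None:
            continue  # metric not yet measured; nothing to score
        rel_delta = abs(act - pred) / max(abs(pred), 1e-9)
        if rel_delta > tolerance:
            flagged.append((rel_delta, metric))
    return [metric for _, metric in sorted(flagged, reverse=True)]

predicted = {"resolution_rate": 0.92, "csat": 0.85, "cost_per_ticket": 1.40}
actual    = {"resolution_rate": 0.91, "csat": 0.71, "cost_per_ticket": 1.95}
print(triage(predicted, actual))
```

With these inputs, cost_per_ticket (off by about 39%) and csat (off by about 16%) get flagged, while resolution_rate stays within tolerance. The manager's attention goes to the deltas, not to individual agent outputs.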
The Honest Objection
There is a serious objection to this vision, and it showed up in the comments on Krishnan’s piece. Companies are not physics simulations. They are social and political systems. People resist formalization. Departments protect territory. Incentives misalign in ways that no model captures.
This objection deserves respect because it is historically correct. Every attempt to build a “complete model” of a business has failed. ERP systems promised this in the 1990s. Business process reengineering promised it before that. The models were always incomplete, and the incompleteness was not in the data. It was in the human behavior that resisted being modeled.
A second objection: better visibility may produce worse outcomes. When managers can see everything and simulate everything, they intervene more, not less. The temptation to micro-optimize is real. Sometimes the right management decision is to leave a system alone and let local actors adapt. A world model that makes intervention easy could produce a culture of constant tinkering that destabilizes the system it claims to optimize.
Both objections are valid. Neither is fatal.
The difference between the 1990s ERP vision and an enterprise world model for agent-governed organizations is that agents, unlike humans, actually do behave in formally describable ways. Their decision logic is inspectable. Their inputs and outputs are logged. Their behavior under different configurations is testable. The part of the business that agents handle is, in principle, simulatable in ways that human-driven processes never were.
The human layer remains messy, political, and resistant to formalization. But if 60% of your operational decisions are being made by agents (a number some companies are approaching), modeling that 60% accurately is enormously valuable even if the remaining 40% stays opaque.
What This Means for Governance
Here is where the concept connects to something concrete.
You cannot govern what you cannot predict. Governance frameworks today ask: what are the agents doing? Are they compliant? Are they within policy bounds? These are necessary questions. They are also insufficient.
The governance question that matters is: what will happen if we change this policy, this threshold, this agent configuration? Without a simulation layer, policy changes are experiments run in production on live customers and live revenue. With a simulation layer, they are hypotheses tested before deployment.
OpenAI appears to understand this. Krishnan notes that OpenAI is manually building something resembling an enterprise world model through its Thrive Capital partnership, embedding engineers directly into portfolio companies to understand how business systems connect. The approach is labor-intensive and does not scale. But it signals that the most resourced AI company in the world sees this as a necessary layer.
The companies that build this simulation capability first will have a structural advantage. Not because they have better agents, but because they can tune their agents with confidence instead of hope. They will run fewer failed experiments, waste less time on interventions that backfire, and catch cascading failures before they cascade.
That is a governance advantage, not a technology one. And in a world where every company is becoming an agent-heavy organization, governance is the competitive surface that matters most.
This analysis synthesizes Rohit Krishnan’s “The Enterprise World Model” (March 2026) in Strange Loop Canon, including commentary from James Cham on the shift from first-person management to strategy-game management of AI agent workforces.
Victorino Group helps organizations build governance infrastructure for AI agent operations, from simulation to production. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.