Seven Requirements for Institutional AI: What Individual Productivity Cannot Buy
Since we wrote about The Institutional AI Gap, the response confirmed what we suspected: organizations feel this problem acutely but lack the vocabulary for it. They know individual AI productivity is not translating into institutional value. They do not know what, specifically, is missing.
George Sivulka, CEO of Hebbia, published the missing piece in an a16z essay this month. Where our analysis diagnosed the structural disconnect and traced its historical precedent, Sivulka names seven specific requirements that separate individual AI tools from institutional AI systems. The framework is worth examining in detail because it converts a diagnosis into a checklist.
His opening line captures the problem with precision: “AI just made every individual 10x more productive. No company became 10x more valuable as a result.”
The Seven Requirements
Sivulka’s framework is not a maturity model or a capability ladder. It is a list of structural properties that organizations need to build before AI scales beyond the individual. Each one represents a dimension where personal productivity tools fail to produce institutional outcomes.
I will walk through all seven, adding a governance lens that Sivulka’s framework implies but does not fully develop.
1. Coordination: From Parallel to Convergent
Individual AI creates parallel productivity. Ten people work faster in isolation. Institutional AI creates convergent productivity: ten people producing outputs that fit together.
The coordination requirement is first because it is foundational. Without it, every other requirement produces faster divergence. As we documented in The Amplifier Effect, AI accelerates whatever organizational dynamic already exists. If your teams coordinate poorly without AI, they will coordinate catastrophically with it.
Governance implication: coordination requires explicit decision rights, defined handoff points, and shared quality standards. These are not project management artifacts. They are the operating system that converts individual acceleration into collective velocity.
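To make "operating system" concrete, here is a minimal sketch, in illustrative Python, of what a handoff point looks like when it is encoded as an explicit artifact rather than described in a project plan. Every name in it (the gate, the teams, the check labels) is hypothetical; the point is that decision rights and shared quality standards become inspectable data.

```python
from dataclasses import dataclass

# Hypothetical illustration: a handoff contract between two teams,
# encoded as data rather than buried in a project plan.
@dataclass
class HandoffGate:
    producer: str               # team that hands work off
    consumer: str               # team that receives it
    decision_owner: str         # who holds the explicit right to approve
    required_checks: list[str]  # shared quality standards at the boundary

def can_hand_off(gate: HandoffGate, passed_checks: set[str]) -> bool:
    """Work crosses the boundary only when every shared standard is met."""
    return all(check in passed_checks for check in gate.required_checks)

review_gate = HandoffGate(
    producer="ai-assisted-dev",
    consumer="release-engineering",
    decision_owner="tech-lead",
    required_checks=["tests-pass", "security-scan", "human-review"],
)

print(can_hand_off(review_gate, {"tests-pass", "security-scan"}))  # False
```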
2. Signal Detection: Deterministic Over Nondeterministic
When production becomes cheap, evaluation becomes the bottleneck. Sivulka argues that institutional AI must favor deterministic signal detection over probabilistic guessing. The organization needs systems that reliably surface what matters from the flood of AI-generated output.
This is where most organizations are drowning. More code gets written, more content gets drafted, more analyses get produced. The volume has increased. The ability to evaluate which outputs deserve attention has not kept pace.
The solution is not more AI. It is better filters. Automated review of automated output, with clear escalation criteria and human decision points where the stakes warrant them.
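What might such a filter look like in practice? A minimal sketch, assuming a code-review pipeline; the rule names and record fields are hypothetical, but the structure is the point: every escalation criterion is an explicit, deterministic predicate, so the same output always routes the same way.

```python
# Hypothetical sketch of a deterministic triage filter for AI-generated
# output: each rule is an explicit predicate, not a probabilistic guess.
ESCALATION_RULES = [
    ("touches-payment-code", lambda item: "payments/" in item["path"]),
    ("large-diff",           lambda item: item["lines_changed"] > 500),
    ("no-tests-added",       lambda item: item["tests_added"] == 0),
]

def triage(item: dict) -> tuple[str, list[str]]:
    """Route an output to a human reviewer, or auto-accept it."""
    triggered = [name for name, rule in ESCALATION_RULES if rule(item)]
    return ("escalate-to-human" if triggered else "auto-accept", triggered)

decision, reasons = triage(
    {"path": "payments/refund.py", "lines_changed": 40, "tests_added": 1}
)
print(decision, reasons)  # escalate-to-human ['touches-payment-code']
```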
3. Objectivity: The Disciplined No
This is Sivulka’s most provocative requirement. Foundation models, trained through reinforcement learning from human feedback, are structurally inclined to agree with users. For individuals, this feels helpful. For organizations, it is poison.
“The most valuable agents inside organizations will be disciplined no-men that interrogate reasoning.”
That quote deserves to be printed on every AI governance policy. Organizations rarely fail because people lack confidence or conviction. They fail because no one is willing, or structurally positioned, to challenge the consensus. If your AI systems confirm every assumption and validate every strategy, you have built a sycophancy engine at institutional scale.
Governance implication: AI systems used for institutional decisions must be designed to challenge, not confirm. This means building review agents, contradiction detection, and assumption-testing into workflows. The AI that saves the company will be the one that says no.
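One way to build that challenge into a workflow, sketched in illustrative Python. The `call_model` parameter stands in for whatever LLM client an organization uses; it is not a real API, and the prompt is an assumption about how such a review agent might be framed, not a prescription.

```python
# Hypothetical sketch of a "disciplined no" review step. `call_model`
# is a stand-in for any LLM client; the key design choice is that the
# agent is instructed to object, never to validate.
CHALLENGE_PROMPT = """You are a reviewer whose only job is to find flaws.
Do not praise or validate. For the proposal below, list:
1. The weakest assumption and why it might be false.
2. One piece of evidence that would contradict the conclusion.
3. The decision you would make if that assumption fails.

Proposal:
{proposal}
"""

def challenge(proposal: str, call_model) -> str:
    """Return a structured objection instead of a confirmation."""
    return call_model(CHALLENGE_PROMPT.format(proposal=proposal))
```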
4. Domain-Specific Edge
General-purpose AI tools commoditize fast. ChatGPT, Claude, Gemini: these are table stakes. Every competitor has access to the same capabilities.
The institutional edge comes from AI systems that encode proprietary knowledge, organizational logic, and domain-specific data that competitors cannot replicate. Sivulka’s context window data illustrates the scale: windows grew from 4,000 to over 1 million tokens in four years. Some enterprise users now process 30 billion tokens per job. That volume of processing is only valuable if the data being processed is uniquely yours.
This maps directly to what Kent Beck identified as the overlooked value levers. The biggest returns come not from doing existing work faster, but from enabling work that was previously impossible. Domain-specific AI makes previously impossible analysis tractable. General-purpose AI just makes existing analysis quicker.
5. Revenue Outcomes, Not Productivity Metrics
Most individual AI usage optimizes for speed. Faster drafts, faster code, faster analysis. Sivulka’s fifth requirement forces the question: did any of that speed produce revenue?
The distinction is not academic. We have seen organizations where AI-assisted developers ship 3x more code, but defect rates climb, review bottlenecks multiply, and the net effect on delivery time is zero. Berkeley researchers found a parallel pattern: AI intensifies work rather than reducing it, because productivity gains get absorbed by expanded scope rather than converted to outcomes.
Institutional AI must be measured on business results. Revenue per employee. Cycle time to customer value. Defect rates. Decision quality. If the only metric improving is “volume of stuff produced,” the institution is running harder without moving forward.
6. Change Enablement: Process Before Technology
Sivulka points to Palantir’s success as evidence that the winning AI companies are really process engineering companies. They succeed not because their models are better, but because they encode organizational processes into software.
This is the requirement that most organizations skip. They deploy AI tools without first mapping the workflows those tools are supposed to accelerate. The result is predictable: the AI automates an undefined process and produces undefined results.
BCG’s finding that 70% of AI failures trace to people and process, not technology, is the statistical backing for this requirement. You cannot engineer change through a tool. You engineer it through process clarity, role definition, and explicit workflow documentation. The AI comes after.
7. Unprompted Action: From Reactive to Proactive
The final requirement distinguishes institutional AI from everything that came before. Individual AI waits for a prompt. You ask a question, you get an answer. Institutional AI operates continuously: monitoring for risks, identifying opportunities, flagging anomalies that no one thought to ask about.
This is where the “agent” label finally earns its name. Not a chatbot that responds to queries, but a system that initiates action based on institutional priorities. A compliance monitor that surfaces regulatory exposure before the audit. A market intelligence system that detects competitive shifts before the quarterly review. A quality gate that flags degradation before customers notice.
Proactive operation requires all six preceding requirements. Without coordination, proactive agents create chaos. Without signal detection, they generate noise. Without objectivity, they reinforce existing blind spots. Without domain specificity, they surface generic observations. Without outcome orientation, they optimize for the wrong targets. Without process clarity, they automate the wrong workflows.
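To make "unprompted" concrete: a minimal sketch of the control loop behind such an agent. The thresholds, and the `read_metrics` and `open_ticket` callables, are hypothetical stand-ins for an organization's observability and ticketing integrations; nothing here is a real API.

```python
import time

# Hypothetical sketch of an unprompted agent loop: it runs on its own
# schedule, checks institutional signals, and acts before anyone asks.
THRESHOLDS = {"defect_rate": 0.05, "p95_latency_ms": 800}

def monitor_once(read_metrics, open_ticket) -> None:
    """One pass: compare live metrics to agreed thresholds and act."""
    metrics = read_metrics()  # e.g., pulled from an observability stack
    for name, limit in THRESHOLDS.items():
        if metrics.get(name, 0) > limit:
            open_ticket(f"{name} breached: {metrics[name]} > {limit}")

def run(read_metrics, open_ticket, interval_s: int = 300) -> None:
    while True:  # no prompt: the agent initiates on its own clock
        monitor_once(read_metrics, open_ticket)
        time.sleep(interval_s)
```

Note what the loop depends on: agreed thresholds (outcome orientation), explicit metrics (signal detection), and a defined escalation path (coordination). That is the dependency the preceding paragraph describes.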
What the Framework Reveals
Sivulka’s seven requirements, taken together, expose a truth that the AI industry prefers not to advertise. The technology is not the hard part. Context windows, model capabilities, inference speed: these improve on a predictable curve. The hard part is institutional.
“Pure software is rapidly becoming uninvestable,” Sivulka writes. The implication is stark. If the technology layer commoditizes, the only durable advantage is organizational. The company that builds institutional AI capability wins not because it has better models, but because it has better coordination, clearer processes, and stronger governance.
This is what we meant when we described the institutional AI deficit using the electrification parallel. The electricity is installed. The factory remains unchanged. Sivulka’s framework tells you exactly which walls to move.
The Governance Thread
Every one of Sivulka’s seven requirements is, at its core, a governance requirement. Coordination requires decision rights. Signal detection requires evaluation criteria. Objectivity requires challenge mechanisms. Domain edge requires data governance. Revenue orientation requires outcome measurement. Change enablement requires process documentation. Proactive operation requires monitoring frameworks.
The organizations that will close the distance between individual AI productivity and institutional value are not the ones with the best tools. They are the ones that build the governance infrastructure to make those tools converge on outcomes.
The seven requirements are the blueprint. The governance layer is the foundation they rest on.
This analysis builds on Institutional AI vs Individual AI (March 2026) by George Sivulka, CEO of Hebbia.
Victorino Group helps enterprises close the distance between individual AI productivity and institutional value, from structural diagnosis to governance implementation. Let’s talk.