Vertical AI and the Governance Gap in Professional Services
In February 2026, the six largest enterprise software companies --- Adobe, Microsoft, Salesforce, SAP, ServiceNow, and Oracle --- lost $730 billion in market value in a single month. Microsoft alone shed more than $450 billion. The market is not punishing these companies for poor earnings. It is repricing the entire category because the consensus is forming: AI does not enhance traditional software. It replaces the work that justified buying it.
This is not a rotation within the software industry. It is a structural repricing of what software can capture. And the clearest signal of where that value is migrating sits in a statistic that most coverage has overlooked.
The Budget Line That Changes Everything
According to NEA’s 2025 analysis, U.S. enterprises spend roughly $450 billion annually on software. They spend $11 trillion on labor. For two decades, enterprise software competed for a share of that $450 billion. Vertical AI competes for a share of the $11 trillion.
Bessemer Venture Partners, which coined the term in a January 2026 playbook, makes the distinction explicit: vertical AI targets personnel budgets, not IT budgets. The addressable market is professional services, which the Bureau of Economic Analysis (FRED data, Q4 2024) measures at 13.2% of U.S. GDP, approximately $3.2 trillion. That is roughly seven times the size of the entire software industry.
The shift is not hypothetical. Casetext, an AI legal research platform, was acquired by Thomson Reuters for $650 million in 2023. Lexion, which automates contract workflows, was acquired by DocuSign for $165 million in 2024. Abridge is replacing clinical documentation. EvenUp is assembling personal injury demand letters. Fieldguide is conducting audit procedures. These companies are not selling tools to professionals. They are performing the cognitive work those professionals used to do.
Bessemer reports that LLM-native companies in this category are growing at approximately 400% year-over-year, reaching 80% of the annual contract value of traditional SaaS, with around 65% gross margins. At least five are projected to reach $100 million in annual recurring revenue within two to three years. The economics work. The trajectory is clear.
Most analysis stops here. Opportunity identified, market sized, winners predicted. But the story that matters --- the one almost nobody is telling --- is what this shift demands from the organizations that deploy it.
Why This Is Not a Software Problem
When AI competed for IT budgets, governance was relatively contained. The CIO bought a tool. The IT team configured it. Usage policies were written. Security reviewed the vendor. The organizational surface area was limited because the tool augmented existing workflows without replacing the people who executed them.
Vertical AI breaks this model completely.
When an AI system interprets contracts, it is not assisting a lawyer. It is performing legal analysis. When it generates preliminary medical opinions, it is not supporting a clinician. It is practicing diagnostic reasoning. When it produces audit procedures, it is not helping an auditor. It is conducting the audit.
The governance implications of this shift are fundamentally different from anything the enterprise software era required. Three structural changes demand attention.
The liability surface expands. Software that helps a professional work faster leaves liability with the professional. AI that performs the cognitive work transfers liability to the organization that deployed it. If an AI-generated contract analysis misses a material risk, the question is not whether the lawyer reviewed it carefully enough. The question is whether the organization had governance infrastructure to ensure the AI’s output met professional standards. This is a new category of organizational risk that most professional services firms have no framework to manage.
Professional standards apply to machines. Professional services exist within regulatory frameworks --- bar associations, medical boards, accounting standards bodies, engineering licensure. These frameworks assume human practitioners. When AI performs the work, the standards do not disappear. They apply to the organization deploying the AI, which now bears the burden of demonstrating that its AI systems meet standards designed for human cognition. No major professional standards body has fully addressed this. The governance gap is regulatory, not just operational.
Employment decisions become AI decisions. When AI competes for personnel budgets, deploying it is not an IT decision. It is a workforce decision. Bessemer’s playbook describes this approvingly --- revenue growing without proportional headcount. But the organizational reality is that someone must decide which roles are augmented, which are replaced, and how the transition is managed. These are decisions with legal, ethical, and reputational consequences that no IT governance framework was designed to address.
Progressive Delegation Requires Progressive Governance
Bessemer’s playbook introduces a useful framework: the spectrum from copilots (AI assists humans) to agents (AI acts autonomously) to AI-enabled services (AI delivers the service directly). They call the transition “progressive delegation.”
The framework is accurate. What it omits is that each step in the delegation spectrum requires a corresponding step in governance infrastructure.
A copilot that suggests contract language requires review mechanisms. The human remains in the loop. Governance is lightweight: ensure the professional reviews the suggestion before it becomes work product.
An agent that drafts entire contract analyses requires verification infrastructure. The human reviews output, not process. Governance must include quality assurance protocols, output validation standards, and clear accountability for when the agent’s work is wrong.
An AI-enabled service that delivers contract analysis directly to clients requires institutional governance. There is no human in the loop for individual work products. Governance must encompass professional liability frameworks, regulatory compliance monitoring, client disclosure requirements, and systematic quality auditing.
Most organizations adopting vertical AI are planning for the copilot stage while purchasing technology built for the agent stage. The governance gap between where they are and where their tools will take them is widening, not narrowing.
The Hybrid Category Problem
Marcelo Amorim, writing for Quartzo Venture Capital, identifies something the U.S. coverage has largely missed: vertical AI creates a new hybrid category that is software, services, and AI natively combined. It is not a software company that uses AI. It is not a services firm that deploys software. It is something that did not exist before --- an organization that delivers professional-grade cognitive work through technology, at software-like margins and scale.
This hybrid category breaks existing governance frameworks because those frameworks were designed for the categories that preceded it.
Software governance assumes the software is a tool and the human is the practitioner. Services governance assumes the practitioner is a human with professional credentials and individual accountability. The hybrid category has neither a clear tool-practitioner boundary nor individual human accountability for cognitive output.
Consider a concrete example. A vertical AI company delivers medical screening analysis. Is it a software vendor subject to SOC 2 and HIPAA technical safeguards? Is it a healthcare provider subject to clinical quality standards and malpractice liability? Is its product a medical device subject to FDA regulation? The answer may be all three, or none, depending on how the product is structured and how regulators eventually classify it. Organizations deploying these products inherit this ambiguity.
The firms that Amorim describes --- small specialized teams augmented by AI delivering work volumes previously requiring large organizations --- face this ambiguity at its most acute. They have the output of a large firm, the governance infrastructure of a small one, and the regulatory exposure of something that has no established precedent.
What Organizations Actually Need
The conversation about vertical AI is dominated by opportunity sizing. The conversation that is missing is about the organizational infrastructure required to capture that opportunity without creating unmanageable risk.
Four capabilities matter.
Professional-grade output verification. When AI performs cognitive work previously done by credentialed professionals, the output must be verified against professional standards, not just technical accuracy. An AI-generated legal analysis that is technically correct but strategically wrong is a governance failure, not a model failure. Verification infrastructure must be domain-specific, not generic.
Liability architecture. Organizations need clear, documented answers to the question: when AI-generated work product causes harm, who is accountable? The answer cannot be “the AI” or “nobody.” It must be a specific organizational function with defined authority, budget, and escalation paths. This is not a legal question alone. It is an organizational design question.
Regulatory translation. Professional standards bodies are moving slower than the technology. Organizations deploying vertical AI must interpret existing professional standards in the context of AI delivery and document their compliance posture before regulators finalize their frameworks. Waiting for regulatory clarity is not a strategy. It is a bet that nothing will go wrong before the rules arrive.
Workforce transition governance. When AI competes for personnel budgets, the workforce implications cannot be managed as a side effect of technology deployment. Retraining programs, role redefinition, transition timelines, and stakeholder communication require deliberate governance, not afterthoughts.
The Question That Matters
Vertical AI will reshape professional services. The $3.2 trillion market, the 400% growth rates, the successful acquisitions, the economic logic of software-leveraged delivery --- none of this is in doubt. The market has already priced it in, as the $730 billion in enterprise software losses attest.
The question that actually matters is different: Do the organizations deploying vertical AI have the governance infrastructure to do it responsibly?
The honest answer, for most organizations today, is no. They have IT governance frameworks designed for software tools, compliance programs designed for human practitioners, and risk management approaches designed for a world where professional judgment was always human judgment.
Vertical AI makes governance more important, not less. When the stakes were limited to software efficiency, governance failures meant wasted IT budgets. When the stakes include professional liability, employment decisions, and the replacement of human cognitive work, governance failures mean something else entirely.
The organizations that will lead in this era are not the ones that adopt vertical AI fastest. They are the ones that build the governance infrastructure to deploy it at the standard their clients, regulators, and workforce require.
Sources
- NEA. “U.S. Enterprise Labor vs. Software Spend Analysis.” 2025.
- Christine Deakers. “Building Vertical AI: An Early-Stage Playbook for Founders.” Bessemer Venture Partners, January 5, 2026.
- Marcelo Amorim. “Vertical AI: quando os serviços profissionais passam a escalar além do SaaS.” Quartzo Venture Capital, February 13, 2026.
- U.S. Bureau of Economic Analysis. “Value Added by Industry: Professional and Business Services.” FRED, Q4 2024.
- Thomson Reuters. “Acquisition of Casetext.” Press release, 2023.
- DocuSign. “Acquisition of Lexion.” Press release, 2024.
- Satya Nadella. Public statements on AI agents and business applications, February 2026.
At Victorino Group, we help organizations build the governance infrastructure that vertical AI deployment requires --- from liability architecture to professional-grade verification systems. If you are evaluating vertical AI for your organization, let’s talk.