The AI Control Problem

The Governance Layer: What the Cybersecurity Selloff Reveals About AI-Era Security

Thiago Victorino

On February 20, 2026, Anthropic announced Claude Code Security — a vulnerability detection tool that uses reasoning-based analysis instead of pattern matching. Within hours, the BUG Cyber ETF dropped roughly 5%, hitting its lowest level since November 2023. CrowdStrike fell 8%. Cloudflare fell 8.1%. Okta fell 9.2%. SailPoint fell 9.4%. Qualys reportedly fell 12%.

The market moved fast. The market also moved wrong — or at least, it moved for the wrong reasons.

The selloff is not evidence that AI will replace cybersecurity companies. It is evidence that the market finally noticed something that has been true for months: the detection layer of security is being compressed, and the companies whose value proposition depends primarily on finding known patterns are repricing accordingly.

But the full security function — governance, compliance, identity management, incident response — is not being compressed. It is expanding. And the distinction between what is shrinking and what is growing is the most important signal in this entire event.

The Context the Market Ignored

Here is the number that reframes the selloff: the broader software sector was already down approximately 23% year-to-date before Anthropic’s announcement. CrowdStrike had already lost 22% of its value. Atlassian, Datadog, and Workday — companies with no direct exposure to AI vulnerability scanning — dropped 4.6% to 10% the same week.

Claude Code Security did not cause a repricing. It accelerated one that was already underway.

Barclays called the selloff “incongruent” with the actual product, which at announcement was a limited research preview. Jefferies echoed the sentiment. The market was not responding to a product. It was responding to an idea — the idea that reasoning-based AI can compress the value of pattern-matching security tools.

That idea is correct. The magnitude of the response is not.

What Claude Code Security Actually Does

Anthropic’s tool uses multi-stage self-verification to find vulnerabilities. It reasons about code rather than matching against known patterns. It claims to have found over 500 vulnerabilities “undetected for decades.”

Three things about that claim deserve careful handling.

First, Anthropic has published no methodology for how those vulnerabilities were identified, no false positive rate, and no independent verification. “500 vulnerabilities” is a marketing number until it is a research number. The distinction matters.

Second, the tool requires human-in-the-loop review for all patches. This is not a limitation — it is a design choice that reflects the current state of AI capability. The tool finds. Humans decide. That boundary is important and intentional.

Third, reasoning-based vulnerability detection is a genuine advancement over pattern matching. Finding novel vulnerabilities in legacy codebases is something that static analysis tools have struggled with for decades. If the claims hold under scrutiny, this is a meaningful capability improvement.

But capability improvement in detection is not the same as replacement of the security function. And this is where the market’s reaction reveals its own misunderstanding.

The Detection Value Chain Is Compressing

Here is what is actually happening: detection is becoming a commodity.

If an AI model can reason about code to find vulnerabilities, the marginal cost of finding the next vulnerability approaches the cost of inference. This compresses the value of any security product whose primary offering is “we find bugs.”

Qualys, which dropped the most, is essentially a vulnerability scanning company. Its core value proposition is detection. When detection becomes cheap, the value of a detection-focused company declines. The market priced this correctly.

But CrowdStrike’s value proposition is not detection. It is endpoint protection, threat intelligence, incident response, and platform integration. Okta’s value proposition is identity management. SailPoint’s is identity governance. These are not detection companies. They are governance companies that happen to include detection as a feature.

The market treated them as if they were all the same. They are not.

The Governance Problem Is Getting Larger, Not Smaller

While the market was panicking about AI replacing security tools, the actual data tells a different story: the governance problem is expanding faster than AI can compress it.

The Kiteworks 2026 report, surveying 225 security leaders, found that 63% of organizations cannot enforce purpose limitations on their AI agents. Sixty percent cannot terminate misbehaving agents. These are not detection problems. They are governance problems — and they have no AI shortcut.
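Those two failures, enforcing purpose limits and terminating a misbehaving agent, are concrete interface requirements rather than abstractions. A minimal sketch of what enforcing both could look like (all class and method names here are hypothetical, not any vendor's API):

```python
from dataclasses import dataclass


@dataclass
class AgentPolicy:
    """Purpose limitation: the explicit set of actions this agent may take."""
    allowed_actions: set[str]
    terminated: bool = False


class GovernedAgent:
    """Hypothetical wrapper that checks every action against policy
    and supports a hard kill switch."""

    def __init__(self, name: str, policy: AgentPolicy):
        self.name = name
        self.policy = policy
        self.audit_log: list[str] = []

    def act(self, action: str) -> bool:
        if self.policy.terminated:
            self.audit_log.append(f"DENIED (terminated): {action}")
            return False
        if action not in self.policy.allowed_actions:
            self.audit_log.append(f"DENIED (out of purpose): {action}")
            return False
        self.audit_log.append(f"ALLOWED: {action}")
        return True

    def terminate(self) -> None:
        """Kill switch: every subsequent action is refused."""
        self.policy.terminated = True
```

The point is not the few lines of Python; it is that both controls sit outside the model, in ordinary software the organization owns and audits.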

The Strata Identity 2026 report found that only 23% of organizations have formal agent identity management strategies. Ninety-two percent are not confident their legacy identity and access management systems can handle AI and non-human identity risks.

The Docker State of Agentic AI 2026 survey (800+ respondents from Docker’s developer community — a self-selected sample, worth noting) found that 40% of organizations cite security as the number one barrier to scaling AI agents. Forty-five percent struggle with enterprise-ready tooling. Seventy-six percent have vendor lock-in concerns.

Read those numbers together. Organizations cannot control the AI agents they have already deployed, cannot manage identities for non-human actors, and cannot scale their agent infrastructure because the governance layer does not exist yet.

This is not a market that AI vulnerability scanning threatens. This is a market that AI vulnerability scanning cannot address.

The Two-Sided Problem

Here is the insight that most analysis of the selloff misses entirely.

Organizations face two simultaneous security challenges that look different but are structurally identical:

Problem one: securing AI agents. As organizations deploy autonomous agents, they need to govern what those agents can access, what actions they can take, and how to contain damage when they misbehave. Docker’s data says 40% of organizations identify this as their top scaling barrier.

Problem two: using AI to secure code. Anthropic’s Claude Code Security represents this side — using AI reasoning to find vulnerabilities that traditional tools miss.

These are not separate trends. They are the same governance problem viewed from opposite ends. In both cases, the core challenge is the same: how do you establish and enforce boundaries on AI behavior? How do you verify that an AI system — whether it is an agent accessing your production database or a security tool patching your code — is operating within acceptable limits?

The organization that solves this for its AI agents is building the same governance muscle required to deploy AI security tools safely. The organization that cannot govern its agents also cannot trust AI-generated vulnerability patches.

Governance is the common denominator.

Agent Identity: Expansion, Not Extinction

The market narrative frames AI as a threat to the identity security market. The data suggests the opposite.

As AI agents proliferate in enterprise environments, the number of non-human identities requiring management is growing exponentially. Strata’s finding that only 23% of organizations have formal agent identity strategies means 77% have an unsolved problem — a problem that requires exactly the kind of identity governance that companies like Okta and SailPoint provide.

The question is not whether these companies will be disrupted. It is whether they will expand their products to cover agent identity before someone else does.

Google’s $32 billion acquisition of Wiz in February 2026 is instructive here. If AI were genuinely going to eliminate the need for cloud security, Google — a company building the AI — would not spend $32 billion buying a cloud security company. The acquisition is a bet that cloud security is expanding, not contracting. The market signal and the capital allocation signal are pointing in opposite directions. One of them is wrong.

What the Multiples Actually Tell You

Clayton Petty, a partner at Gradient Ventures (Google’s AI-focused venture fund — a relevant disclosure), reported that median cybersecurity EV/NTM revenue multiples collapsed from 7.8x to 5.2x. That is a 33% compression.
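The arithmetic behind that compression, worked through for a hypothetical company with $1B in next-twelve-months revenue:

```python
def ev(ntm_revenue_m: float, multiple: float) -> float:
    """Enterprise value implied by a revenue multiple, in $M."""
    return ntm_revenue_m * multiple


# Same $1B of NTM revenue, before and after the median multiple moved.
before = ev(1_000.0, 7.8)  # old median: 7.8x
after = ev(1_000.0, 5.2)   # new median: 5.2x
compression = (before - after) / before  # fraction of EV erased
```

Revenue is unchanged in this example; roughly a third of enterprise value disappears purely because the market repriced the multiple.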

This number deserves context. A 33% multiple compression means the market believes these companies’ future revenue growth has permanently diminished. For pure detection companies, that may be correct. For governance, identity, and platform companies, it represents a buying opportunity born from categorical confusion — the market failing to distinguish between companies whose core product is being commoditized and companies whose addressable market is expanding.

The error is treating “cybersecurity” as a single market. It is not. Detection, governance, identity, compliance, and incident response are distinct value chains with different dynamics. AI compresses some of them. It expands others. The selloff compressed all of them equally. That equality is the mispricing.

The Uncomfortable Middle

Here is where intellectual honesty requires some discomfort.

Anthropic’s announcement does represent a real shift. Reasoning-based vulnerability detection is a different category of capability than pattern matching. If the tool delivers on even half of its claims, it will materially reduce the value of standalone vulnerability scanning products.

The “500 vulnerabilities undetected for decades” claim, if validated, implies that the entire static analysis industry has been leaving significant value on the table. The human-in-the-loop requirement is a temporary constraint, not a permanent one. As these systems improve, the human review step will become less necessary, not more.

The optimistic case for governance companies — that their addressable market is expanding — depends on those companies actually building the agent identity, AI governance, and non-human identity management products that the market needs. The 92% of organizations whose legacy IAM systems cannot handle AI risks will not wait forever. If incumbents do not expand, new entrants will.

The selloff is overdone. The underlying shift is real.

What This Means for Practitioners

If you are running a security program at an enterprise, three things follow from this analysis.

First, separate your detection investments from your governance investments. The detection layer is being commoditized. Tools that find known vulnerability patterns will face pricing pressure from AI-powered alternatives. Plan for that. Budget accordingly. But do not cut governance, identity, and compliance investments on the assumption that “AI will handle security.”

Second, treat agent identity as a first-class security problem now. If you are deploying AI agents — and the Docker survey suggests most enterprises are or plan to — you need identity management for those agents. Not in a year. Now. The 77% of organizations without formal agent identity strategies are carrying risk they have not quantified.

Third, evaluate AI security tools the way you evaluate any other vendor claim. Anthropic’s “500 vulnerabilities” number is interesting but unverified. Ask for methodology. Ask for false positive rates. Ask for independent validation. The tool may be excellent. The claim is marketing until proven otherwise. Apply the same rigor you would to any security vendor making bold assertions.
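A concrete way to frame the false-positive question when you ask it: of the findings a tool reports, what share survives human triage? A toy calculation with invented numbers, not Anthropic's:

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Share of reported findings that turned out to be real vulnerabilities."""
    reported = true_positives + false_positives
    return true_positives / reported if reported else 0.0


# Illustrative only: suppose a tool reports 500 findings and human review
# confirms 350 of them. These numbers are invented for the example.
p = precision(350, 150)
```

A vendor that cannot give you the two inputs to this one-line calculation has not measured its own tool.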

The Signal in the Noise

Markets are pricing mechanisms. They process information and express it as price changes. The cybersecurity selloff on February 20th expressed a real signal: AI is compressing the detection value chain.

But markets also overshoot. They fail to distinguish between categories. They respond to narratives before they respond to data. And in this case, the narrative — “AI will replace cybersecurity” — is both partially correct and mostly wrong.

The correct version is more boring and more useful: AI will commoditize detection, expand the governance surface, and create entirely new security categories around agent identity and AI behavior management. The companies that thrive will be the ones that recognize governance — not detection — as the defensible layer.

The market will figure this out eventually. The organizations that figure it out first will have a structural advantage: they will invest in the expanding categories while others are cutting the wrong budgets.

Detection finds the bugs. Governance determines whether finding them matters.

If this resonates, let's talk

We help companies implement AI without losing control.
