Datadog Just Turned Governance Into a Product Roadmap

Thiago Victorino

One release is a feature. Two in a week is a roadmap.

In April 2026, Datadog shipped two AI-governance products within days of each other. The first is the Code Security MCP, a server that plugs into the developer’s IDE and scans AI-generated code as it lands — static application security testing (SAST), Software Composition Analysis, secrets detection, and Infrastructure-as-Code scanning consolidated behind a single local MCP server with unified auth. The second is an open-source, AI-native SAST tool that uses LLMs instead of static rules to cut false positives.

Read them separately and each looks like a feature announcement. Read them together, in the same week, from the same vendor, and something else is going on: a large observability company has decided that governance of AI-generated code is a product category worth owning, and it is racing to plant the flag at the surface where developers actually work.

That surface is not the audit meeting. It is the IDE.

The MCP, Read as a Product Bet

The Code Security MCP does a very specific thing that quarterly audit vendors cannot do. It “downloads scanners on demand at the start of each session,” so the tools running against your code are always current — no version pinning, no patch cycles, no security-tooling drift. The developer opens the IDE; the MCP brings the freshest scanners with it; the session runs against rules that may not have existed last Tuesday.
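The on-demand model is the interesting design choice. A minimal sketch of the idea — the names below (`fetch_manifest`, `Scanner`, `start_session`) are illustrative stand-ins, not Datadog’s actual API — is that scanner versions are resolved at session start rather than at install time:

```python
from dataclasses import dataclass

@dataclass
class Scanner:
    name: str
    version: str

def fetch_manifest() -> dict:
    # Stand-in for a network call to the vendor's release index; in the
    # real product, this lookup is where "always current" comes from.
    return {"sast": "2.4.1", "sca": "1.9.0", "secrets": "3.0.2", "iac": "0.8.5"}

def start_session() -> list:
    # Re-resolve versions on every session: a rule published yesterday
    # runs today, with no local upgrade step and no pinned version to drift.
    return [Scanner(name, version) for name, version in fetch_manifest().items()]

scanners = start_session()
```

The contrast is with the pinned-version model, where the scanner you run is the one you installed six months ago and the patch cycle is a human process that can be skipped.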

What it blocks is the boring list that governance leaders have been reciting for a decade — SQL injection and similar injection flaws, vulnerable third-party imports, hardcoded credentials, IaC misconfigurations. The novelty is not the categories. It is the location. Those checks used to fire at commit, or pull request, or nightly pipeline, or quarterly audit, or (the honest answer for most shops) after a breach. They now fire inside the turn where the AI agent is writing the line.
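To make the SQL-injection case concrete, here is the kind of pattern an in-session scanner flags and the fix it points toward — a generic illustration of the vulnerability class, not Datadog’s detection logic:

```python
import sqlite3

def find_user_vulnerable(conn, username: str):
    # The pattern a scanner flags: user input concatenated straight into
    # SQL, so the input can rewrite the query itself.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchone()

def find_user_safe(conn, username: str):
    # The fix: a parameterized query. The driver treats the value as
    # data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "' OR '1'='1"
leaked = find_user_vulnerable(conn, payload)   # payload rewrites the WHERE clause
blocked = find_user_safe(conn, payload)        # payload matches no user
```

The point of moving the check into the session is that the vulnerable version never survives long enough to be committed, reviewed, or copied by the next AI completion.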

The governance implication is structural. If the security scanner lives where the code is being typed — and if it refuses to let the vulnerable pattern survive the session — then the window in which a bad pattern can propagate through a codebase shrinks from weeks to seconds. The audit does not catch it later because there is no later. The audit happens at the IDE, or it does not happen at all.

This is the same architectural move we wrote about in the advertising governance frontier: brand-safety checks moved from post-hoc content review to pre-flight gates inside the ad platforms themselves. Governance migrates toward the surface where the work is generated, because that is the only place where the cost of a bad output is still cheap to fix.

The Open-Source SAST, Read as a Category Claim

The second release is easy to misread as a research project. It is not. Datadog open-sourced an AI-native SAST tool that uses LLMs to evaluate code patterns and, according to their own blog, achieves “significantly fewer false positives” than traditional rule-based scanners.

False positives are the reason security tooling gets ignored. Every false alarm trains developers to dismiss the next alert, and the next alert is the one that mattered. The rule-based SAST industry has been fighting false positives with increasingly elaborate rule engines for twenty years. Datadog’s pitch is that the LLM reads the code the way a senior reviewer does — with context — and therefore does not trip on the patterns that look dangerous but are not.
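A toy illustration of why rule-based scanners cry wolf: a regex secrets rule has no way to know that a documented placeholder is not a credential, while a reviewer with context dismisses it instantly. The rule below is a deliberate caricature, not any vendor’s actual detector:

```python
import re

# Caricature of a rule-based secrets detector: flag any KEY-like
# assignment to a long alphanumeric string literal, regardless of context.
SECRET_RULE = re.compile(
    r'(?i)(?:api|secret|token)_?key\s*=\s*["\'][A-Za-z0-9]{16,}["\']'
)

snippet = '''
# From the project README -- a documented placeholder, not a credential:
API_KEY = "EXAMPLEKEY00000000000000"

API_KEY = os.environ["API_KEY"]  # the real code reads from the environment
'''

# The rule fires once -- on the placeholder that any human reviewer (or a
# context-aware model) would recognize as harmless documentation.
findings = SECRET_RULE.findall(snippet)
```

Datadog’s wager is that an LLM reading the surrounding comment and the environment-variable pattern reaches the reviewer’s conclusion, not the regex’s.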

Whether that claim holds up in adversarial conditions is a separate question. The move that matters is releasing it open source, the same week as the commercial MCP. The open-source tool is the category primer. It plants the idea — AI should be scanning AI — in the head of every security engineer who reads the repo, and it does so under Datadog’s name. Six months from now, when those same engineers are asked to choose a paid solution, the shape of the problem will already feel like Datadog’s shape.

This is how product categories get built. Not with one launch, but with a free version and a paid version shipped close enough together that the market reads them as a thesis.

Governance as Product Surface

The thesis is that governance is not an audit function anymore. It is a product surface, and the surface is the IDE.

We have argued this in other domains — marketing agents need runtime governance, not policy PDFs; advertising brand safety is moving into the ad platforms themselves. Datadog’s April releases are the same thesis in the security domain: governance checks that used to live in tickets, dashboards, and quarterly reviews are being pulled into the turn where the work happens. The tool that stops a SQL injection at commit time is a governance tool. The tool that stops it while the developer is still typing is a product feature.

The two are not the same thing, even though they catch the same bug. The governance tool lives in a different org, runs on a different cadence, and is funded out of a different budget. The product feature lives in the IDE, runs every session, and is paid for by the platform team that ships to developers. Datadog is betting that the second one is the bigger market, and that the first one is a shrinking remnant that will eventually be absorbed.

If that bet is right — and two releases in a week is a signal that Datadog believes it strongly — then the question for platform leaders is no longer “do we have a governance program?” It is: where in our stack does governance become a product surface, and who owns the roadmap? If the answer is still “the security team files a ticket,” the answer is obsolete. If the answer is “the IDE refuses to let it ship,” there is still work to do on who builds that refusal and how it gets paid for.

The One-Sentence Read

Governance is not moving to the cloud. It is moving to the cursor.

The Datadog releases matter because they make that move visible in a domain — application security — that has spent twenty years trying to do the opposite, pushing checks further downstream so that developers would not have to think about them. The new direction is the reverse. Push the check as close as possible to the moment the code is written. Make it a product. Give it a roadmap. Ship two of them in the same week so the market understands it is not a feature.

Teams still treating AI governance as a compliance artifact — a PDF, a quarterly review, a training module — are not wrong about the risk. They are wrong about the venue. The venue is the IDE, and the vendors who ship to the IDE are going to own the governance story whether the governance teams agree or not.


The right response is not to compete with Datadog on scanner features. It is to ask, for every AI-adjacent surface in your own stack, a single question: where does the governance check belong, and is it close enough to the moment of generation that a bad output never gets a chance to propagate? If that distance is measured in hours or days, the MCP model has already passed you. If it is measured in turns, you are in the new category.

Sources: Datadog, “Introducing the Datadog Code Security MCP,” April 2026 (https://www.datadoghq.com/blog/introducing-datadog-code-security-mcp/); Datadog, “Introducing Our Open Source AI-Native SAST,” April 2026 (https://www.datadoghq.com/blog/open-source-ai-sast/). Two releases, one week, one thesis: governance belongs where the work happens, and the work happens in the IDE.

Victorino Group helps platform teams turn governance from audit overhead into product surface. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.
