When the Vendor Says 'Don't Let AI Do the Synthesis'

Thiago Victorino

On April 23, 2026, incident.io published a piece titled “What Does Using AI for Post-Mortems Actually Mean?”

The byline is the part most readers will skip. Don’t. The author works at incident.io. The company sells an incident-response and post-mortem platform with AI integrated into the workflow. They have a product to sell, a market to expand, and every commercial reason in the world to claim AI does more.

They didn’t.

Instead, the piece draws a deliberate line down the middle of a post-mortem and labels each side. On one side: what AI should do. On the other: what AI should not do, even though the vendor could ship a feature that does it. That line, drawn by the vendor against the grain of its own pricing page, is the most useful governance artifact published this month.

The Line, Where It Is

incident.io’s framing splits post-mortems into two phases.

Compression is the first phase. Pulling timelines from logs and chat threads. Drafting first-pass narratives. Surfacing gaps in the evidence. Detecting patterns across past incidents the on-call engineer never lived through. This is the phase where AI saves hours and produces work that humans would have produced anyway, only slower. The vendor’s position: ship this, lean into it, do it well.

Synthesis is the second phase. Reasoning about causal relationships. Distinguishing the trigger from the underlying condition. Prioritizing follow-ups against finite engineering capacity. Identifying the organizational issues hiding under the technical ones. Defending conclusions when a senior engineer pushes back in review. This is the phase the vendor explicitly does not want AI replacing.

The line between them is not technical. The compression work and the synthesis work both run on the same models. The line is about what an organization is willing to outsource and what it is not. The vendor is telling its own customers where to stop.

The Sentence Worth Quoting Back

The piece contains a passage that platform leaders should paste into the next vendor evaluation deck:

“The most dangerous AI-assisted post-mortem isn’t the one that’s obviously wrong. It’s the one that sounds exactly right, but was produced without anyone doing the real thinking.”

Read it twice. The argument is not that AI produces bad post-mortems. The argument is that AI produces plausible post-mortems, and plausibility is the dangerous shape. A wrong post-mortem gets caught in review. A polished, well-structured, internally consistent post-mortem that nobody actually reasoned through gets filed, distributed, and used to set the next quarter’s roadmap. The damage compounds because the artifact looks like the work was done.

This is the failure mode that AI specifically introduces. Pre-AI, a polished post-mortem was evidence of effort because polish was expensive. Post-AI, polish is free. The proxy that humans used for “someone thought about this” no longer holds. The vendor is naming the proxy collapse and asking customers to replace it with something else.

Why Vendor Self-Restraint Is the Strongest Signal

Procurement teams spend a lot of energy reading vendor marketing copy for what it claims. They spend almost no energy reading it for what it deliberately does not claim. The second is more informative.

Most AI vendors selling into a category will push the boundary of what their product does as far as the buyer will tolerate. Demos showing the AI handling the full task end-to-end. Case studies framed as autonomous outcomes. Pricing tiers that scale with how much human work the AI replaces. The commercial gravity points one direction.

When a vendor in that environment publishes a piece saying “here is what our category of AI should not do,” they are paying a real cost. They are narrowing their own addressable market in writing. They are giving competitors a quote to use against them. They are telling their own sales team which deals not to chase. That cost is precisely what makes the claim credible. A vendor who cannot articulate what their AI should not do is a vendor who has not thought about it, or has thought about it and chosen not to say.

We covered the SRE-grade discipline arriving at AI labs in Postmortem Culture Just Reached AI. incident.io’s piece sits one layer up. It is not about how a model lab runs its own post-mortems. It is about how a vendor selling post-mortem AI bounds the product before the buyer has to.

The Practical Audit

The procurement-grade move on this artifact is a one-page audit anyone can run in an afternoon.

Pull incident.io’s piece. Pull the marketing pages of every AI vendor in your stack that touches a synthesis-shaped task — post-mortems, root cause analysis, security incident triage, executive summary generation, performance review drafting, customer escalation analysis. For each vendor, ask three questions.

First, does the vendor publicly distinguish compression from synthesis in their own work? Or does the marketing copy collapse the two and imply the AI does both?

Second, where the vendor does claim synthesis, what review structure do they recommend? A vendor confident in synthesis-quality output names the human review step explicitly. A vendor that omits the review step is selling autonomy, not assistance.

Third, what does the vendor’s own product team say about its limits, in writing, on a domain they own? incident.io has now answered this for post-mortems. The vendors who haven’t owe their customers an answer.
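The three questions above can be captured as a simple checklist you fill in per vendor. The sketch below is a minimal illustration of that structure; the vendor names and answers are hypothetical placeholders, not real evaluations, and the flag wording is only one way to phrase the findings.

```python
from dataclasses import dataclass

@dataclass
class VendorAudit:
    """One row of the audit, answered from the vendor's public
    marketing and documentation pages, not from sales calls."""
    name: str
    distinguishes_compression_from_synthesis: bool  # question 1
    names_human_review_step: bool                   # question 2
    publishes_own_limits: bool                      # question 3

    def governance_flags(self) -> list[str]:
        """Return a flag for each question the vendor fails."""
        flags = []
        if not self.distinguishes_compression_from_synthesis:
            flags.append("marketing collapses compression and synthesis")
        if not self.names_human_review_step:
            flags.append("selling autonomy, not assistance")
        if not self.publishes_own_limits:
            flags.append("no published scope limit")
        return flags

# Hypothetical example rows, for illustration only.
audits = [
    VendorAudit("vendor-a", True, True, True),
    VendorAudit("vendor-b", False, False, True),
]
for audit in audits:
    flags = audit.governance_flags()
    print(f"{audit.name}: {'clean' if not flags else '; '.join(flags)}")
```

A vendor with an empty flag list has, in writing, done what incident.io did; each flag is a question to put to the account team before renewal.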

The gap between incident.io’s published self-restraint and any other vendor’s silence on the same question is the governance signal worth pricing into the next renewal. Not as a marketing exercise. As a procurement input. The vendor who tells you what their AI shouldn’t do is the vendor who has actually run their AI in production long enough to know.

This connects to a pattern we have been tracking in the AI SRE reverification loop and agent monitoring at scale: the maturity signal in AI is not what the model can do, it is the operator’s clarity on what the model should not be allowed to do unsupervised. Compression work scales. Synthesis work scales only if a human stays in the loop. A vendor saying that out loud is rarer than it should be.

Do This Now

Read the incident.io piece end-to-end. Then pull the marketing page of the AI vendor whose product most overlaps with synthesis work in your stack. Read them side by side. The gap between what incident.io says AI shouldn’t do and what your other vendor implies AI does is your governance audit, written for you, in the vendor’s own words.

If the gap is wide, you are pricing in autonomy you did not ask for. If the gap is narrow or the other vendor has published the same scope limit, you have found a vendor who knows their product. Either way, the document you needed already exists. It just needed someone in the category willing to publish against their own incentives.

incident.io did. Use it.



Victorino Group helps platform leaders price vendor self-restraint into AI procurement decisions. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.
