McKinsey's Two Paths Are One Question They Won't Ask
McKinsey published “Rethinking enterprise architecture for the agentic era” on March 12, 2026. The article presents two paths for modernizing enterprise architecture: incremental integration (adding agentic AI on top of legacy systems) or comprehensive transformation (rebuilding from the ground up). It introduces an orchestration layer called “agentic mesh.” It cites two anonymous case studies. It mentions security zero times.
This is the fifth McKinsey article we have tracked in this series. The fourth analyzed their agent factory prescription. The third documented how three separate McKinsey publications converged on governance without naming it. The pattern continues. The diagnosis keeps improving. The prescription keeps stopping at the same boundary.
The False Binary
McKinsey frames the choice as incremental versus comprehensive. Incremental means adding agentic AI on top of existing systems. Comprehensive means rebuilding the architecture from scratch. The article positions these as a spectrum, then presents evidence that favors the expensive end.
This is an anchoring technique. Present two options, make one sound cautious and the other ambitious, then fill the article with case studies that validate the ambitious one. By the time the reader finishes, “comprehensive transformation” feels like the responsible choice.
But the framing conceals the actual question. The difference between organizations that succeed with incremental integration and those that need comprehensive transformation is not their technology stack. It is their governance maturity.
An organization with strong measurement infrastructure, clear decision accountability, and defined agent boundaries can integrate agentic AI incrementally. It already has the control surfaces. An organization without those things will fail regardless of which path it picks, because neither path provides the governance layer that determines success or failure.
McKinsey’s own data supports this reading. They report that over 80% of organizations see no material bottom-line impact from generative AI. We analyzed this statistic in our fourth article. The failure is operational, not architectural. Organizations deploy AI without knowing how to measure whether it works. Rebuilding the architecture does not fix that problem. It makes it more expensive.
McKinsey Discovers the Agentic Mesh
The article names “agentic mesh” as the key enabler for both paths. We published a detailed analysis of the agentic mesh concept and its six-layer architecture. McKinsey’s usage validates the concept we defined, but their framing is narrower. Where the IEEE reference architecture specifies identity, governance, observability, and communication layers, McKinsey reduces the mesh to an orchestration mechanism. The governance and security layers are absent from their description.
This omission matters because the mesh concept was born from a governance need. Eric Broda coined the term in November 2024 to describe how autonomous agents in an enterprise could discover, communicate, and collaborate under policy constraints. The “under policy constraints” part is the point. Without it, an agentic mesh is just another integration layer with a new name.
McKinsey treats the mesh as plumbing. The architects who defined it treat it as a governance framework that happens to include plumbing. The difference is not semantic. It determines whether the mesh prevents the failure modes or merely connects the components that produce them.
The Case Studies That Cannot Be Checked
The article offers two supporting cases. A European bank deployed atomic agents for corporate credit processing. A Latin American bank spent $600 million, built over 100 AI systems, reduced engineering time by 60%, and saved $250 million.
Both are anonymous. Neither can be independently verified. This is the same evidentiary pattern we identified in our fourth analysis: McKinsey cites results from its own engagements to recommend more engagements. The commercial circularity is structural.
The Latin American bank numbers deserve scrutiny. A $600 million budget yielding $250 million in savings is roughly a 42% return. A 60% reduction in engineering time implies that work which once required 100 engineers now requires 40, or that total engineering workload fell by more than half. These figures, if real, would represent one of the largest AI productivity gains ever claimed in enterprise history. Yet there is no organization name, no methodology, and no independent verification.
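The arithmetic behind these claims is simple enough to restate, which makes the absence of verification more striking. A quick check, using only the figures the article itself reports:

```python
# Figures as reported in the anonymous Latin American bank case study (USD).
budget = 600_000_000
savings = 250_000_000

roi = savings / budget            # 0.4166... -> roughly a 42% return on spend

# A 60% cut in engineering time leaves 40% of the original effort:
# the output of 100 engineers would now come from about 40.
reduction = 0.60
remaining_effort = 1 - reduction  # 0.4
```

Nothing in the numbers is internally inconsistent; the problem is that none of them can be traced to a named organization or a methodology.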
For context: the NBER’s February 2026 survey of roughly 6,000 executives found that 89% reported zero labor productivity impact from AI. Gartner predicts that 40% of agentic AI projects will face cancellation by 2027. Cognition AI found multi-agent architectures “fragile” in production. Anthropic documented 15x token consumption increases in multi-agent setups. A single anonymous case study claiming 60% efficiency gains does not override converging independent evidence that most organizations experience nothing close to that.
Zero Mentions of Security
The article discusses enterprise architecture for autonomous AI agents and mentions security zero times.
This is not a minor omission. Astrix Security’s 2025 research found that 88% of organizations have experienced AI-related security incidents. Non-human identities (API keys, service accounts, agent credentials) outnumber human identities by a ratio of 50 to 1 in most enterprises. As we explored in The Architecture of Agent Trust, agent reliability comes from environmental constraints, not behavioral instructions. An enterprise architecture that introduces autonomous agents without specifying their security boundaries is an architecture that increases attack surface by design.
McKinsey’s own internal experience illustrates the risk. Their internal AI platform, Lilli, was compromised by an autonomous agent in under two hours during a red-team exercise. The firm that prescribes enterprise-wide agent deployment could not secure its own agent platform against a simulated attack.
Singapore’s IMDA launched the first government framework for agentic AI governance in January 2026. It introduced “action-space” as a governance primitive: the bounded set of actions an agent is permitted to take, defined before deployment. This is precisely the kind of architectural specification that an enterprise architecture article should contain. McKinsey’s article contains none.
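The action-space idea can be made concrete as a deny-by-default allowlist checked before any agent action executes. A minimal sketch of what such a specification might look like (the `ActionSpace` class and action names are illustrative, not taken from the IMDA framework):

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ActionSpace:
    """Bounded set of actions an agent may take, fixed before deployment."""
    allowed: frozenset           # actions the agent may ever perform
    needs_approval: frozenset = field(default_factory=frozenset)

    def check(self, action: str) -> str:
        # Deny by default: anything outside the action-space is blocked.
        if action not in self.allowed:
            return "deny"
        return "approve_required" if action in self.needs_approval else "allow"


# Hypothetical corporate-credit agent: reads freely, writes only with approval.
credit_agent = ActionSpace(
    allowed=frozenset({"read_credit_file", "draft_memo", "update_limit"}),
    needs_approval=frozenset({"update_limit"}),
)

credit_agent.check("read_credit_file")  # allowed outright
credit_agent.check("update_limit")      # escalates to a human
credit_agent.check("delete_records")    # denied: outside the action-space
```

The point is architectural: the boundary is defined before the agent runs, not inferred from its behavior afterward. That is the kind of specification the McKinsey article never provides.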
The Circular Governance Argument
The article does mention governance, briefly. It argues that comprehensive transformation simplifies governance because you build a comprehensive governance platform as part of the transformation. Read that again. The argument is circular: comprehensive transformation is better for governance because comprehensive transformation includes governance.
That is a tautology, not a finding. Any approach that includes governance will be better for governance than an approach that does not. The question is what governance means in practice, what it requires, and how it works. The article does not answer any of these.
Our series has tracked this pattern across five articles now. Measurement problems (article one). Design problems (article two). The word “governable” used once without definition (article three). Agent factories prescribed without governance infrastructure (article four). Enterprise architecture redefined without security or governance specification (article five). Each installment acknowledges governance more explicitly. None specifies what governance infrastructure actually requires.
Enterprise Architecture Is Not a Blueprint Anymore
There is one insight in the McKinsey article worth extracting. Multiple independent sources, not just McKinsey, are converging on the idea that enterprise architecture must shift from a static blueprint to a living operating system. Traditional EA produced reference diagrams that described how systems should connect. Agentic EA must produce runtime policies that govern how agents actually behave.
This shift is real and important. But McKinsey frames it as an argument for comprehensive transformation (which McKinsey sells) rather than as an argument for governance infrastructure (which requires ongoing organizational capability, not a consulting engagement).
The consulting model and the governance model remain structurally incompatible, as we noted in our third analysis. Consulting firms sell projects with defined scopes and end dates. Governance does not end. This structural tension explains why McKinsey keeps arriving at the governance boundary and stopping.
What Matters Here
Gartner’s prediction is the most honest assessment of the current moment: 40% of enterprises will adopt AI agents by 2027, and 40% of those projects will be canceled. Simultaneous mass adoption and mass failure. The organizations that survive the filter will be the ones that built governance infrastructure before they scaled.
If you are evaluating enterprise architecture for AI agents, three questions matter more than the incremental-versus-comprehensive choice.
Can you measure agent outcomes independently? Not agent activity. Not deployment count. Actual business outcomes caused by agent decisions. If you cannot isolate the signal, the architecture choice is irrelevant because you will not know whether it worked.
Can you define agent boundaries before deployment? Every agent needs a bounded action-space: what it can access, what it can modify, what requires human approval. If your architecture does not specify these boundaries, you are building attack surface.
Can you audit agent behavior in production? When an agent makes a decision at 3 AM that affects a customer, can you reconstruct why? If the answer is no, you have a liability, not an architecture.
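The audit question has a concrete minimal answer: every agent decision is written to an append-only log that ties together what the agent saw, which policy was in force, and what it did, so the 3 AM decision can be reconstructed later. A sketch of the record shape, with illustrative field names:

```python
import json
import time


def log_decision(agent_id, action, inputs, policy_version, outcome, log):
    """Append one decision record with enough context to reconstruct 'why'."""
    record = {
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "inputs": inputs,          # what the agent saw when it decided
        "policy": policy_version,  # which rules were in force at the time
        "outcome": outcome,
    }
    log.append(json.dumps(record, sort_keys=True))
    return record


audit_log = []
log_decision("credit-agent-7", "update_limit",
             {"customer": "C-1042", "requested": 50_000},
             "policy-v3", "approved", audit_log)

# Reconstruction: filter the log for the agent and replay the records.
why = [json.loads(r) for r in audit_log if "credit-agent-7" in r]
```

In production this would be an immutable store rather than a list, but the requirement is the same: if the inputs, policy version, and outcome were never captured together, no architecture choice can recover them after the fact.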
These are governance questions. They do not depend on whether you choose incremental integration or comprehensive transformation. They depend on whether you build the control infrastructure that makes either path viable.
McKinsey will likely specify this infrastructure in a future article. The trajectory across five publications points there. Until then, the pattern holds: the diagnosis improves, the prescription stays the same. Buy a bigger transformation. Rebuild from the ground up. Trust the case studies you cannot verify. And hope that governance takes care of itself.
It will not.
This analysis synthesizes McKinsey’s “Rethinking Enterprise Architecture for the Agentic Era” (March 2026), the NBER executive survey on AI productivity (February 2026), Gartner’s agentic AI project forecast (2025), IMDA’s agentic AI governance framework (January 2026), and Strata’s AI agent identity research (2025).
Victorino Group builds governance infrastructure for enterprise AI, the layer between agent deployment and agent accountability. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.