Governance as Advantage

Agentic AI: When Machines Don't Just Chat, But Act

Thiago Victorino

Sometime in the next 12 to 24 months, we might stop talking about artificial intelligence — not because it will cease to exist, but because it will simply be a capability we expect from machines. Like electricity or internet connectivity.

Michael Chui, McKinsey Senior Fellow, made this observation in November 2025 while discussing the latest evolution in AI: agentic systems. The prediction captures a subtle but important shift in what we expect from technology.

Two years ago, we were impressed when a language model summarized documents or wrote code. Today, the novelty is different: machines that don’t just respond, but act.

The Missing Layer

The difference between traditional AI and agentic AI isn’t in the models or processing power. It’s in a layer above: the capacity to take action.

Traditional machine learning uses data to build models that make predictions. You feed the system with characteristics of sold houses — location, bedrooms, property age — and it predicts prices. It’s useful, but passive.
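The passive, predictive pattern described above can be sketched in a few lines. This is a toy linear model with invented weights, purely for illustration of the shape of the task: features in, estimate out, nothing executed.

```python
# Toy sketch of "passive" predictive ML: house features in, price out.
# The base price and weights are invented for illustration, not a real model.

def predict_price(bedrooms: int, sqm: float, age_years: int) -> float:
    """Hypothetical linear model: price = base + weighted features."""
    base = 50_000.0
    return base + 15_000 * bedrooms + 1_200 * sqm - 500 * age_years

estimate = predict_price(bedrooms=3, sqm=120, age_years=10)
print(f"Estimated price: ${estimate:,.0f}")
```

The system answers a question and stops. It never offers to do anything with the answer.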

Agentic AI goes further. Instead of just answering what a house would be worth, the system offers: “Want me to search for similar properties online?” And then it executes the search, summarizes results, and presents comparable options.

This difference seems small. It isn’t. It’s the difference between a calculator and an assistant.

Agentic systems can gather information from multiple sources, execute real-world transactions, complete multi-step processes, trigger workflows automatically, and collaborate with other agents. What previously required constant human supervision now happens independently.
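The action layer can be sketched as a loop: a planner chooses a tool, a runtime executes it, and the results feed the response. In this sketch the planner is hard-coded rather than an LLM, and the search tool is a stub; all names are invented for illustration.

```python
# Minimal sketch of the agentic pattern: plan -> execute tools -> collect results.
# A real agent would ask an LLM for the plan; here it is hard-coded.

def search_listings(query: str) -> list[str]:
    # Stand-in for a real web-search tool.
    return [f"{query}: listing {i}" for i in (1, 2)]

TOOLS = {"search_listings": search_listings}

def run_agent(task: str) -> list[str]:
    plan = [("search_listings", task)]  # stubbed "reasoning" step
    results: list[str] = []
    for tool_name, arg in plan:
        results.extend(TOOLS[tool_name](arg))
    return results

print(run_agent("3-bedroom houses near downtown"))
```

The point of the structure is that the model's output drives execution, not just conversation.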

The Hype Is Real. The Impact, Not Yet

New technology always generates excitement, especially when it does something never seen before. With generative AI, people were thrilled to see it could summarize documents. With agentic AI, the promise of automatically sending emails, checking inboxes, and clearing calendars generated a new wave of excitement.

Dave Kerr, McKinsey Partner, observes: “A lot of people are saying ‘this is going to change the world,’ but we’re still not seeing that many real-world scenarios, outside of certain sectors, where things have changed a lot.”

The hype exists for valid reasons. Agentic AI corrects a fundamental limitation of LLMs: they weren’t originally designed to operate in the real world. Agents solve this problem. But capturing real value requires more than simply signing a license and saying “go operate.”

The warning is direct: many companies are building agents for the sake of building agents, forgetting tested lessons about user experience and product strategy.

When Not to Use Agents

A new technology invites the classic hammer problem: when you have one, everything starts to look like a nail.

If you need a deterministic outcome — something that works exactly the same way every time — a rule-based system with “if-then” statements might be more suitable.

The characteristic of modern AI is that it’s non-deterministic. Sometimes it says one thing, sometimes another. That’s great for conversation, but in business situations it can be critical — for compliance reasons, for example — to have exactly the same result every time.

As Kerr puts it: “Using an LLM for credit scoring would be like using a nuclear missile to kill a fly.”

Spreadsheets and business rules have worked for decades for deterministic calculations. If the problem has a clear “if-then-else” pattern, you don’t need agents at all.
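The point about determinism can be made concrete. A sketch of a rule-based credit decision, with illustrative thresholds, returns the identical answer for the identical input on every run, with no sampling and no temperature:

```python
# If the outcome must be identical every run (compliance, auditability),
# a plain rule beats a model. Thresholds are illustrative only.

def credit_decision(score: int) -> str:
    if score >= 700:
        return "approve"
    elif score >= 600:
        return "manual review"
    else:
        return "decline"

# Deterministic: the same input always yields the same output.
assert credit_decision(720) == credit_decision(720) == "approve"
```

An LLM-backed agent gives no such guarantee, which is exactly why it is the wrong tool here.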

Agents are the right tool for complex tasks with high variability, judgment, and contextual interpretation. For everything else, simplicity remains a virtue.

Where It Actually Works

Two use cases show where agentic AI really delivers value:

Customer service is ideal because customers send a wide variety of questions in natural language, from the simplest to the most complex. Agents connect to proprietary knowledge bases, execute actions like shipping products or initiating returns, and support escalation tiers: level 1 handled by the agent, level 2 passed to a human.

The legal field has shown concrete results. Agents trained to understand how lawyers work can replicate their workflows. In documented cases, workflow time fell by a factor of four. That allowed firms to expand access to services at lower prices and made viable the higher-volume, lower-margin work that wasn't economically feasible before.

The key to success isn’t technology alone. It’s using the right engineering techniques and human-centered design to achieve massive productivity gains.

Agentic Mesh: Agents Working Together

An agentic mesh is an architectural pattern that maximizes reuse of foundational capabilities.

Just as in any organization we have people specialized in different things, we can imagine AI agents that specialize in different tasks: planning, customer interaction, logistics, data analysis.

The core concept is simple: you need some kind of technological substrate so all these agents can coordinate and talk to each other.
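That substrate can be sketched as a shared registry where specialized agents publish capabilities and any team can discover and call them. The class, capability names, and handlers here are invented for illustration; real meshes use message buses or agent-to-agent protocols.

```python
# Toy "substrate" for a mesh: agents register capabilities in one place,
# so other teams can discover and reuse them instead of rebuilding.

class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, object] = {}

    def register(self, capability: str, handler) -> None:
        self._agents[capability] = handler

    def call(self, capability: str, payload):
        if capability not in self._agents:
            raise LookupError(f"no agent offers '{capability}'")
        return self._agents[capability](payload)

mesh = AgentRegistry()
mesh.register("logistics.quote", lambda order: f"shipping quote for {order}")
mesh.register("data.summarize", lambda text: text[:20] + "...")

# Any business line can now discover and reuse what's already built:
print(mesh.call("logistics.quote", "order-42"))
```

The reuse argument falls out of the structure: one registration, many callers, no silo-specific reimplementation.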

The benefits are clear. Multiple business lines share connections to common data sources. Different groups can discover and use what’s already been built. And the solution prevents each silo from building its own unique implementation, reducing technical debt.

The mesh isn’t just technical convenience. It’s strategic architecture that enables scale.

How Work Will Change

As we’ve seen with other technological innovations, there may be short-term concerns about jobs. But overall, agentic AI will increase economic productivity.

Humans will remain productively employed, but what we do day-to-day might look quite different.

Kerr observes changes already visible in software development: “It seems like everyone is now a tech lead. You’re not just writing code individually; you’re reviewing what’s produced, understanding how it fits into the system, and ensuring it meets standards.”

Developers accustomed to writing code now need to manage AI-assisted development. Simple tasks like initial code scaffolding or research analysis can be done quickly and at low cost.

The mindset shift is significant: from the excitement of "I can write so much code" to the maturity of "every line of code is a responsibility."

The new reality: we want as little code as possible, and it needs to fit into an overall architecture. It’s not just QA mode — it’s tech lead mode.

Managing the Risks

LLMs are non-deterministic and can produce varied outputs, including confidently stated claims that simply aren't true. Agents can come across as rude or insufficiently empathetic when interacting with customers. And when different agents interact in a mesh, they can fall into loops that never resolve.

The recommended philosophy is straightforward: “Go slow to go fast.” We want to use these technologies — we know they’ll be an enormous value lever. But the right risk controls need to be in place.

Technical guardrails include continuous output monitoring, rule-based systems that flag unwanted content, blocking competitor names or inappropriate language, and using AI as its own guardrail, since it's easier for a model to spot errors than to avoid making them.
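The rule-based layer of those guardrails is the simplest to sketch: scan agent output for blocked terms before it reaches a customer. The blocklist below is illustrative only; production filters also handle obfuscation, context, and multilingual text.

```python
# Minimal rule-based output guardrail: block responses containing
# disallowed terms. The blocklist is purely illustrative.

BLOCKED_TERMS = {"acme corp", "damn"}  # e.g. competitor names, profanity

def passes_guardrail(output: str) -> bool:
    lowered = output.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

assert passes_guardrail("Your return label is attached.")
assert not passes_guardrail("Try Acme Corp instead!")
```

A deterministic filter like this is cheap and auditable, which is why it sits in front of the less predictable model-as-judge layer.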

There’s a paradox here. You’re using a system you don’t fully trust to evaluate another system you don’t fully trust. It’s essential as a first line of defense, but it’s not sufficient. At some point you need human evaluation.

Effective organizational practices include creating cross-functional risk committees that bring together risk, legal, and technology teams. Working with use case teams from day one. Thinking about guardrails before implementing. And carefully choosing vendors and models.

What Comes Next

Agentic AI represents a real evolution in machine capability. It’s not just hype, but it’s also not magic. It’s technology that enables systems to execute complex tasks independently — when applied to the right problems, with the right architecture and the right controls.

The question isn’t whether your organization will adopt agentic AI. It’s when, how, and for what.

Companies that can balance technological ambition with governance rigor will have significant advantage. Not because they implemented agents first, but because they implemented agents well.

And maybe, in 12 or 24 months, you won’t even think of it as “agentic AI.” It’ll just be how things work.


Victorino Group implements agentic systems with integrated governance for companies that can’t afford to get it wrong. If you want to explore how to apply this technology to your processes without the risks, let’s talk.

If this resonates, let's talk

We help companies implement AI without losing control.

Schedule a Conversation