When Your Customer Is an Algorithm: The Governance Problem in Agentic Commerce
Twenty-three percent of Americans bought something through AI in the past month. ChatGPT e-commerce traffic grew 693% during the 2025 holiday season. Morgan Stanley projects $190B to $385B in agentic commerce by 2030.
These numbers describe a market forming. They do not describe who controls the product narrative when the customer is a machine.
That is the question nobody is answering yet.
The protocol fragmentation problem
On March 12, Azoma unveiled the Agentic Merchant Protocol, a framework for brands to specify how AI agents should represent their products. The premise is reasonable: if AI agents are going to sell your products, you should have some say in how they describe them.
In January, we analyzed the protocol layer when Google, Shopify, and Walmart launched the Universal Commerce Protocol. UCP solves the N-by-N integration problem between merchants and AI surfaces. OpenAI followed with the Agent Commerce Protocol, a platform-mediated approach. Now Azoma adds a third layer: brand-centric governance of product representation.
Three protocols in three months. Each solving a different slice of the same problem. None solving the governance question underneath.
UCP handles plumbing: how agents discover products, process payments, manage orders. ACP handles access: which agents can transact through which platforms. AMP handles representation: how products should be described, what claims are authorized, what comparisons are permitted.
What none of them handle is accountability. When an AI agent misrepresents your product to a consumer, which protocol failed? When an agent recommends a competitor based on data you cannot see or audit, where does your governance framework intervene? When three protocols produce conflicting instructions about how to represent the same product, which one wins?
From human attention to machine attention
For two decades, digital commerce operated on a simple premise: optimize for human attention. SEO, paid search, product photography, conversion rate optimization. All designed for a human brain scanning a screen.
Agentic commerce inverts this. The entity evaluating your product listing is not a person. It is a language model parsing structured data, weighing attributes against a user’s stated preferences, and making purchase decisions (or strong recommendations) before a human ever sees the product.
This is not a gradual shift. It is a category change in who your buyer is. Half of LLM users already research and compare prices through AI. Adobe measured the 693% traffic growth not as a trend line but as a step function during a single holiday season.
The implications are structural. Human buyers can be influenced by brand storytelling, visual design, emotional resonance. Machine buyers parse structured attributes, evaluate factual claims, and compare on dimensions the brand may not control. The entire apparatus of brand marketing was built for an audience that is being replaced by an intermediary with different evaluation criteria.
Terakeet coined the term “evidentiary asymmetry” to describe what happens next: brands cannot see how AI agents represent their products to consumers. The agent receives your product data, combines it with competitive data, applies its own reasoning, and presents a recommendation. You never see the recommendation. You never know why your product was ranked second, or fifth, or excluded entirely.
As we found in our analysis of LLM citation patterns, the way language models select what to surface is predictable, biased, and exploitable. The same dynamics apply when an LLM selects which product to recommend. Position in the data, entity density, definitional clarity: these structural factors influence machine attention the same way they influence citation behavior. The difference is that in commerce, the stakes are revenue, not just visibility.
The governance layer that does not exist
CB Insights maps 90+ companies in the agentic commerce space. Most are optimizing: better product data feeds, improved AI-readable descriptions, analytics for agent-driven traffic. Optimization is table stakes. Governance is the missing layer.
Consider what happens when you optimize without governing.
A brand publishes product data through AMP, specifying authorized claims, approved comparisons, and representation rules. The data flows to an AI agent through UCP. The agent, following its own model’s reasoning, combines that data with reviews, competitor claims, and user preferences. It surfaces a recommendation that contradicts the brand’s approved messaging. The brand never sees this. The consumer never questions it.
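The failure mode above can be sketched as a simple audit check. This is a toy illustration, not a real AMP implementation: the rule fields, the agent-output shape, and the detection logic are all invented for the example, and in practice the brand never receives the agent's text to audit in the first place.

```python
import re

# Hypothetical AMP-style representation rules for one product.
# Field names are illustrative; no published AMP schema is assumed.
AMP_RULES = {
    "sku": "TEA-001",
    "approved_caffeine_mg": 85,          # the only figure agents may cite
    "banned_comparisons": {"energy drinks"},
}

def audit_agent_output(text: str, rules: dict) -> list[str]:
    """Flag statements the brand never authorized (a toy detector)."""
    violations = []
    lowered = text.lower()
    # Check for comparisons the brand has explicitly disallowed.
    for banned in rules["banned_comparisons"]:
        if banned in lowered:
            violations.append(f"unauthorized comparison: {banned}")
    # Check any cited milligram figure against the approved value.
    for mg in re.findall(r"(\d+)\s*mg", lowered):
        if int(mg) != rules["approved_caffeine_mg"]:
            violations.append(f"unapproved figure: {mg} mg")
    return violations

# A recommendation the brand never sees in practice; evidentiary
# asymmetry means this audit input is exactly what is unavailable.
agent_text = "Better value than most energy drinks, with 200 mg of caffeine."
print(audit_agent_output(agent_text, AMP_RULES))
# -> ['unauthorized comparison: energy drinks', 'unapproved figure: 200 mg']
```

The check itself is trivial. The hard part is the missing input: no protocol today delivers agent-mediated conversations back to the brand for auditing.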
PwC’s 2025 data shows 64% of consumers require at least one safeguard before authorizing AI purchases. HBR identified five specific scenarios where agentic commerce breaks trust: misunderstanding user intent, unauthorized actions, data sensitivity violations, brand misrepresentation, and failed recovery from errors. These are not edge cases. They are the predictable failure modes of systems operating without governance frameworks.
The optimization companies (and Azoma is one of them, whatever the protocol language suggests) are selling the equivalent of SEO for machine attention. Some call it Agentic Commerce Optimization. The framing is borrowed from search engine optimization, and it inherits the same limitation: optimizing for an intermediary’s behavior without governing the intermediary itself.
What Azoma gets right and what it obscures
Azoma’s core insight is correct: brands need machine-readable specifications for how their products should be represented. Product data syndication has been a solved problem for years (Salsify has raised over $500M doing exactly this; Syndigo and Akeneo compete in the same space). What changes in an agentic context is that the data consumer is autonomous, not just a catalog system displaying what you provide.
AMP adds representation rules: approved claims, authorized comparisons, compliance constraints. This is useful. It is also not technically novel. It is product data syndication with governance metadata.
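"Syndication with governance metadata" has a concrete shape. A minimal sketch, assuming a hypothetical record format (all field names are illustrative and not drawn from any published AMP, Salsify, or Syndigo schema):

```python
from dataclasses import dataclass, field

# A plain syndication record with a governance envelope bolted on --
# the "not technically novel" structure described above. Every field
# name here is an assumption for illustration.
@dataclass
class ProductRecord:
    sku: str
    title: str
    attributes: dict
    # Governance metadata: what a brand-centric protocol layers on top.
    approved_claims: list = field(default_factory=list)
    banned_comparisons: list = field(default_factory=list)
    rules_version: str = "1.0.0"   # version-controlled, auditable

record = ProductRecord(
    sku="TEA-001",
    title="Green Tea Energy 330ml",
    attributes={"caffeine_mg": 85, "organic": True},
    approved_claims=["organic certified", "natural caffeine"],
    banned_comparisons=["energy drinks"],
)
print(record.rules_version)
```

The `rules_version` field is the part that matters for governance: without versioning, a brand cannot prove which rules were in force when an agent produced a given recommendation.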
The performance claims require scrutiny. All of Azoma’s data is self-reported. One cited case study, PerfectTed’s 532% growth, coincided with a Dragons’ Den investment and retail expansion. Attributing that growth to AI optimization is misleading at best.
The protocol also faces an existential dependency. AMP produces data that AI agents must choose to consume. If OpenAI’s ACP and Google’s UCP do not integrate AMP data into their agent workflows, the protocol is a specification that nobody reads. This is not a flaw in the design. It is a market reality: brand-centric governance only works if the platforms with the users adopt it.
RegGuard, Azoma’s trademarked compliance feature, has no disclosed technical details. A trademark is not a technology.
The real competitive question
The firms that will win in agentic commerce are not the ones with the best-optimized product feeds. They are the ones with governance frameworks that answer four questions:
What are agents allowed to say about us? Not just product attributes. Approved claims, comparison boundaries, pricing rules, promotional constraints. Machine-readable, version-controlled, auditable.
How do we detect misrepresentation? If an AI agent tells a consumer something incorrect about your product, how long until you know? Today, most brands have zero visibility into agent-mediated conversations about their products. This is the evidentiary asymmetry problem, and no protocol currently solves it.
Which protocols do we support, and how do conflicts resolve? Three protocols means three potential sources of conflicting instructions. A governance framework must define precedence: what happens when UCP checkout data contradicts AMP representation rules? Who arbitrates?
Who is accountable when the agent gets it wrong? The AI agent? The platform? The brand that provided the data? The protocol that transmitted it? Accountability in agentic commerce is currently undefined. This will change when the first major misrepresentation lawsuit lands.
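The precedence question, at least, can be made mechanical. A minimal sketch, assuming hypothetical per-protocol instruction payloads: the protocol names are real, but the payload shapes and the precedence ordering are invented for illustration, and none of the three protocols actually defines a cross-protocol arbitration rule today.

```python
# Precedence is a brand policy decision, not something the protocols
# define; this ordering (brand rules outrank platform defaults) is
# one possible choice, shown for illustration.
PRECEDENCE = ["AMP", "UCP", "ACP"]

def resolve(instructions: dict[str, dict]) -> dict:
    """Merge per-protocol instructions, higher-precedence values winning."""
    merged = {}
    # Apply lowest-precedence first so higher-precedence overwrites.
    for proto in reversed(PRECEDENCE):
        merged.update(instructions.get(proto, {}))
    return merged

conflicting = {
    "UCP": {"price_display": "strike-through", "currency": "USD"},
    "AMP": {"price_display": "plain"},  # brand forbids strike-through pricing
}
print(resolve(conflicting))
# -> {'price_display': 'plain', 'currency': 'USD'}
```

Ten lines of merge logic; the governance work is deciding the ordering, writing it down, and getting every integration partner to honor it.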
The pattern we have seen before
This fragmentation is not unique to commerce. Enterprise AI tooling went through the same cycle: competing standards, optimization before governance, accountability distributed so widely that nobody owns it.
The organizations that handled that transition well were the ones that built governance frameworks before choosing tools. They defined what agents could do, what data they could access, what decisions required human approval. Then they selected protocols and platforms that fit within those boundaries.
Agentic commerce is following the same trajectory. The optimization layer is forming fast. The protocol layer is fragmenting. The governance layer is empty.
The $385B question is not which protocol wins. It is whether your organization has a governance framework that works regardless of which protocol wins. Protocols change. Standards consolidate. The governance discipline you build now is the asset that compounds.
Build the governance first. Then optimize.
This analysis synthesizes VentureBeat’s coverage of Azoma’s Agentic Merchant Protocol (March 2026), Morgan Stanley Research on agentic commerce (December 2025), HBR’s five trust-breaking scenarios in agentic commerce (February 2026), and PwC’s consumer trust data on AI purchasing (2025).
Victorino Group helps organizations build governance frameworks for AI systems, including the agentic commerce layer. Let’s talk.