- Advertising Discovers Governance. Two Years Late.
A roundtable of advertising industry executives sat down with Campaign Asia last month and used a word that would have been foreign to them two years ago: governance.
Not “brand safety.” Not “content moderation.” Governance. The deliberate construction of rules, boundaries, and accountability structures for AI systems that make decisions on behalf of brands. Engineering teams have been building this for years. Advertising is just now arriving at the same realization.
The timing is telling. Meta’s Advantage+ and Google’s Performance Max already run what are, functionally, agentic systems. They select audiences, set bids, generate creative variations, and allocate budgets across channels. Marketers configure them. The systems decide. And the governance infrastructure around those decisions? Sporadic at best.
The policy layer is the hard part
Anudit Vikram, CPO of Channel Factory, cut through the usual protocol debates with a line worth repeating: “The real technical work is not the protocol layer. It is defining a policy and decisioning layer that determines what the agent is allowed to do.”
This is the same conclusion engineering teams reached around 2024. The model works. The API works. The integration works. What doesn’t work is the layer that says “yes, but not for this customer segment” or “yes, but only if a human reviews the creative first” or “never for these product categories on this platform.” The constraint layer. The judgment layer.
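What that constraint layer amounts to in practice can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; the rule contents, segment names, and category/platform pairs are invented for the example.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    REQUIRE_REVIEW = "require_review"   # "yes, but only if a human reviews first"
    DENY = "deny"

@dataclass
class PlacementRequest:
    """A proposed agent action: place this creative in this context."""
    product_category: str
    audience_segment: str
    platform: str
    creative_id: str

# Illustrative policy, encoded as data so it can be reviewed and versioned
# independently of the agent itself.
BLOCKED_CATEGORY_PLATFORMS = {
    ("alcohol", "conversational_ai"),
    ("gambling", "conversational_ai"),
}
SENSITIVE_SEGMENTS = {"minors", "health_condition"}

def decide(request: PlacementRequest) -> Verdict:
    """The 'yes, but...' layer: checks that run before the agent acts."""
    # "Never for these product categories on this platform."
    if (request.product_category, request.platform) in BLOCKED_CATEGORY_PLATFORMS:
        return Verdict.DENY
    # "Yes, but not for this customer segment" -> route to a human.
    if request.audience_segment in SENSITIVE_SEGMENTS:
        return Verdict.REQUIRE_REVIEW
    return Verdict.ALLOW
```

The point of the sketch is the shape, not the rules: the policy lives outside the model, it is inspectable, and every agent action passes through it before execution.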
Engineering built that layer through painful experience: production incidents, rollbacks, post-mortems, gradually codified into runbooks and escalation protocols. Advertising doesn’t have that muscle memory. As we explored in Marketing Agent Governance: What Klaviyo’s Composer Reveals, even vendors shipping governance controls alongside their agents leave the hard policy questions to the customer. The enforcement mechanism exists. The policies don’t.
Intimacy changes the calculus
Richard Raddon, co-CEO of Zefr, made an observation that deserves more attention than it received: LLM-mediated conversations are “way more intimate” than search.
He is right, and this changes the brand safety equation fundamentally. A search ad appears alongside results. It is contextual but at arm's length. An ad surfaced inside a ChatGPT conversation appears within something that feels like a dialogue. The user is thinking out loud. They are asking about medical symptoms, financial anxieties, relationship problems. The conversational context carries emotional weight that a search results page never did.
OpenAI launched its advertising pilot with Criteo, requiring a $200,000 minimum buy. Early results: a 0.91% click-through rate versus Google’s 6.4% benchmark. The performance numbers will improve. The governance question won’t resolve itself. When your brand appears inside a conversation about a user’s cancer diagnosis or child custody dispute, “brand safety” as traditionally defined (avoiding placement next to objectionable content) is not sufficient. The content is the user’s own vulnerability.
Raddon again: “Responsible AI has to be a marketer’s concern.” Not an engineering concern. Not a compliance concern. A marketer’s concern. Because the marketer understands the brand implications that no technical filter can evaluate.
The measurement vacuum
Nada Bradbury, CEO of AD-ID, described current ROI measurement for AI-mediated ad placement as “sporadic and minimal.” Performance data arrives via weekly CSV spreadsheets.
Let that sink in. Autonomous systems making millions of placement decisions per day, measured weekly through spreadsheets.
In engineering, this would be unacceptable. Production systems without real-time observability are production systems without governance. You cannot govern what you cannot see. If an agent is making placement decisions at machine speed but accountability operates at spreadsheet speed, the delta between action and oversight is where the damage accumulates.
This is the same observability problem that engineering solved with monitoring infrastructure, dashboards, alerting, and audit trails. Advertising needs equivalent tooling. Not adapted engineering tools, but purpose-built measurement systems that capture placement context, audience composition, creative variation performance, and brand exposure risk in something closer to real time.
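The building block of that tooling is unglamorous: one structured event per agent decision, emitted at decision time rather than rolled up into a weekly file. A minimal sketch, with invented field names, of what closing the gap from spreadsheet speed to machine speed might look like:

```python
import json
import time
from collections import Counter

def placement_event(campaign_id: str, platform: str, audience: str,
                    verdict: str, spend_usd: float) -> str:
    """Emit one structured record per agent decision, at decision time --
    the unit of real-time observability, versus a weekly CSV rollup."""
    return json.dumps({
        "ts": time.time(),          # when the agent acted
        "campaign_id": campaign_id,
        "platform": platform,       # placement context
        "audience": audience,       # audience composition
        "verdict": verdict,         # what the policy layer decided
        "spend_usd": spend_usd,
    })

def exposure_by_platform(events: list[str]) -> Counter:
    """A live brand-exposure view: how many placements ran in each context."""
    return Counter(json.loads(e)["platform"] for e in events)
```

Once decisions exist as a stream of events rather than a spreadsheet, dashboards, alerting, and audit trails are aggregation problems, not data-collection problems.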
Regulation is not coming
Raddon offered one more prediction worth taking seriously: “I don’t think there’s going to be any appetite to regulate these companies at all.”
Whether you agree with that assessment politically, the operational implication is clear. Governance must be self-imposed. No external force will compel platforms to build brand-safety-aware agent systems. No regulator will mandate that LLM ad placement include escalation protocols. If governance happens, it happens because brands and their agencies demand it.
This is consistent with what we see across every domain deploying AI agents. In agentic commerce, three competing protocols emerged in three months and none solved the accountability question. In enterprise agent pricing, even Salesforce couldn’t standardize governance as an economic dimension. The pattern repeats: capabilities ship fast, governance follows slowly, and the organizations that wait for external standards pay the price in incidents.
Vikram said it directly: “Governance cannot be layered on after the fact.” This is the engineering lesson that advertising now gets to learn from scratch, or borrow.
What borrowing looks like
The good news is that advertising doesn’t need to invent governance from zero. The underlying questions are identical to those engineering answered:
What can the agent decide alone? Which placement decisions, bid adjustments, and creative selections can run without human review? The answer varies by brand, by campaign, by audience sensitivity. That is the point. Governance is not a universal ruleset. It is a decision framework that encodes specific organizational judgment.
Where must a human intervene? When an AI-mediated placement targets a sensitive demographic, enters a new platform context (like conversational AI), or exceeds a budget threshold, what is the escalation path? Who reviews? How fast?
How do you audit after the fact? When a campaign runs across millions of placements, how do you reconstruct which decisions the agent made, why it made them, and whether those decisions aligned with policy? This requires logging, tracing, and review infrastructure that most advertising technology stacks lack entirely.
How do you update boundaries? Agent capabilities improve continuously. The policies constraining them must evolve too. A quarterly policy review is already outdated by the time it concludes.
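The four questions above share one prerequisite: a record of each decision that names the policy version and rule behind it. A minimal sketch of that audit trail, with illustrative field names and outcomes, assuming decisions arrive from a policy layer like the one described earlier:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """One auditable entry per agent decision: what, why, under which policy."""
    placement_id: str
    action: str              # e.g. "bid", "select_creative"
    policy_version: str      # which ruleset was in force at decision time
    rule_fired: str          # which rule produced the outcome
    outcome: str             # "allow" / "require_review" / "deny"

class AuditLog:
    """Append-only trail that lets you reconstruct agent behavior later."""
    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def append(self, record: DecisionRecord) -> None:
        self._records.append(record)

    def escalations(self) -> list[DecisionRecord]:
        """The human review queue: every decision routed for intervention."""
        return [r for r in self._records if r.outcome == "require_review"]

    def by_policy_version(self, version: str) -> list[DecisionRecord]:
        """Answers 'which decisions ran under the old boundary set?' --
        the question a quarterly policy review cannot answer without this."""
        return [r for r in self._records if r.policy_version == version]
```

Recording the policy version alongside each decision is what connects the audit question to the boundary-update question: when the rules change, you can see exactly which placements ran under which rules.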
Engineering solved these problems imperfectly but functionally. The patterns transfer. The question is whether advertising leadership recognizes the structural similarity or insists on treating AI governance as a novel problem unique to their domain.
The two-year penalty
Engineering started building governance infrastructure around 2024. Advertising is discovering the need in 2026. That two-year lag carries a cost.
Two years of campaigns running through autonomous systems without explicit governance policies. Two years of placement decisions made at machine speed with spreadsheet-speed accountability. Two years of brand exposure in conversational AI contexts that no brand safety framework was designed to evaluate.
The organizations that close this distance fastest will do so by recognizing that AI governance is not a technology project. It is an operational discipline. The technology (policy engines, escalation workflows, audit systems) is a means. The discipline is the end. And discipline, unlike technology, cannot be purchased. It must be built, practiced, and maintained.
Advertising has discovered governance. The question now is whether it treats this discovery as a feature request or a foundation.
This analysis synthesizes Campaign Asia’s AI-Mediated Advertising Roundtable (March 2026), featuring Anudit Vikram (Channel Factory), Richard Raddon (Zefr), and Nada Bradbury (AD-ID), alongside OpenAI’s Advertising Pilot Launch with Criteo (March 2026).
Victorino Group helps enterprises build AI governance that works across engineering, marketing, and every function in between. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →