AEO Is Already Commoditized. The Durable Play Is Governing What AI Trusts.

Thiago Victorino
8 min read

Omniscient Digital has run dozens of experiments testing commonly recommended AEO tactics. Headers as questions. Schema markup. Content chunking for AI extraction. The result: no significant effect on AI search visibility or traditional SEO rankings.

The tactics that showed promise did so briefly, with weak effect size, and often reversed within weeks.

This finding, published by Alex Birkett in his April 2026 essay “Where the Puck Is Going in AI Search”, should not surprise anyone paying attention to how platforms evolve. But it should alarm anyone whose AI visibility strategy starts and ends with content formatting.

The Red Queen problem, applied

Every competitor can run the same first-wave AEO playbook. Structured content for AI extraction. Self-promotional listicles. Automated brand mention outreach. Question-and-answer formatting so models can pull clean snippets.

As Omniscient’s Director of Organic Growth Strategy Ben put it: “When everyone gains access to the same efficiency, the efficiency itself stops being useful as a differentiator. You’re running faster just to stay in place.”

This is the Red Queen race from evolutionary biology. The organisms that survive are not the ones that run fastest on the same track. They are the ones that change the game entirely.

Birkett’s article identifies four areas where the game changes. Each shares a common trait: they are expensive to produce and difficult to counterfeit.

Zahavi’s handicap principle as an AI trust framework

The biologist Amotz Zahavi proposed the handicap principle in 1975. The peacock’s tail persists because it is metabolically costly. Only a genuinely strong bird can afford the waste. The cost is the proof.

Birkett applies this logic to AI search: when every brand can format content for extraction, formatting stops being a signal. The signals that remain are the ones that require genuine investment. Four stand out.

Verified review ecosystems. G2 is acquiring Capterra, Software Advice, and GetApp, consolidating 6 million verified reviews and 200 million annual software buyers onto one platform. You cannot prompt your way into 500 five-star G2 reviews. The cost of entry is having a product people genuinely like and a customer base willing to say so publicly.

Creator and influencer networks. PartnerStack now integrates with AI visibility tools like Evertune to identify creators whose content already appears in AI citation sources and broker partnerships between those creators and brands. If AI systems already cite a creator’s content, and that creator recommends your product, you inherit their citation authority. Birkett notes this channel is currently underpriced because the market has not calibrated the value of creator placements for AI search the way it calibrated affiliate links for web search.

Authentic community participation. Not astroturfing on Reddit. Real humans participating in communities where buyers congregate, over months and years, within the community’s own norms. Reddit threads dominate AI citations. The signal is sustained, genuine engagement. A company with thousands of negative Reddit threads has a product problem, not an AEO problem.

Original research and proprietary data. You cannot ethically prompt a dataset into existence. Original research with real methodology and novel findings earns links, mentions, and off-page authority that compounds. The secondary effects (social content, sales decks, webinar material) multiply the initial investment.

Each of these signals is hard to fake precisely because it is expensive to produce. That is the point.

The governance layer hiding in plain sight

Here is where Birkett’s analysis connects to a pattern we have been documenting across multiple pieces.

We showed in our platform coupling analysis that 45.2 million citations reveal AI search is shaped by licensing deals, not content quality. Grok cites X at 99.7%. ChatGPT favors Reddit because of a $70M/year deal. The information substrate follows the money.

Birkett’s hard-to-fake signals operate on a different axis. Platform coupling determines which sources AI models can access. Hard-to-fake signals determine which sources AI models choose to trust among those they can access.

These are complementary governance problems. An organization needs to solve both: be present on platforms AI models can reach (the distribution problem we covered in the platform coupling essay), and build the trust signals that earn citation preference once you are reachable (the credibility problem Birkett maps).

We explored a related angle in making products agent-ready. Agent discovery is a governance decision because what you expose to AI systems shapes what they represent about you. Birkett’s framework extends this: what the broader ecosystem says about you (reviews, creator endorsements, community sentiment, cited research) shapes AI representation even more than what you say about yourself.

As Birkett observes: “A significant portion of influence in AI answers comes from off-page sources, even for branded queries about a specific product.”

Your own content is necessary. It is not sufficient.

Credibility hierarchies are governance infrastructure

Birkett frames the trajectory explicitly: “These platforms will build credibility hierarchies that favor hard-to-fake signals by design.”

This is a governance statement, whether or not he intends it as one. AI search platforms are constructing systems that filter information based on trust signals rather than format compliance. They are building, in effect, credibility governance layers.

Google ran this same arc over twenty years. Link schemes became detectable. Content farms became penalizable. Each round of detection pushed the advantage further toward quality and genuine authority signals.

AI search will follow the same arc, faster. The models improve more rapidly than Google’s algorithm ever did. The window for gaming first-wave tactics is shorter. The convergence toward hard-to-fake signals is steeper.

Birkett’s heuristic cuts through the noise: “If you take away AI search, is it still worth doing? For these plays, the answer is a resounding yes.”

Verified reviews are worth having without AI search. Creator partnerships are worth building without AI search. Community reputation is worth earning without AI search. Original research is worth conducting without AI search. These investments compound regardless of which platform surfaces them.

What this means for governance teams

Organizations that treat AI visibility as a marketing optimization problem will keep chasing formatting tactics that decay within weeks. Organizations that treat it as a governance problem will invest in the trust infrastructure that compounds.

Three practical implications:

Audit your trust signal portfolio. Map where your organization stands on each of Birkett’s four signals. How many verified reviews exist across platforms AI models cite? Which creators in your space have AI citation authority? What is your community sentiment? What original research have you published? This audit reveals your credibility position, not just your content position.

Govern the signals, not just the content. Review processes exist for marketing claims on websites. They rarely exist for the structured data, community presence, and third-party representations that AI models actually weight. Extend governance disciplines to cover the full surface area of how AI forms opinions about your organization.

Invest in compounding assets. A thousand authentic G2 reviews built over three years cannot be matched in three months. A creator network with established AI citation authority cannot be assembled overnight. Community reputation built through years of genuine participation cannot be purchased. These are moats. First movers hold them. Late movers compete for scraps.

Birkett calls this the Matthew effect applied to AI search: those who have authority get cited more, which builds more authority, which gets them cited more. The rich get richer. The window to build these compounding assets is open. It will not stay open.
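The Matthew effect here is the preferential-attachment dynamic from network science: each new citation goes to a source with probability proportional to the citations it already has. A small illustrative simulation (my sketch, not from Birkett's essay) shows how quickly an early lead becomes a durable one:

```python
import random

def simulate_citations(n_sources=5, n_citations=1000, seed=42):
    """Preferential attachment: each new citation picks a source with
    probability proportional to its current citation count."""
    rng = random.Random(seed)
    counts = [1] * n_sources  # every source starts with one citation
    for _ in range(n_citations):
        r = rng.uniform(0, sum(counts))
        cum = 0
        for i, c in enumerate(counts):
            cum += c
            if r <= cum:
                counts[i] += 1  # the cited get cited more
                break
    return sorted(counts, reverse=True)

print(simulate_citations())
```

Starting from five identical sources, the final distribution is sharply unequal: whichever source pulls ahead in the early rounds absorbs a disproportionate share of all later citations. That is why the window matters; the asymmetry compounds.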

The firms that understand this will stop asking “how do we optimize content for AI?” and start asking “how do we govern the trust signals that AI platforms are building their credibility hierarchies around?”

That second question is harder. It is also the only one worth answering.


This analysis synthesizes Alex Birkett’s Where the Puck Is Going in AI Search (April 2026), NoGood’s platform coupling research (March 2026), and the Zahavi handicap principle (1975).

Victorino Group helps organizations govern AI trust signals, from verified reviews to creator networks to community presence. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →
