Position One Is the New Page One: Inside Google AI Mode Shopping
Ask a consumer whether they fact-check what an AI tells them, and 93% will say yes. Salsify’s 2026 trust survey is unambiguous on that point. People believe they double-check. They want you to know they double-check.
Now watch them shop.
Kevin Indig’s Growth Memo published the first unmoderated behavioral study of Google AI Mode shopping this week, run with Citation Labs and Clickstream Solutions. Forty-eight participants. 185 screen-recorded tasks. High-stakes categories: TVs, laptops, washer-dryers, car insurance. The study measured what people actually did, not what they said they would do.
Only 5% of AI Mode users triangulated across independent sources.
That gap is the story. And it is not a story about AI. It is a story about what marketing teams now have to govern.
What the behavior shows
Three numbers carry the argument.
74% of final shortlists came directly from the AI’s output, with no items added or removed through external research. The model proposed a candidate set. Shoppers accepted the candidate set.
74% also picked the top-ranked item inside that set. Mean rank of the final selection: 1.35. Position one won, position two occasionally won, everything below that barely existed.
64% of AI Mode tasks ended with zero external clicks. Not a click to a review site. Not a click to a retailer. Nothing. By category the rate ran from 45% for washer-dryers to 73% for car insurance. The more complex the decision, the more people outsourced it.
One participant narrated the whole collapse in a single sentence: “Given that the first paragraph says Lenovo or Apple, going with that.”
The SEO-era heuristic of “the top three blue links own the decision” has compressed into a single generated sentence. Page one became paragraph one became sentence one.
The honest caveats
Forty-eight participants is a small panel, and the traditional-search comparison arm is smaller still: 36 tasks against 149 in AI Mode, a roughly 4:1 imbalance that gives any “23% vs 67% external visits” headline wide error bars. The findings are directional, not population-level.
There is a stronger caveat, and the critics flagged it hard: no external click is not the same as no verification. AI Mode shows summarized specs, prices, and snippets inside the answer surface. A user who stops clicking may still be comparing, just inside the model. Whether in-surface reading replaces the function of external triangulation is an open empirical question. This study cannot answer it.
The findings are also specific to Google AI Mode. They do not automatically transfer to ChatGPT, Perplexity, or Amazon Rufus, each of which frames sources differently. And the product categories studied (consumer electronics, appliances, insurance) carry strong pre-existing brand preferences that confound any claim about AI persuasion.
Treat the study as a behavioral signal. A loud one.
We already knew the damage. Now we know the scene.
Victorino has been tracking this phenomenon from the checkout side. 164 million purchases across 973 sites showed AI referrals converting 11.5% below organic. Walmart’s data on ChatGPT Instant Checkout put the gap at 66%. Both studies measured the wound. Neither explained the weapon.
The Growth Memo study explains the weapon. The decision was made before the click, inside the model, with the verification step skipped. By the time an AI referral hits a merchant site, the comparison has already happened upstream, in a surface the merchant does not control and cannot audit.
That reframes the marketing problem. The question is no longer “why does our AI traffic convert poorly?” It is “were we even in the candidate set?”
Exclusion is the new invisibility
Brand familiarity still matters inside AI Mode. 26% of participants overrode the AI’s ranking based on recognition. But 81% still chose from the AI’s candidate set. Brand recognition filtered within the model’s shortlist. It did not generate entries on that shortlist.
In the laptop category, three brands captured 93% of final choices. That is a winner-take-most dynamic with no SERP to audit, no ad auction to enter, and no second-page retreat. If your product is not surfaced in the generated paragraph, you are not in position four. You are not anywhere.
This is the mechanism that makes brand presence in model outputs a governance discipline rather than a visibility tactic. A SERP can be inspected. A model output varies prompt by prompt, session by session, category by category. You cannot optimize what you do not measure, and you cannot measure what no team owns.
Marketing governance, minus the buzzwords
We have written before that the response to agent-era marketing is not a new channel strategy. It is governance infrastructure. This study sharpens that claim.
Three questions every CMO should be able to answer with evidence:
What is our share of voice inside generated answers across the prompts that matter? Not our ranking in Google. Our ranking in the paragraph that increasingly replaces Google. If nobody owns that metric, nobody is accountable for the shortlist. (A minimal measurement sketch follows this list.)
When we are cited, are the facts right? Price, specs, positioning, availability. Missing or malformed data reads as disqualification, not as absence. One participant in the study rejected a brand because “there’s not even a link there.” The model’s formatting failure became the brand’s credibility failure.
When we are not cited, why not? Exclusion from the candidate set has no appeals process. Diagnosing it (data quality, content structure, entity signals, licensing relationships) is cross-functional work spanning marketing, product, legal, and data.
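To make the first question operational, here is a minimal sketch of a share-of-voice measurement over sampled answers. Everything in it is illustrative, not from the study: the brand list, the canned answers, and the whole-word matching are assumptions, and a real pipeline would gather answers by re-running each shopping prompt across many sessions on whatever AI surface you are auditing.

```python
from collections import Counter
import re

# Illustrative brand list -- a stand-in, not taken from the study.
BRANDS = ["Sony", "Samsung", "LG", "Lenovo", "Apple"]

def brands_in_answer(answer: str, brands: list[str]) -> set[str]:
    """Brands named anywhere in one generated answer (whole-word match)."""
    return {b for b in brands if re.search(rf"\b{re.escape(b)}\b", answer)}

def share_of_voice(answers: list[str], brands: list[str]) -> dict[str, float]:
    """Fraction of sampled answers in which each brand appears at least once."""
    counts = Counter()
    for answer in answers:
        counts.update(brands_in_answer(answer, brands))
    return {brand: counts[brand] / len(answers) for brand in brands}

# In practice these would be sampled by re-running each prompt across
# sessions, since outputs vary run to run. Canned examples here:
answers = [
    "Given the budget, the Sony Bravia or the Samsung QN90 is the safe pick.",
    "For students, Lenovo or Apple are the usual recommendations.",
    "Most reviewers point to the LG WashTower; Samsung is a close second.",
]
print(share_of_voice(answers, BRANDS))
# Samsung appears in 2 of 3 answers (0.67); every other brand in 1 of 3 (0.33).
```

Even this toy version surfaces the governance point: the metric only exists if someone owns the prompt set, the sampling cadence, and the matching rules, because none of them are handed to you the way a SERP is.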
Vendors are starting to build into this gap. Mutiny, with $72M in cumulative funding, has rebuilt its positioning around agentic GTM: an agent that generates customer-facing assets from brand guardrails. Structural Content frames itself around job-ticket content operations for machine-readable outputs. Name them, notice them, do not treat them as solutions. The primitives are new. The governance is not.
The stated-vs-revealed gap is the lesson
Self-reported consumer research is now actively misleading. Ninety-three percent of shoppers will tell a survey they double-check what AI tells them. Five percent actually do, at least in the form that leaves a clickstream. The gap is large enough to break any strategy built on what people say.
Marketing governance, in the agent era, starts with a simple posture: trust the behavior, not the narrative. Measure what the model says about you. Measure what consumers do after it says it. Assume position one is the only position, until the data says otherwise.
Then get to work on being in position one.
This analysis synthesizes Kevin Indig / Growth Memo’s How Consumers Navigate High-Stakes Purchases in AI Mode (April 2026), Salsify’s 2026 AI Trust Gap Research, the IBM-NRF 2026 Consumer Study (January 2026), and Victorino’s prior analyses on AI traffic conversion (April 2026), Walmart’s agentic commerce gap (March 2026), and marketing agent governance (2026).
Victorino Group helps CMOs and brand leaders build governance infrastructure for model-mediated discovery: measuring, managing, and defending brand presence inside generated answers. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →