164 Million Purchases Exposed AI Traffic's Conversion Problem
A team at the University of Hamburg and Frankfurt School tracked 164,875,690 online purchases across 973 ecommerce sites in 49 countries. Total sales volume: $20 billion. Their question was simple. How does AI-referred traffic actually perform compared to organic search, affiliate, and other established channels?
The answer is uncomfortable for anyone building strategy around AI search hype.
AI referrals account for roughly 0.2% of total visits. And the visitors who arrive through ChatGPT or similar tools convert at rates 11.5% lower than organic search and 46.2% lower than affiliate traffic.
That is the headline. Here is what it means.
Small channel, weaker performance
The study, led by Maximilian Kaiser and Christian Schulze, is a pre-peer-review paper from March 2026. It has not passed academic validation yet, which matters. But the dataset is enormous. Nearly 165 million purchases across almost a thousand sites in 49 countries is not a sample you dismiss easily.
The 0.2% traffic share tells you something important about where AI search actually sits today. It is not replacing organic search. It is not threatening affiliate channels. It is a rounding error in the traffic mix for most ecommerce businesses.
The conversion lag tells you something different. People arriving through AI referrals behave differently than people arriving through search or affiliate links. They are browsing, not buying. They are exploring, not deciding. The researchers’ framing is worth quoting: “AI-driven traffic is one of the channels in your mix. Consider it part of your middle funnel.”
Middle funnel. Not top-of-funnel discovery. Not bottom-of-funnel conversion. The messy middle where intent is forming but commitment has not arrived.
Where AI traffic concentrates
The distribution is uneven, and the unevenness is informative.
Complex product categories attract disproportionate AI traffic. Healthcare sites and vehicle sites receive 4.6 times more ChatGPT-referred visitors than average. This makes intuitive sense. Nobody asks ChatGPT where to buy paper towels. People ask ChatGPT to help them compare health insurance plans or evaluate vehicle specifications.
The demographic skew reinforces this. Younger audiences generate 5.5 times higher AI referral rates. Tech-savvy segments generate 3.8 times higher rates. The early adopters of AI search are exactly who you would expect: young, technical, comfortable with new tools, and apparently less likely to convert when they arrive.
That combination (high consideration products, younger demographics, lower conversion) paints a consistent picture. AI search users are researching, not purchasing. They are gathering information to make decisions later, through other channels, with other signals.
The evidence problem in marketing governance
This study matters less for its specific numbers and more for what it reveals about how marketing organizations are making AI search decisions.
Most of the AI search conversation is built on anecdotes, vendor demos, and projection. A CMO sees ChatGPT mention a competitor. A vendor shows a case study where AI-optimized content increased mentions by 40%. A conference speaker declares that AI search will capture 30% of queries by 2027.
None of these are evidence. They are narratives dressed as strategy.
Kaiser and Schulze’s dataset is evidence. Imperfect evidence, pre-peer-review evidence, evidence from a period when AI search was younger than it is today (August 2024 through July 2025), but evidence nonetheless. And the evidence says: this channel is tiny and converts poorly.
That does not mean ignore it. It means calibrate investment to reality.
We have been building toward this calibration across several analyses. In our platform coupling research, we showed that AI citation patterns follow licensing deals, not content quality. In our AEO analysis, we showed that first-wave optimization tactics produce weak and decaying results. In our marketing agent governance piece, we showed that marketing tools shipping autonomous agents need governance infrastructure, not just capability.
This study adds the conversion layer. Even when AI search does send traffic, that traffic performs worse than channels marketing teams have spent decades optimizing.
Three caveats that matter
Intellectual honesty requires flagging the limitations.
First, this is pre-peer-review research. The methodology has not been validated by the academic community. Large dataset does not automatically mean sound methodology. Peer review exists for a reason.
Second, the data covers August 2024 through July 2025. AI search has evolved since then: ChatGPT's browsing capabilities, Google's AI Overviews, and Perplexity's product features have all changed. The conversion patterns may have shifted.
Third, the 0.2% traffic share is so small that conversion rate comparisons may lack statistical power in certain segments. When you are comparing conversion rates on a channel that represents two tenths of one percent of total traffic, small absolute numbers can produce noisy percentages.
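To see how noisy those comparisons can get, here is a minimal sketch using entirely hypothetical numbers: a site with one million monthly visits and the study's 0.2% AI referral share gets only about 2,000 AI-referred visits, so even a Wilson score interval around an observed conversion rate is wide.

```python
import math

def wilson_ci(conversions, visits, z=1.96):
    """95% Wilson score confidence interval for a conversion rate."""
    p = conversions / visits
    denom = 1 + z**2 / visits
    center = (p + z**2 / (2 * visits)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / visits + z**2 / (4 * visits**2))
    return center - margin, center + margin

# Hypothetical site: 1,000,000 monthly visits, 0.2% arriving via AI referral
ai_visits = int(1_000_000 * 0.002)  # 2,000 visits
lo, hi = wilson_ci(conversions=60, visits=ai_visits)  # 3.0% observed rate
print(f"AI-referred conversion rate: 3.0% (95% CI {lo:.1%} to {hi:.1%})")
```

With those assumed numbers the interval spans well over a percentage point, which is larger than the 11.5% relative gap the study reports for many plausible base rates. Segment-level comparisons on a channel this small deserve exactly the skepticism the researchers flag.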
These caveats do not invalidate the findings. They frame them. This is the best empirical evidence available on AI traffic performance at scale. It should inform strategy. It should not be treated as settled science.
What governance demands here
The pattern we keep encountering is the same: marketing teams making significant resource allocation decisions about AI search based on narrative momentum rather than measured performance.
Governance is the antidote. Not governance as bureaucracy. Governance as the discipline of requiring evidence before committing resources.
Three questions every marketing organization should be able to answer with data, not opinions.
What percentage of your traffic comes from AI referrals? If you cannot answer this, you are allocating resources to a channel you cannot measure. Kaiser and Schulze found 0.2% across their dataset. Your number may be higher or lower. You need to know it.
How does AI-referred traffic convert compared to your other channels? Not “how do we think it converts” or “how should it convert in theory.” How does it actually convert, measured against organic search, paid search, affiliate, direct, email, and social? If the answer is “we do not track this,” that is a governance failure.
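Answering the first two questions requires nothing exotic. Here is a minimal sketch against a hypothetical visit log (the schema and channel labels are assumptions, not any particular analytics tool's export format) that computes each channel's traffic share and conversion rate:

```python
from collections import Counter

# Hypothetical visit log: (referral_channel, converted) pairs.
# In practice this would come from your analytics export.
visits = [
    ("organic", True), ("organic", False), ("organic", False),
    ("affiliate", True), ("affiliate", True), ("affiliate", False),
    ("ai_referral", False), ("ai_referral", False),
    ("direct", True), ("direct", False),
]

totals = Counter(channel for channel, _ in visits)          # visits per channel
converted = Counter(channel for channel, conv in visits if conv)  # conversions per channel

for channel in totals:
    share = totals[channel] / len(visits)
    rate = converted[channel] / totals[channel]
    print(f"{channel:12s} share={share:5.1%}  conversion={rate:5.1%}")
```

If your team cannot produce a table like this for AI referrals within a day, that is the governance gap to close before debating investment levels.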
What is the appropriate investment level given measured performance? This is where most organizations fail. They invest based on perceived strategic importance rather than measured returns. AI search may be strategically important for the future. That does not mean it deserves the same investment as channels that are measurably important right now.
The middle-funnel reframe
The researchers’ middle-funnel framing is the most useful takeaway for practitioners.
If AI search traffic behaves like middle funnel, then treat it like middle funnel. That means different content, different measurement, and different expectations.
Middle-funnel content educates rather than converts. It builds familiarity rather than driving immediate action. The success metric is not conversion rate. It is whether AI-referred visitors return through another channel and convert later.
Most marketing attribution models are not built to measure this. They credit the last touch or distribute credit across touches. They are not designed to ask: “Did this visitor first encounter us through an AI referral, leave without converting, and return through organic search two weeks later?”
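That cross-channel question can be answered with a first-touch scan over visitor journeys. A minimal sketch, assuming a hypothetical time-ordered stream of (visitor, channel, converted) touchpoints:

```python
from collections import defaultdict

# Hypothetical touchpoint stream, time-ordered per visitor.
touches = [
    ("u1", "ai_referral", False),
    ("u2", "organic", True),
    ("u1", "organic", True),   # u1 returned via organic and converted
    ("u3", "ai_referral", False),
]

# Group each visitor's touches into an ordered journey.
journeys = defaultdict(list)
for visitor, channel, converted in touches:
    journeys[visitor].append((channel, converted))

# Count visitors whose FIRST touch was an AI referral but who
# converted later through a different channel.
ai_assisted = sum(
    1 for path in journeys.values()
    if path[0][0] == "ai_referral"
    and any(conv and ch != "ai_referral" for ch, conv in path[1:])
)
print(f"AI-assisted conversions: {ai_assisted}")
```

A last-touch model would credit that conversion entirely to organic search; only a journey-level view reveals the AI referral's middle-funnel role.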
Building that measurement capability is a governance investment. It is also the only way to know whether AI search traffic is genuinely underperforming or whether it is performing a different function that your measurement system cannot see.
Evidence over narrative
The AI search conversation has been dominated by two camps. Enthusiasts who see it as the next SEO revolution. Skeptics who see it as hype. Both positions are unfalsifiable because neither camp is working from evidence.
Kaiser and Schulze’s study gives us evidence. The evidence says: AI search is a small, low-converting channel that concentrates in complex product categories among younger, tech-savvy demographics. It behaves like middle funnel. It is worth monitoring and measuring. It is not worth the resource allocation that narrative momentum suggests.
The organizations that will get this right are the ones that measure before they invest, calibrate investment to measured performance, and build the attribution infrastructure to detect whether the picture changes.
That is not exciting. It is not a conference keynote. It is governance. And governance is how you avoid spending real money chasing imaginary returns.
This analysis synthesizes Maximilian Kaiser and Christian Schulze’s AI-Driven Traffic Study (pre-peer-review, March 2026), presented via Science Says (April 2026), alongside Victorino’s prior analyses on platform coupling (April 2026) and AEO governance (April 2026).
Victorino Group helps marketing organizations build evidence-based AI governance, not hype-based AI strategy. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.