Marketing's Three 2026 Questions Just Got Their First Real Data
Stanford says the 22-25 cohort lost 16% of its employment after ChatGPT. BCG says marketing managers’ tasks are 90% disrupted. The marketing question is now structural.
For most of 2026, marketing leaders have been asking three questions and getting back hand-waving. Where does the entry-level pipeline come from when AI does the entry-level work? Where do customers find us when they no longer use search the way we built our funnels around? What is the defensible asset in a function where capability is being commoditized monthly?
This week, all three got their first real data.
Workforce: the entry-level pipeline is breaking
Stanford’s Digital Economy Lab released the most concrete employment evidence yet on the AI transition. Workers aged 22-25 lost 16% of their employment relative to older cohorts after the ChatGPT moment, as reported by MarTech, citing the Stanford research. The US economy added only 181K jobs in 2025, substantially below 2024’s pace.
The marketing-specific overlay is sharper. BCG concluded that a marketing manager’s tasks are “90% disrupted from a skill perspective.” Paul Roetzer’s gloss in the same MarTech piece is the line worth quoting back to your team: “If you were just executing tasks rather than building deep skills, you’re cooked.”
This is not a downturn. Downturns reverse. What is happening to the entry tier of marketing — copywriters who were learning the craft by writing landing pages, junior strategists who were learning segmentation by drafting basic campaigns, content coordinators who were learning quality by editing other juniors — is structural. Those rungs are being removed from the ladder while the ladder is still in use.
Senior marketers will be fine for a few more years. The pipeline that produces senior marketers in 2031 will not be. We argued the role-shaped version of this gap in The Most Valuable Hire You’re Not Making. The data point we did not have until this week is the magnitude. Sixteen percent is not noise.
Visibility: the surface where customers find you is being rebuilt
While the workforce data was landing, Botify documented something quieter and arguably bigger. OpenAI tripled its web crawl since GPT-5 launched in August 2025. OAI-SearchBot grew 3.5x. GPTBot grew 2.9x. The OAI-SearchBot to GPTBot ratio flipped from 0.95 to 1.14 — meaning real-time search now outpaces training-data harvesting as OpenAI’s primary source.
Read that ratio shift twice. It changes what “AI visibility” means as a strategy. A year ago the question was “is my content in the training set.” Today the question is closer to “is my content reachable by the search bot at query time.” Different infrastructure. Different SLAs. Different content cadence.
The vertical numbers are starker. Healthcare crawl is up 741%. Media and publishing up 702%. OpenAI’s total share of crawl volume relative to Google is still small — about 4% — but it grew from 1.38%. It is the slope, not the level, that matters. And one direct-traffic counter-signal: visits from the ChatGPT-User browsing agent dropped 28% since December 2025. Customers are increasingly getting their answer inside the conversation, not by clicking out.
We covered the structural form of this in Engineering Has Cloudflare. Marketing Has Nothing. Botify’s data is the first hard confirmation that the surface is being rebuilt under marketing’s feet, vertical by vertical, on a measurable curve. There is no single AI visibility strategy anymore. Healthcare needs real-time indexing. Publishing needs training-data inclusion plus real-time. B2B SaaS needs both, weighted to the search bot. The brands still optimizing as if it is one channel will lose to the brands that segment by crawler intent.
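Segmenting by crawler intent starts with telling the bots apart in your own server logs. A minimal sketch of that triage, in Python: the bot names (OAI-SearchBot, GPTBot, ChatGPT-User) are the ones OpenAI publishes for its crawlers, but the log format and the ratio math here are illustrative assumptions, not Botify's methodology.

```python
from collections import Counter

# Map OpenAI crawler user-agent substrings to intent. Bot names match
# OpenAI's published crawler list; the categories are our labels.
CRAWLER_INTENT = {
    "OAI-SearchBot": "realtime_search",  # fetches pages to answer live queries
    "GPTBot": "training",                # harvests pages for model training
    "ChatGPT-User": "user_browsing",     # fetches on behalf of a chat session
}

def classify_hits(user_agents):
    """Count crawler hits by intent from a list of user-agent strings."""
    counts = Counter()
    for ua in user_agents:
        for marker, intent in CRAWLER_INTENT.items():
            if marker in ua:
                counts[intent] += 1
                break
    return counts

def search_to_training_ratio(counts):
    """The headline metric: real-time search hits per training hit."""
    training = counts.get("training", 0)
    if training == 0:
        return float("inf")
    return counts.get("realtime_search", 0) / training

# Toy sample log: a ratio above 1.0 means search-time fetching now leads.
sample = (
    ["Mozilla/5.0 ... OAI-SearchBot/1.0"] * 114
    + ["Mozilla/5.0 ... GPTBot/1.1"] * 100
    + ["Mozilla/5.0 ... ChatGPT-User/1.0"] * 30
)
counts = classify_hits(sample)
print(round(search_to_training_ratio(counts), 2))  # 1.14
```

Tracking this ratio per vertical, per month, is what turns "AI visibility" from a slogan into a number you can watch move.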
Knowledge: the only asset still hard to copy
The third data point came from an unexpected place — a B2B sales operations write-up — but it is the most useful for marketing leaders. Stage 2 Capital’s GTM publication walked through how MedScout built an AI account qualifier that actually worked.
The trick was not the model. It was what they did before they wrote a single prompt.
They took their best account executive, hooked him up to the projector, hit record on Fathom, and asked him to walk through account evaluation out loud in real time. They mapped which tabs he opened, in what order. Where he paused. What made him say “this one is interesting” versus “this is a no.” They did this for hours. The output was not a script. It was a documented map of tacit judgment — the actual decision pattern that had lived only in his head.
Then they built the qualifier on top of that map.
The pattern matters because every previous attempt at AI account qualifiers — and every adjacent attempt at AI lead scoring, AI ICP detection, AI content quality scoring, AI brand-voice enforcement — has failed in the same way. Teams build them on firmographic filters and surface signals because those are the inputs that are easy to write down. The actual judgment that separates good from great in those decisions is tacit. It lives in the heads of the senior people who can already do it. If you do not capture it first, your AI inherits the surface filters and misses the judgment.
The governance principle is uncomfortable for most marketing orgs: AI effectiveness depends on first institutionalizing human expertise as documented judgment. Capture before automation. The teams that win the next two years will not be the ones with the best models. They will be the ones who recorded, transcribed, and structured what their best people actually do.
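What "capture before automation" looks like in practice can be sketched in a few lines. MedScout's actual qualifier is not public; the rubric below is a hypothetical stand-in for the documented map of an expert's recorded decision pattern, and the point is the order of operations: the rubric is written down and reviewed by the expert first, and the automation is built on top of it.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str          # what the expert looked at (a tab, a field, a pattern)
    weight: float      # how heavily it figured in "this one is interesting"
    threshold: float   # below this, the expert tended to say "this is a no"

# Hypothetical rubric transcribed from a recorded walkthrough.
RUBRIC = [
    Signal("procedure_volume_fit", weight=0.5, threshold=0.4),
    Signal("recent_hiring_activity", weight=0.3, threshold=0.2),
    Signal("existing_vendor_overlap", weight=0.2, threshold=0.3),
]

def qualify(account_scores):
    """Score an account against the captured rubric.

    Any signal under its threshold is a hard disqualifier, mirroring the
    expert's fast "no"; otherwise return a weighted score for ranking.
    """
    reasons = []
    total = 0.0
    for sig in RUBRIC:
        value = account_scores.get(sig.name, 0.0)
        if value < sig.threshold:
            reasons.append(f"{sig.name} below expert's cutoff")
        total += sig.weight * value
    if reasons:
        return "no", reasons
    return ("interesting" if total >= 0.5 else "maybe"), reasons
```

Because the rubric is plain data, the senior person can audit and correct it before any model sees it, which is the governance step most teams skip.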
The synthesis
Three questions. Three data points. One pattern.
The workforce question is structural because the rungs are being removed faster than the senior tier can absorb the work. The visibility question is structural because the customer-discovery surface is being rebuilt by an LLM crawler, not adjusted by an algorithm change. The knowledge question is structural because tacit judgment is the last asset AI cannot easily replicate, and most organizations have never written it down.
Each of these would have been a meaningful signal on its own. Landing in the same week, they describe a function that needs to do three things at once: rethink how it grows the next generation of talent, rethink how it gets found, and rethink what it actually owns that is hard to copy.
The leaders who treat 2026 as a cyclical year will be writing different headcount and SEO plans next April. The leaders who treat it as structural will be building something different — a workforce model that includes AI from day one, a visibility strategy that segments by crawler, and a deliberate practice of capturing tacit expertise before automating around it.
Same data. Two very different decade-shaping bets.
This analysis synthesizes MarTech’s AI’s Impact on Early-Career Marketers (April 2026, citing Stanford Digital Economy Lab and BCG), Botify’s OpenAI Tripled Web Crawl (April 2026), and Stage 2 Capital GTM’s How to Build an AI Account Qualifier (April 2026).
Victorino Group helps marketing and revenue leaders build the governance and instrumentation that engineering already has. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.