You Can Adopt AI Individually and Still Not Learn as a Company.
In one week, three writers from three different vantage points described the same failure inside companies that look like they are winning at AI.
Robert Glaser, writing on his personal blog, framed it as a learning problem. Wildfire Labs framed it as a leadership problem. The TLDR Founders newsletter framed it as a sequencing problem. The diagnosis underneath is the same. Individuals are getting faster, smarter, and more capable with AI. The companies they work for are not.
The dashboards say otherwise. Token usage is climbing. Seat licenses are expanding. A surprising number of leaders, when asked about AI maturity, point at adoption rates that would have looked impossible eighteen months ago. And then somewhere quieter in the same conversation, those same leaders admit the second-order economics are not moving. Cycle times are not shorter. Decision quality has not improved. Customer-facing work is roughly where it was last year.
That is the gap the three essays converge on, from different angles.
Glaser: Telemetry of the Wrong Thing
Robert Glaser’s piece, “When Everyone Has AI and the Company Still Learns Nothing,” makes a precise argument. Most of the AI dashboards in use today measure consumption. Tokens spent. Prompts run. Active users. None of these tell you whether the organization got smarter.
His proposal is to swap consumption telemetry for decision-quality metrics. Did the team make a faster call? A better-supported call? A reversible one when it should have been reversible? Did individual AI use produce a writeup, a runbook, a checklist, or a calibration that anyone else can pick up next week?
This is not pedantry. The shape of what you measure determines the shape of what you build. A company that measures token usage builds heavier individual capability. A company that measures decision quality builds shared capability, because decisions are organizational artifacts even when individuals make them.
The reason most companies have not made this shift is the same reason they bought the seats in the first place. Seat counts are easy to procure, easy to demo to a board, and easy to declare a win on. Decision quality requires you to be honest about the decisions you have been making and the ones you have been avoiding.
TLDR Founders: Two Paths That Look Identical for a Long Time
The TLDR Founders piece “The Long Becoming” reaches for an analogy most operators recognize: cloud-native versus cloud-enabled.
For roughly a decade, the two looked indistinguishable from the outside. Both companies had AWS bills. Both said “we run in the cloud.” The difference was structural and showed up only when scale or pressure arrived. Cloud-native companies had reorganized their engineering, deployment, observability, and cost discipline around the substrate. Cloud-enabled companies had moved their existing patterns onto a rented box.
The same divergence is starting now with AI, the piece argues, and the path is sequential. Adoption is the first bottleneck. Without it, nothing else activates. But once adoption stops being the limit, the next bottleneck is not “more adoption.” It is whether what individuals discovered last week becomes a workflow this week. After that, whether that workflow becomes a capability the company can hire against, audit, and improve.
AI-native and AI-enabled organizations look identical for a long time. Both have ChatGPT licenses, Cursor seats, Copilot rollouts, internal Slack bots. The divergence shows up later, and by the time it does, the gap is structural and expensive to close.
Wildfire Labs: This Is a Leadership Problem
Wildfire Labs is the bluntest of the three. The post is titled “Your Team Isn’t Using AI. Here’s Why That’s Your Fault.”
Their argument compresses to one sentence: no amount of strategy can fix a lack of experience. Teams that are not using AI are not failing because they lack a roadmap. They are failing because they have not lived inside the tools long enough to know what the tools can and cannot do. Strategy without exposure is theatre.
The recommendation is concrete and uncomfortable. Give teams projects with tight deadlines that force AI into the path. Not optional, not exploratory, not “feel free to experiment.” A real deliverable, a real clock, and the explicit expectation that AI is part of how the work gets done. Then debrief publicly on what worked, what did not, and what the team now believes that it did not believe last month.
This is the leadership posture that is missing in most AI rollouts. Companies bought the tools, sent the announcement, and waited for the org to figure it out. The org did not figure it out, because no one was on the hook for figuring it out.
Where the Three Stories Meet
Glaser, TLDR Founders, and Wildfire Labs are describing the same machine from three sides.
Glaser says the measurement is wrong: you cannot manage what you do not see, and most leaders are watching the wrong gauge. TLDR Founders says the sequence is wrong: adoption is a milestone, not a destination, and the next bottleneck has a different shape. Wildfire Labs says the accountability is wrong: experience does not arrive on its own; leaders have to put their teams in situations where AI exposure is forced rather than optional.
Three independent voices, no coordination, same week. When that happens, the surface has shifted under the conversation and most operators have not caught up yet.
The Capture Metric
Here is the diagnostic question we now ask CEOs in our review work. It is short and uncomfortable, which is why it works.
What percentage of the informal AI discoveries from the last 30 days are now codified as workflows, playbooks, or shared prompts that any new hire could pick up?
If the answer is high (say, 30% or more), individual experience is becoming organizational memory. The company is compounding. Adoption has translated into capability. The dashboards probably look reasonable too, but they are no longer the lead indicator.
If the answer is near zero (and for most companies it is, even ones with very high seat utilization), then individuals are getting better and the company is learning nothing. Every person who leaves takes their AI fluency with them. Every new person starts from scratch. The company is paying for adoption and getting personal productivity, which is not the same product.
We call this the capture metric. It sits next to adoption, not in place of it. Adoption tells you whether people are using the tools. Capture tells you whether the organization is learning from that use.
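If you want to put a number on it, the calculation is simple. Here is a minimal sketch, assuming a hypothetical log in which each informal discovery gets an entry and a flag that flips once it is codified; the names and structure are illustrative, not taken from any of the three essays.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Discovery:
    """One informal AI discovery logged by an individual."""
    found_on: date
    codified: bool  # True once it exists as a shared workflow, playbook, or prompt


def capture_rate(discoveries: list[Discovery], window_days: int = 30) -> float:
    """Share of discoveries in the trailing window that became shared artifacts."""
    cutoff = date.today() - timedelta(days=window_days)
    recent = [d for d in discoveries if d.found_on >= cutoff]
    if not recent:
        return 0.0
    return sum(d.codified for d in recent) / len(recent)
```

A rate around 0.3 or higher corresponds to the healthy answer above. Near zero means fluency is accumulating in individuals and leaking out of the building when they leave.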
A few patterns we see in companies that score well on capture:
A weekly or biweekly ritual where someone presents an AI workflow they invented and the team decides whether to adopt, adapt, or archive it. Time-boxed. Named owner. Output is a documented artifact, not a vibe.
A defined home for AI-derived workflows. A folder, a Notion space, a prompt library, a skills registry. Something that survives turnover. Something a new hire is pointed at on day one. (A sketch of what one entry might look like follows this list.)
A leader who has personally lived inside the tools recently enough to know what good looks like. This is the Wildfire Labs point. You cannot lead the capture ritual if you have not done the discovery yourself.
A measurement cadence that includes decision-quality artifacts, not only consumption metrics. This is the Glaser point. If your monthly review includes “tokens spent” but not “decisions improved,” you are looking at the wrong number.
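To make the "defined home" concrete, here is a sketch of what one registry entry might carry. The schema is hypothetical; the essays prescribe the discipline, not the fields. The fields mirror the documentation exercise in the next section: the prompt, the inputs, the failure modes, and the calibration.

```python
from dataclasses import dataclass, field


@dataclass
class WorkflowEntry:
    """One AI-derived workflow in a shared registry that survives turnover."""
    name: str
    owner: str                # the named owner from the capture ritual
    prompt: str               # the actual prompt text, not a summary of it
    inputs: list[str]         # what the workflow needs before it can run
    failure_modes: list[str] = field(default_factory=list)
    calibration: str = ""     # when to trust the output, when to double-check it
    status: str = "proposed"  # then adopted, adapted, or archived, per the ritual
```

A new hire pointed at a folder of these on day one inherits fluency that would otherwise have walked out the door with its inventor.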
Do This Now
If you run a company or a function and you suspect you are in the quiet middle described above, with high adoption and flat second-order economics, do one thing this week before you do anything else.
Ask three of your strongest individual AI users to spend ninety minutes documenting the three workflows they currently run that no one else in the company knows how to run. Have them write the prompt, the inputs, the failure modes, and the calibration. Then put one calendar slot on the books for next week to decide which of those workflows becomes a shared playbook, which gets adopted by another team this month, and which gets retired because it does not generalize.
That single exercise, repeated monthly, is the entire capture loop. It is not a tool purchase. It is not a strategy offsite. It is ninety minutes plus one decision, and it is the difference between a company that is adopting AI and a company that is learning from it.
The three essays this week were not coordinated. They are converging on the same point because the surface is moving and the dashboards have not caught up. The companies that will look durable in eighteen months are the ones that, this month, stopped asking "what is our adoption rate" and started asking "what did we capture."
This analysis synthesizes “When Everyone Has AI and the Company Still Learns Nothing” (Robert Glaser, May 2026), “The Long Becoming” (TLDR Founders, May 2026), and “Your Team Isn’t Using AI. Here’s Why That’s Your Fault” (Wildfire Labs, May 2026).
Victorino Group helps CEOs replace adoption dashboards with capture metrics that turn individual AI use into compounding org capability. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →