99.5% AI Adoption at a $32B Company. The Secret Wasn't the Technology.

Thiago Victorino
8 min read

Geoff Charles, CPO at Ramp, posted a thread on X this week claiming 99.5% of the company is active on AI tools. Usage grew 6,300% year over year. Ramp is valued at $32 billion.

Three numbers. One conclusion that matters more than all of them: “This isn’t a tech story. It’s an org design story.”

He is right. And the distinction between what Ramp did and what most enterprises are doing explains why adoption programs keep failing.

The Proficiency Ladder

Ramp built an L0 through L3 proficiency framework:

L0: No meaningful AI usage. The starting line.

L1: Using AI tools for individual productivity. Summarization, drafting, search.

L2: Integrating AI into team workflows. Shared prompts, team-specific tools, process redesign.

L3: Building with AI. Custom tooling, internal products, novel applications that did not exist before.

This is not a competency score attached to performance reviews. It is a progression model that tells people where they are and what comes next. The difference is significant. A competency score measures. A progression model teaches.

Ramp did not stop at defining the levels. They measured publicly, showing teams where they fell on the ladder. Public measurement creates social proof. When your team sees that three other teams are at L2 and you are at L1, the incentive is not punitive. It is competitive. That is a very different motivational mechanism than a manager telling you to log more tool hours.

What Ramp Got Right (and Where the Asterisks Are)

The playbook Charles describes has four components: define levels, measure publicly, remove constraints, let people build.

“Remove constraints” deserves attention. Most enterprise AI programs add constraints. Approved vendor lists. Usage policies. Security reviews for every new tool. These constraints exist for good reasons. But they also create friction that kills experimentation. Ramp chose to reduce that friction and let people try things. For a fintech company handling sensitive financial data, this is a bold choice.

“Let people build” is where L3 happens. Ramp evolved from using Notion’s AI features to building their own internal tool called Claude Cowork. That evolution only occurs when people have permission and runway to move beyond consumption into creation.

Now the asterisks.

“99.5% active on AI tools” does not tell you what “active” means. Logging in once counts the same as building custom tooling daily. The 6,300% growth figure is dramatic, but growth percentages from a low base can be misleading. If five people used AI tools last year and 320 use them now, that is 6,300% growth. Impressive, but the percentage obscures the absolute numbers.
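The low-base caveat is easy to verify. A minimal sketch, using the article's own hypothetical figures (5 users growing to 320):

```python
def yoy_growth_pct(previous: int, current: int) -> float:
    """Year-over-year growth expressed as a percentage of the starting base."""
    return (current - previous) / previous * 100

# The article's hypothetical low-base scenario: 5 users last year, 320 now.
print(yoy_growth_pct(5, 320))  # 6300.0
```

The same 6,300% headline would also describe 500 users growing to 32,000; the percentage alone says nothing about scale.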

Ramp is also a tech-native workforce. Its employees skew young, technical, and comfortable with new tools. Generalizing from Ramp to a 50,000-person insurance company or a manufacturing conglomerate requires caution. The cultural starting conditions are different in ways that matter.

And Charles is the CPO presenting his own program’s results. This is marketing as much as it is reporting. Treat these numbers as “reported by Ramp’s CPO,” not as independently verified data.

The Contrast with Mandates

In The Governance of AI Adoption, we examined how Google, Amazon, and Salesforce enforce AI usage through performance reviews and dashboards. Those are stick-based approaches. Use the tool or face consequences.

In Meta’s Structural Bet, we documented an even more aggressive approach: rewriting job titles, restructuring teams, and encoding AI-first identity into the org chart itself.

Ramp represents a third path. No mandates. No identity rewrites. Instead: a clear progression model, public transparency, reduced friction, and space to build.

The results suggest this approach works better, at least at Ramp. But the comparison is imperfect. Meta has 78,000 employees across hardware, social media, and VR. Ramp has roughly 1,500 people in fintech. The organizational physics are different. What scales at 1,500 may collapse at 78,000. What works in a tech-native workforce may fail in a workforce with varied technical literacy.

Still, the contrast is instructive. Mandates produce compliance. Proficiency ladders produce capability. Compliance looks good on dashboards. Capability shows up in the product.

Why Org Design Is the Lever

Charles’s claim that adoption is an org design problem, not a technology problem, aligns with what Ably discovered when building their AI-first culture. Ably’s Jamie Newcomb said it plainly: “The biggest gains come from how people think, not tools.”

Three organizations. Three different approaches. The same conclusion.

The technology is available to everyone. Claude, GPT, Copilot, Gemini. Every company can buy the same tools. The difference is organizational: who has permission to experiment, how progress is measured, whether people are punished for slow adoption or rewarded for thoughtful adoption, and whether the org structure creates space for people to move from consumption to creation.

Ramp’s L0-L3 ladder is useful because it reframes AI adoption from a binary (using it or not) to a spectrum (how deeply are you integrating it). That reframing changes behavior. When the only question is “are you using AI,” the answer is always yes, because opening ChatGPT once a week counts. When the question is “are you at L1 or L2,” people start thinking about what L2 actually requires.

What Is Missing

Ramp’s playbook is compelling but incomplete. Three things are absent from Charles’s account.

Governance infrastructure. “Remove constraints” works when your team is 1,500 people you trust. It is reckless at enterprise scale without corresponding governance: who reviews what the AI produces, how errors are caught, what happens when an L3 builder creates something with unintended consequences. The freedom to build needs guardrails. Not guardrails that prevent building, but guardrails that catch failures before they reach production.

Quality metrics. Charles reports adoption and usage growth. He does not report outcomes. Did Ramp ship faster? Did error rates change? Did customer satisfaction move? Usage is an input metric. Without output metrics, you cannot distinguish productive adoption from busy adoption.

Durability. One year of 6,300% growth tells you about momentum. It tells you nothing about sustainability. What happens when the novelty fades? When the easy automation wins are captured and the remaining work is harder? The organizations that sustain AI adoption are the ones that build it into process, not just culture.

The Useful Signal

Strip away the marketing. Ignore the percentages. The useful signal from Ramp is structural.

Define what good looks like at each stage. Measure publicly. Reduce friction. Give people room to build. These four moves describe an org design intervention, not a technology deployment.

Most companies start with the technology. They buy licenses, roll out tools, and then wonder why adoption stalls. Ramp started with the organization: how people progress, how progress is visible, what barriers exist, and what building looks like. The tools came second.

That sequencing explains the adoption numbers more than any specific technology choice. And it is the part that other organizations can actually replicate, regardless of their size or technical sophistication.


This analysis is based on Geoff Charles’s X thread (April 2026), contextualized against the mandates framework in The Governance of AI Adoption (February 2026) and Meta’s structural approach documented in When AI Mandates Become Org Charts (March 2026).

Victorino Group helps organizations design AI adoption programs that produce capability, not compliance. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.
