HR Has an AI Governance Surface Now. McKinsey Just Gave It Numbers.

Thiago Victorino

Do not call what McKinsey published this month a pivot. The firm has been writing about the people side of AI for years. Rewired (2023) made “unlocking user adoption” a dedicated section. Superagency in the Workplace (January 2025) argued openly that scaling AI is an organizational problem, not a data-science one. Women in the Workplace 2025 shipped the encouragement-gap data every trade outlet is now quoting. The people track has been running alongside the agent-factory track the whole time.

April 2026 is something different. It is a quantification moment.

In the same week, McKinsey’s People & Organizational Performance practice and its Technology practice pushed three separate newsletters. Drew Goldstein argues that manager encouragement is one of the strongest predictors of meaningful AI adoption. Basel Kayyali and Florian Niedermann describe how CIOs are redesigning technology workforces for agentic AI. The authors did not co-write. They did not cite each other. Two practices produced three complementary arguments in parallel. That parallelism is the signal.

The signal is that the people side of AI finally has numbers attached to it. Once you have numbers, you have something auditable.

What Actually Changed This Month

We audited McKinsey’s agent-factory prescription when it landed, and the 12-theme transformation manifesto when it followed. Both were tech-side prescriptions with a culture chapter tacked on. The culture chapter was always vague. “Activate leadership.” “Build readiness.” Vague is unauditable. Vague is how a topic stays outside the governance conversation.

This month the vague parts got quantified. McKinsey asserts (asserts, not demonstrates through a published regression) that manager encouragement is one of the strongest measurable predictors of whether employees actually use AI at work. Women in the Workplace 2025 provides the number that carries the claim: 21% of entry-level women reported being encouraged by their managers to use AI, compared with 33% of entry-level men. Roughly four in five entry-level women and two in three entry-level men work under managers who never told them to try. The same report shows that when employees are encouraged, they are more than 50% more likely to use AI. BCG and MIT Sloan’s 9th AI and Business Strategy study, independent of McKinsey, reports a 3.4x lift in regular use on teams where the manager leads by example.
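Those figures are computable from ordinary pulse-survey data. A minimal sketch of the two statistics involved, an encouragement gap between groups and an adoption lift among the encouraged; the record schema, group labels, and counts below are toy data shaped to echo the article's figures, not McKinsey's instrument or survey:

```python
# Illustrative people-analytics rollup. Schema and counts are toy data,
# not McKinsey's survey.

def rate(rows, key):
    """Fraction of rows where rows[key] is truthy."""
    return sum(r[key] for r in rows) / len(rows)

def encouragement_gap(rows, group_a, group_b):
    """Difference in manager-encouragement rates between two groups."""
    a = [r for r in rows if r["group"] == group_a]
    b = [r for r in rows if r["group"] == group_b]
    return rate(a, "encouraged") - rate(b, "encouraged")

def adoption_lift(rows):
    """AI-use rate among the encouraged relative to the not-encouraged."""
    enc = [r for r in rows if r["encouraged"]]
    rest = [r for r in rows if not r["encouraged"]]
    return rate(enc, "uses_ai") / rate(rest, "uses_ai")

def make(group, encouraged, uses_ai, n):
    return [{"group": group, "encouraged": encouraged, "uses_ai": uses_ai}] * n

# 21 of 100 entry-level women encouraged, 33 of 100 entry-level men,
# mirroring the article's percentages; AI-use splits are invented.
rows = (
    make("entry_women", True, True, 15) + make("entry_women", True, False, 6)
    + make("entry_women", False, True, 10) + make("entry_women", False, False, 69)
    + make("entry_men", True, True, 24) + make("entry_men", True, False, 9)
    + make("entry_men", False, True, 8) + make("entry_men", False, False, 59)
)

print(encouragement_gap(rows, "entry_men", "entry_women"))  # ~0.12
print(adoption_lift(rows))  # > 1: encouraged employees use AI at a higher rate
```

The point of the sketch is that both statistics are one filter and one division away from raw survey rows, which is exactly what makes them auditable.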

Three separate data collections. Different populations. Different instruments. Same shape of result.

That is the quantification, not the discovery. Organizational behavior has known for decades that encouragement predicts adoption. What is new is that a partner-level McKinsey piece attached a specific predictor to a specific measurable gap in boardroom-legible format.

The Governance Claim, Narrowed

It is tempting to say “culture is governance now.” That overclaims. The honest version has a boundary. Culture has a measurable portion and an unmeasurable portion.

The measurable portion is policy, incentive design, training records, usage metrics, manager OKRs, and encouragement signals. All six produce timestamps, counts, and completion rates. All six can be instrumented inside an HRIS, a learning management system, a seat-activation report, or a pulse survey. Once you instrument something, you can audit it. Once you audit it, it belongs in the governance conversation.

The unmeasurable portion is trust, fear, psychological safety, and identity threat. No dashboard captures whether an employee privately believes AI will replace them. No OKR measures whether a manager’s encouragement is genuine or performed. These remain culture work, and no HR instrumentation will turn them into governance.

Our claim is narrow: the measurable portion of the people side of AI adoption is governance-shaped, most enterprises are not yet measuring it, and HR now owns the surface where those measurements live. That is what changed. Not the existence of the problem. The legibility of it.

Six Surfaces HR Can Instrument Today

| Surface | What to instrument | Cadence | Owner |
| --- | --- | --- | --- |
| AI-use policy | Approved tools, data-class restrictions, sanctioned use cases | Quarterly | CISO + Legal |
| Incentive alignment | Share of OKRs referencing AI use; bonuses tied to AI outcomes | Annual + mid-year | HR + Finance |
| Training records | Completion by role, competency scores, recency of last training | Monthly | HR + L&D |
| Usage metrics | Seat activation, active-user rate, queries per role, team adoption curves | Weekly | IT + Analytics |
| Manager OKRs | “My team’s AI use” as a scorecard item; attested encouragement frequency | Quarterly | HR + managers |
| Encouragement signals | Pulse surveys and 360 feedback with AI-encouragement items | Quarterly | People Analytics |

Each row is a claim, an instrument, a cadence, and an owner. That is what makes something governance-shaped rather than culture-shaped. None of this is exotic. HRIS vendors already collect most of it. What most enterprises lack is the integration layer that treats these signals as a single surface rather than six disconnected reports.
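One way to picture that integration layer is a single registry keyed by surface, with each signal's last refresh checked against its cadence. A minimal sketch, assuming hypothetical field names and dates; nothing here is a real HRIS API:

```python
# Hypothetical "single surface" registry: one record per governance
# surface, flagged stale when its last refresh exceeds its cadence.
from datetime import date

CADENCE_DAYS = {"weekly": 7, "monthly": 31, "quarterly": 92, "annual": 366}

surfaces = {
    "ai_use_policy":         {"cadence": "quarterly", "last_refresh": date(2026, 1, 10)},
    "incentive_alignment":   {"cadence": "annual",    "last_refresh": date(2025, 6, 1)},
    "training_records":      {"cadence": "monthly",   "last_refresh": date(2026, 4, 2)},
    "usage_metrics":         {"cadence": "weekly",    "last_refresh": date(2026, 4, 20)},
    "manager_okrs":          {"cadence": "quarterly", "last_refresh": date(2026, 2, 15)},
    "encouragement_signals": {"cadence": "quarterly", "last_refresh": date(2025, 11, 3)},
}

def stale_surfaces(surfaces, today):
    """Surfaces whose last refresh is older than their cadence allows."""
    return sorted(
        name for name, s in surfaces.items()
        if (today - s["last_refresh"]).days > CADENCE_DAYS[s["cadence"]]
    )

print(stale_surfaces(surfaces, date(2026, 4, 25)))
# ['ai_use_policy', 'encouragement_signals']
```

A staleness report like this is the smallest possible version of the missing integration layer: six signals, one timestamp each, one audit question.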

The Ramp Contrast, Kept Honest

When Ramp disclosed 99.5% AI adoption at a $32B company, the interesting part was not the number. It was the L0–L3 proficiency ladder CPO Geoff Charles attached to it. Ramp is already running the instrumented people-side governance surface whose absence McKinsey now quantifies: public proficiency placements, recognition events tied to experimentation, a redeployment posture that reads AI as opportunity rather than threat.

The honest caveat is that Ramp is one tech-native company of roughly 1,500 people, and the 99.5% figure is self-reported. It is proof of possibility, not a scaling law. Women in the Workplace 2025 covers 124 organizations and roughly three million employees. Ramp describes where a small number of unusually aligned companies already operate. Both are useful if we do not inflate the comparison.

What the Tech Piece Adds

Kayyali and Niedermann’s companion piece on redesigning the end-to-end technology workforce carries the redeployment language that matters most here. Their anonymized example describes a technology organization where efficiency gains from AI were channeled into delivery-team expansion rather than headcount reduction. That is not a culture decision. It is a capital-allocation decision made at the board level, visible to the workforce, and measurable as a ratio: for every role where AI reduced demand, what percentage was redeployed versus cut?

That ratio is the most durable cultural signal an executive team can send about AI. It is also purely observable. You can read it in the workforce plan. The six-surface table is executable by HR. The redeployment ratio requires the board.
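Reading that ratio out of a workforce plan is a one-function exercise. A sketch under assumed row fields (the plan records and role names are invented for illustration):

```python
# Hypothetical workforce-plan rows; the ratio is the article's:
# of headcount in roles where AI reduced demand, what share was
# redeployed rather than cut?

def redeployment_ratio(plan):
    affected = [r for r in plan if r["ai_reduced_demand"]]
    total = sum(r["headcount"] for r in affected)
    if total == 0:
        return None  # no AI-affected roles yet; ratio undefined
    redeployed = sum(
        r["headcount"] for r in affected if r["outcome"] == "redeployed"
    )
    return redeployed / total

plan = [
    {"role": "QA analyst",     "ai_reduced_demand": True,  "outcome": "redeployed", "headcount": 40},
    {"role": "Tier-1 support", "ai_reduced_demand": True,  "outcome": "cut",        "headcount": 10},
    {"role": "Delivery lead",  "ai_reduced_demand": False, "outcome": "unchanged",  "headcount": 25},
]

print(redeployment_ratio(plan))  # 0.8
```

The computation is trivial on purpose: the hard part is the board-level decision that produces the rows, not the arithmetic.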

Disclosures the Argument Requires

Three of them, because honesty is cheap and skepticism is expensive.

One. On the same day Goldstein’s newsletter landed (April 22, 2026), McKinsey and Google Cloud announced the McKinsey-Google Transformation Group, a joint venture selling enterprise AI transformation services. A narrative arguing that the missing layer is people, and that only a firm with deep organizational-behavior expertise can diagnose it, is commercially convenient for McKinsey’s highest-margin practice. The finding can be simultaneously true and revenue-aligned. Both are true here.

Two. Goldstein leads the Organizational Health Index, McKinsey’s people-measurement product. His argument that encouragement is a measurable predictor directly supports the product he sells. The finding can still be true; readers should know the incentive.

Three. The “two-thirds not encouraged” number circulating in trade coverage is a paraphrase. The precise verified number is 21% vs. 33% at entry level. A blanket “two-thirds of the workforce” claim is not supported.

We spent several paragraphs arguing that the 81,000-person governance gap at MGM and similar enterprises is a people-system problem at its core. McKinsey’s commercial position does not change that argument. It just means we read their framing with the skepticism we would bring to any vendor whose findings align with its product catalog.

What the Engineering-Governance Narrative Was Missing

The engineering-side governance conversation is mature. Code review, deployment gates, audit trails, policy-as-code, secrets management, model-risk management. A Fortune 500 CIO can describe their engineering-governance surface in an hour.

Ask the same CIO to describe their people-side AI-governance surface and you get a change-management program, a training budget, and a slide that says “cultural transformation.” That is not a surface. That is a slide.

What McKinsey’s April week provides, for the first time in boardroom-legible format, is the vocabulary to describe the people-side surface as surface. Manager encouragement cadence. Proficiency-ladder progression. Redeployment ratio. Training completion by role. Usage distribution across teams. Incentive alignment in OKRs. These are nouns executives can ask reports about. They are not cultural states. They are instrumented signals.

That is the piece the engineering-governance narrative has been missing. Not because people-side work did not exist, but because it did not have instruments a CFO would fund. It does now.


This analysis draws from McKinsey’s “Are your people ready for AI at scale?” by Drew Goldstein (March 2026), “Designing an end-to-end technology workforce for the AI-first era” by Basel Kayyali and Florian Niedermann (April 2026), the McKinsey State of AI 2025 (November 2025), Women in the Workplace 2025 (McKinsey and LeanIn.Org) (December 2025), and the McKinsey and Google Cloud Transformation Group announcement (April 2026).

Victorino Group helps teams turn manager behavior, training records, and encouragement signals into an auditable people-side governance surface. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →
