The Thinking Wire
- AI Eliminated the Friction That Built Your Best Teams
In 2012, before “AI productivity” was a phrase anyone used, MIT’s Human Dynamics Lab put electronic badges on workers and measured something nobody was paying attention to: how often colleagues interrupted each other.
The teams with the most informal interaction — the most quick questions, the most hallway clarifications, the most “hey, can I bug you for a second?” moments — produced 35% more successful outcomes than the teams with the least.
The teams with the cleanest workflow lost.
Three years later, Google’s Project Aristotle finished its multi-year study of what makes its own teams effective. The single best predictor was not skill, seniority, structure, or even talent density. It was psychological safety — the felt permission to interrupt, ask, push back, expose ignorance.
Both findings predate the AI era by more than a decade. Both describe the connective tissue AI has been most efficient at removing.
The Smashing Magazine analysis by Casey Hudetz and Eric Olive (April 2026) names this directly: AI tools that promise to eliminate friction are eliminating the productive friction that built every high-performing team in the research record. And in 2025, the bill came due.
The 2025 Replication
Researchers from Harvard, Columbia, and Yeshiva published a study examining what happens when AI automation enters team workflows. The findings:
- AI automation “decreased overall team performance.”
- AI automation “decreased team trust.”
- The damage was worst in low- and medium-skilled teams — exactly the populations vendors target with “AI levels the playing field” pitches.
This is not a vibes critique of AI. This is a peer-reviewed replication of what MIT and Google had already established about informal interaction, run inside the new operating model. Same mechanism, same result, opposite direction.
When teammates stop bugging each other, performance and trust both fall. Whether the cause is open-plan office anxiety in 2012 or AI assistants replacing Slack threads in 2025, the curve is the same shape.
What “Frictionless” Actually Removes
The pitch for AI assistants in collaborative work is almost always framed as friction reduction. Don’t bother your colleague — ask the AI. Don’t schedule a meeting — let the agent summarize. Don’t interrupt the senior engineer — let the copilot answer.
Each of these substitutions sounds like efficiency. Each of them removes a load-bearing micro-interaction.
The MIT badges captured what those interactions actually do. A junior asking a senior a quick question is not just information transfer. It is:
- A trust-building micro-event between two humans.
- A signal to the senior about where the junior is stuck (career-development data).
- A signal to the junior about how the senior thinks (modeling data).
- A reinforcement that asking is safe (psychological safety capital).
- A small, unscripted exposure of context that no documentation captures.
When the junior asks the AI instead, the information transfers. None of the other five things happen. The interaction looks identical from a task-completion standpoint and is structurally hollow from a team-formation standpoint.
Multiply by every interaction across a quarter, and you have an organization where tasks get done and trust silently drains.
The Cost Side Nobody Models
Hudetz and Olive cite McKinsey data showing the median S&P 500 company carries $228M to $355M in annual attrition cost. They also cite survey data showing that 34% of workers experiencing AI-related “brain fry” intend to quit.
Translate that. The same tools sold as productivity-positive are operating, in measurable ways, as retention-negative. And retention is one of the most expensive variables on a public-company income statement.
If your AI rollout is improving per-task throughput by 15% and quietly raising voluntary attrition by 5 percentage points, the math is not even close. The throughput gain is reversible the moment the next vendor releases a faster model. The lost institutional knowledge from senior departures is not.
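To make that asymmetry concrete, here is a back-of-envelope model. Only the 15% throughput gain and 5-point attrition increase come from the paragraph above; the headcount, fully loaded cost, productive share, and replacement multiple are assumed for illustration, and the loss side still ignores the unrecoverable institutional knowledge.

```python
# Back-of-envelope: throughput gain vs. attrition cost.
# The 15% / 5pp figures are from the article; everything else is assumed.

HEADCOUNT = 1_000
FULLY_LOADED_COST = 180_000   # assumed avg. fully loaded cost per employee, $/yr
REPLACEMENT_MULTIPLE = 2.0    # assumed: replacing a departure costs ~2x salary

THROUGHPUT_GAIN = 0.15        # per-task throughput improvement from the rollout
PRODUCTIVE_SHARE = 0.5        # assumed: fraction of payroll on automatable tasks
ATTRITION_DELTA = 0.05        # +5 percentage points voluntary attrition

# Value of the gain: 15% more output on the automatable half of payroll.
gain = HEADCOUNT * FULLY_LOADED_COST * PRODUCTIVE_SHARE * THROUGHPUT_GAIN

# Cost of the extra departures: replacement cost only, lost knowledge excluded.
loss = HEADCOUNT * ATTRITION_DELTA * FULLY_LOADED_COST * REPLACEMENT_MULTIPLE

print(f"annual throughput gain:      ${gain:,.0f}")
print(f"annual extra attrition cost: ${loss:,.0f}")
```

Under these assumptions the attrition cost already exceeds the throughput gain before counting lost institutional knowledge, and the replacement multiple is the dominant sensitivity: run the model with your own numbers before crediting a rollout.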
This is the cross-domain governance arc that engineering leaders have been working through for two years now — reverification, accountability, ownership infrastructure — landing on the People Ops desk for the first time. With peer-reviewed data attached.
What Engineering Already Learned
Engineering teams that adopted AI early hit this wall first. The pattern was: ship faster, defects slip in, accountability diffuses, on-call gets worse, senior engineers either burn out or leave. The mature response was not to ban the tools. It was to build governance around the tools — explicit ownership, reverification gates, structured human-in-the-loop checkpoints.
People Ops is now where engineering was in 2024.
The artifacts AI removes from People Ops workflows are softer and harder to instrument: a Slack DM that becomes an AI prompt, a one-on-one that becomes an AI summary, a hallway conversation that never happens because the answer was already retrieved. But the structural problem is identical. Output is preserved. The substrate that produces output over time is being depleted.
The correct response is the same response engineering arrived at: govern the tool, don’t ban it. Decide explicitly what AI is allowed to remove and what must be preserved as human-to-human surface area.
Do This Now
Institutionalize productive friction. Identify the informal interactions that build trust, modeling, and psychological safety in your organization — pairing, office hours, structured peer review, “ask me anything” channels. These are not legacy practices to optimize away. They are the substrate. Protect them with the same seriousness you protect uptime.
Route AI toward toil, not collaboration. AI is excellent at the work nobody wanted to do anyway: status compilation, scheduling, document drafting, log triage. AI is structurally bad at the work that builds teams: a senior unblocking a junior, a peer challenging a peer, a manager reading a room. Be explicit about which side of that line each AI deployment is on. Reverse any deployment that crossed the line by accident.
Measure psychological safety as a leading indicator. Attrition is a lagging signal. By the time it shows up in your dashboard, the senior engineers are already gone. Run psychological safety pulse measurements quarterly. Track informal-interaction frequency where you can (Slack reply latency between specific people, calendar density of unstructured one-on-ones). Flag downward trends as governance issues, not HR issues.
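One of those informal-interaction proxies can be computed directly. The sketch below measures median DM reply latency between two specific people; the message records and field layout are hypothetical, so adapt the parsing to whatever export your chat platform actually provides.

```python
from datetime import datetime
from statistics import median

# Hypothetical message export: (sender, recipient, timestamp).
messages = [
    ("junior", "senior", datetime(2025, 6, 2, 9, 0)),
    ("senior", "junior", datetime(2025, 6, 2, 9, 12)),   # 12-minute reply
    ("junior", "senior", datetime(2025, 6, 3, 14, 0)),
    ("senior", "junior", datetime(2025, 6, 3, 16, 0)),   # 120-minute reply
]

def reply_latencies_minutes(msgs, a, b):
    """Minutes between a's message to b and b's next reply to a."""
    latencies = []
    pending = None  # timestamp of a's oldest unanswered message to b
    for sender, recipient, ts in sorted(msgs, key=lambda m: m[2]):
        if sender == a and recipient == b and pending is None:
            pending = ts
        elif sender == b and recipient == a and pending is not None:
            latencies.append((ts - pending).total_seconds() / 60)
            pending = None
    return latencies

lat = reply_latencies_minutes(messages, "junior", "senior")
print(f"median reply latency: {median(lat):.0f} min")
```

Tracked quarterly per pair or per team, a rising median here is the kind of downward trend in informal interaction worth flagging as a governance issue.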
The teams that produced 35% better outcomes in 2012 were not better because they had better tools. They were better because they bugged each other. The technology that lets us stop bugging each other is also the technology that lets the next generation of senior engineers and operators never form. That is the trade. Govern it deliberately.
Sources
- Casey Hudetz and Eric Olive, “Bug-Free Workforce: AI Disrupting Teams”, Smashing Magazine, April 2026.
- MIT Human Dynamics Lab badge study (2012) and Google Project Aristotle (2015), as cited in Hudetz & Olive.
- Harvard / Columbia / Yeshiva University team-AI study (2025), as cited in Hudetz & Olive.
- McKinsey S&P 500 attrition cost data and “AI brain fry” intent-to-quit survey, as cited in Hudetz & Olive.
Related: AI deletes accountability · AI disempowerment patterns · The AI-native org ROI gap · Governance leaving the engineering silo
Victorino Group helps People Ops and engineering leaders build cross-domain governance for human-AI teams. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →