The AI Control Problem

80,508 People Told Anthropic What They Want From AI. 14.7% Said Governance.

Thiago Victorino
10 min read

Anthropic just published the largest qualitative study on AI ever conducted. 80,508 participants. 159 countries. 70 languages. The interviews happened in December 2025, and the results landed in March 2026.

The topline number is the one everyone will cite: 67% net positive sentiment toward AI. Anthropic will use that in its marketing. Skeptics will point to selection bias. Both sides will miss the finding that actually matters.

Benefits and harms are not opposing camps. They coexist within the same individuals.

The Study Nobody Else Could Run

Let’s start with what this is and what it is not. Anthropic interviewed its own users about their experience with AI. The sample is self-selecting: these are people who already use Claude, agreed to participate, and cared enough to respond. That 67% positive sentiment figure needs an asterisk the size of Montana.

Regional sample sizes are not disclosed. The “159 countries” headline sounds impressive until you consider that a single response from Liechtenstein technically counts. Saffron Huang and the 20+ researchers who led the project designed a rigorous qualitative framework, but the sample frame limits what you can generalize.

None of that makes the findings useless. It makes them specific. This is the largest dataset we have on how active AI users experience AI. Within that frame, the patterns are revealing.

The Numbers Worth Reading

The study surfaces five categories of desire and four categories of concern. The desires tell the expected story: professional excellence (18.8%) and productivity lead the list. Productivity was also the most realized aspiration, with 32.0% of respondents reporting that AI helped them achieve it.

The concerns are more interesting.

Unreliability and hallucinations top the list at 26.7%. Job displacement follows at 22.3%. Loss of autonomy sits at 21.9%. And governance concerns land fourth, at 14.7%.

That 14.7% is remarkable. Not because it is large. Because it exists at all.

These are not policymakers. They are not compliance officers or governance consultants. They are ordinary users, people asking Claude to help with homework, draft emails, debug code. Nearly one in seven spontaneously identified governance as a concern without being prompted to think about it.

When we wrote about the trust deficit using Stack Overflow’s developer survey data, the pattern was clear: 84% of developers used AI, but only 33% trusted it. That was 49,000 developers in a single profession. Anthropic’s study extends the same pattern across 80,000 people in every profession and every region. The scale changed. The signal did not.

Independent Workers: The Canary Population

The most important subgroup in the study is independent workers. Freelancers, solopreneurs, gig workers. They reported the highest economic benefit from AI: 47% said it helped them economically, compared to 14% of employees at established institutions.

They also reported the highest economic squeeze. The same population experiencing the most benefit is experiencing the most pressure. This is not a contradiction. It is the same mechanism viewed from two sides.

When you are independent, AI is your leverage. It lets one person do the work of three. But it also lets your competitor do the same. The productivity advantage evaporates as adoption spreads, and what was leverage becomes table stakes. You run faster to stay in place.

This maps directly to the workforce reckoning we documented. Block cut 40% of staff. Klarna cut aggressively, then admitted they went too far. The Anthropic data shows the same pressure from below, not from executive decisions but from market dynamics compressing independent workers.

The 22.3% who fear job displacement are not paranoid. One respondent, a technical support worker in the United States, put it plainly: “I got laid off in May because my company wanted to replace me.” That is biography, not speculation.

Lawyers and the Reliability Problem

Among professional subgroups, lawyers stand out. 48% of lawyers in the study reported experiencing unreliability firsthand. Nearly half.

This matters because legal work has the lowest tolerance for error. A hallucinated case citation is malpractice, full stop. When half the lawyers using AI encounter reliability failures, the technology is not ready for unstructured legal work without verification layers.

The broader unreliability number, 26.7% citing it as a top concern, understates the problem. That is the percentage who named it as a concern. The percentage who experienced it is higher. And 18.9% of all respondents reported unmet expectations from AI, suggesting the distance between what AI promises and what it delivers remains wide.

As we explored in our analysis of AI disempowerment patterns, the risk is not just that AI gets things wrong. It is that AI gets things wrong confidently. One healthcare worker in the study described how “Claude put the historical pieces together, leading to my proper diagnosis.” That is a success story. But it is also a story about someone relying on AI for medical reasoning, and the distance between a correct synthesis and a confident hallucination is invisible to the user.

Five Tensions, One Insight

Anthropic’s researchers organized their findings into five “Light and Shade” tensions. The thread running through all five: benefits and harms are not distributed across different populations. They show up in the same person, often from the same use case.

A software engineer in Mexico said: “I can now leave work on time to pick up my kids from school.” Productivity gain. Work-life improvement. That same productivity gain, scaled across the profession, creates the displacement pressure that 22.3% of respondents fear.

The five tensions:

  1. AI improves individual capability while creating collective displacement pressure.
  2. AI increases autonomy in task execution while reducing autonomy in career trajectory.
  3. AI democratizes access to expertise while devaluing the expertise itself.
  4. AI delivers measurable productivity while eroding unmeasurable skills.
  5. AI satisfies immediate needs while creating long-term dependencies.

This framework is elegant. It is also self-serving. Anthropic benefits from framing AI’s harms as inseparable from its benefits because that framing makes the case against restriction. If you cannot have the light without the shade, regulation looks futile. Convenient conclusion for the company selling the light.

But the framework is not wrong just because it is convenient. The data supports it. The coexistence of benefit and harm within individuals is the most useful finding in the study, because it invalidates the mental model most organizations use.

The Mental Model That Breaks

Most governance frameworks assume a clean separation. There are beneficial uses and harmful uses. You permit the former and restrict the latter. Risk assessments sort use cases into approved and unapproved categories. Policy language draws bright lines.

The Anthropic data suggests those lines do not exist in practice. The same use case that saves a worker two hours also atrophies the skill they used to do that work manually. The same tool that helps an independent consultant compete against a large firm also enables the large firm to eliminate the consultant’s contract.

Regional variation reinforces this. Western European respondents prioritized privacy concerns. East Asian respondents worried about cognitive atrophy. African and South Asian respondents focused on basic reliability. Same tool, different societies, different failure modes. A governance framework designed around Western European privacy concerns will miss the cognitive atrophy risk that East Asian respondents identified, and vice versa.

This is why the 14.7% governance concern number matters. Those respondents are not asking for more features or better models. They are asking for structures, rules, accountability, oversight. They are asking because they can see, from personal experience, that the benefits they enjoy coexist with harms they cannot individually control.

What This Means for Organizations

Three implications survive the selection bias critique.

First, internal AI surveys that ask “is AI helpful?” are measuring the wrong thing. The Anthropic data shows that 81% of respondents said AI advanced their stated vision. That sounds like overwhelming success. But it coexists with 26.7% citing unreliability, 22.3% fearing displacement, and 18.9% reporting unmet expectations. “Helpful” and “harmful” are not opposites. Measure both. Separately.

Second, governance cannot be layered on top of adoption. It must be woven into it. If benefits and harms come from the same use cases, you cannot govern by restricting use cases. You govern by building verification, measurement, and accountability into the use itself. That means quality gates, output review processes, and skills preservation programs running alongside the productivity tools.
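To make “woven into it” concrete, here is a minimal sketch of what a use-level quality gate could look like in code. Everything in it is hypothetical and illustrative; the study describes the principle, not an implementation. The structural point is that verification attaches to the risk of the output, not to an approved list of use cases, and that even low-risk output gets logged so harm can be measured separately from benefit.

    from dataclasses import dataclass

    # Hypothetical sketch of an output quality gate. None of these names
    # come from the study; they illustrate the pattern, not a product.

    @dataclass
    class AIOutput:
        task: str        # e.g. "contract-review"
        text: str        # model-generated content
        risk_tier: str   # "low", "medium", "high" -- set by policy, not by the model

    def route(output: AIOutput) -> str:
        """Attach verification to the output's risk, not to the use case."""
        if output.risk_tier == "high":       # e.g. legal citations, medical claims
            return "human-review-required"   # blocks release until signed off
        if output.risk_tier == "medium":     # e.g. client-facing drafts
            return "sampled-review"          # audited at a fixed rate
        return "log-and-release"             # still logged, so harm stays measurable

    draft = AIOutput(task="contract-review", text="...", risk_tier="high")
    print(route(draft))  # -> human-review-required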

Third, regional and role-specific variation means centralized AI policy will fail. A global company cannot write one AI acceptable use policy and expect it to work across legal (48% unreliability), independent contractors (47% benefit, highest squeeze), and knowledge workers (cognitive atrophy concern). The policy must be granular enough to address specific risk profiles.

The Anthropic Paradox

There is something worth noting about Anthropic publishing this research at all. The company built Claude. It surveyed Claude users. It found 67% positive sentiment. And then it published the full dataset, including the 14.7% governance concerns, the 22.3% displacement fears, and the unflattering quotes about layoffs.

You can read this cynically. Transparency is good PR. Publishing critical findings inoculates against the critique that you suppressed them. The “Light and Shade” framing makes harms look inevitable rather than addressable.

You can also read it straight. Anthropic hired Saffron Huang, who previously led work on deliberative alignment and democratic input, to run this study. The methodology is more rigorous than most industry research. The sample size dwarfs anything a government or academic institution has produced on this topic.

Both readings can be true. That is, after all, the study’s own thesis. Light and shade coexist.

What matters for organizations is not whether Anthropic’s motives are pure. What matters is that 80,508 data points confirm what smaller studies have been showing for two years: AI adoption without governance infrastructure creates compounding risk, and the people experiencing that risk can see it clearly, even when their organizations cannot.

The 14.7% who said governance was their concern did not need a consultant to tell them. They figured it out from lived experience. The question is whether your organization will figure it out the same way, or build the infrastructure before the lesson gets expensive.


This analysis draws on Anthropic’s Collective Intelligence study (March 2026), led by Saffron Huang with 80,508 participants across 159 countries. It builds on data from Stack Overflow’s 2025 Developer Survey, Oxford Economics’ analysis of AI-attributed layoffs (January 2026), and Anthropic’s disempowerment patterns research (January 2026).

Victorino Group helps organizations build AI governance that accounts for what this data shows: benefits and harms arrive together. Let’s talk.
