Just One More Prompt: The Dopamine Trap of AI-Assisted Work
It was 2 a.m. on a Wednesday. I had told myself at 10 p.m. that I was done for the night. Then I thought of one more thing I wanted the model to try. Then one more. Then one more after that. Four hours later I was still at the keyboard, eyes burning, convinced I was on the edge of something. I was not on the edge of anything. I was in a loop.
I know the pattern now because I have lived it for months. And when I mention it to other people who work with AI daily, nobody is surprised. They nod. They describe the same thing. The phrase that keeps coming up: “It’s like I can’t stop.”
This is not a productivity article. We already wrote the organizational analysis. If you want the systems view of how AI intensifies work at the company level, read The AI Intensity Trap. This is the personal version. What is happening inside your head when you cannot close the laptop.
The BCG Data: You Are Not Imagining It
In March 2026, BCG published a study through Harvard Business Review based on a survey of 1,488 full-time US workers. They coined a term for what many of us have been feeling: “AI brain fry,” mental fatigue from excessive AI use or oversight that exceeds cognitive capacity.
The symptoms are specific: a buzzing feeling in the head, mental fog, difficulty concentrating, sluggish decision-making, headaches. 14% of AI-using workers reported experiencing it. The distribution was uneven. Only 6% of legal professionals reported it, versus 26% of marketing professionals. The people using AI most intensively are the ones burning out fastest.
Workers with high AI oversight responsibilities reported 14% more mental effort, 12% more fatigue, and 19% more information overload compared to those with lighter AI interaction. Those numbers are concerning on their own. The downstream effects are worse.
Decision fatigue increased 33%. Minor errors went up 11%. Major errors went up 39%.
Read that again. Not 4% more major errors. Not 10%. Thirty-nine percent. The tool we adopted to reduce mistakes is, past a certain threshold of use, causing significantly more of them.
And the people affected know something is wrong, even if they cannot name it. Among workers experiencing AI brain fry, 34% reported an intent to quit, versus 25% of unaffected workers. That is a turnover risk more than a third higher. Organizations are not just getting worse output from overloaded workers. They are losing them.
A necessary caveat: “AI brain fry” is BCG’s term, not a clinical diagnosis. This is survey data, not clinical measurement. The findings are directional and the sample is large enough to take seriously, but they have not been peer-reviewed in a medical journal. What they describe, though, aligns precisely with what I hear from colleagues, clients, and my own experience.
The Three-Tool Threshold
The most actionable finding in the BCG study is simple enough to act on today. Productivity gains from AI tools follow a curve: they increase through the first, second, and third tool. After the third, productivity declines.
Three tools. That is the threshold.
Beyond three concurrent AI tools, workers reported more fatigue, more errors, and less satisfaction. This is not an argument against AI adoption. It is an argument for conscious limits. The instinct in most organizations is to give people access to every tool available and let them figure it out. The data says that approach fails. More tools do not produce more output. They produce more cognitive load, which produces worse output.
We documented a related pattern in The Verification Tax: executives report saving 4.6 hours per week with AI while workers spend 3.8 hours checking it. The net gain is 16 minutes. Now add the BCG data: that checking itself degrades as cognitive load accumulates. The verification loop gets less reliable the longer it runs. The tax compounds.
Why This Feels Like Gambling
Here is where it gets uncomfortable.
Glenn Sanford, CEO of SUCCESS Enterprises, described his experience with AI to SUCCESS Magazine in terms that sound clinical: 12 to 16 hours daily, average sleep of four hours and 44 minutes, a persistent “brain buzz” he could not shake. He developed atrial fibrillation. A heart condition. From overworking with AI tools.
His description of the pull: “The feedback loop was so rapid, I kept wanting to go back, asking ‘Can I make it do that?’ It was almost impossible to stop. Just one more prompt.”
One person’s experience is not epidemiology. Sanford’s case is extreme and individual. But the mechanism he describes is well-understood in neuroscience, and it does not require medical consequences to be relevant to you.
Dopamine releases on the anticipation of a reward, not on receiving it. This is established neuroscience. Dr. Tommy Wood and Dr. Elana Hoffman, speaking to SUCCESS Magazine, connected this to the specific experience of prompting AI models. When you type a prompt, you do not know exactly what you will get back. The response might be brilliant. It might be mediocre. It might surprise you. That unpredictability is the key.
Variable ratio reinforcement. It is the mechanism behind slot machines. Unpredictable rewards trigger stronger dopamine responses than predictable ones. Each time you pull the lever (type a prompt), there is a chance of a jackpot (a response that delights you). The intermittent reinforcement keeps you pulling.
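The slot-machine comparison can be made concrete with a toy simulation. This is an illustrative sketch of a variable-ratio schedule, not a model of prompting itself; the payout probability and sample size are arbitrary assumptions:

```python
import random

def pulls_until_reward(p_reward, rng):
    """Number of pulls before a payout under a Bernoulli(p) schedule."""
    n = 1
    while rng.random() >= p_reward:
        n += 1
    return n

rng = random.Random(42)
# Variable-ratio schedule: each pull (each prompt) has the same small
# chance of a "jackpot". The average interval between payouts is 1/p,
# but any single interval can be much shorter or much longer. That
# unpredictability, not the average rate, is what keeps the lever
# getting pulled.
intervals = [pulls_until_reward(0.2, rng) for _ in range(1000)]
print(sum(intervals) / len(intervals))  # close to 5.0 on average
print(min(intervals), max(intervals))   # but individual waits vary widely
```

The same long-run reward rate delivered on a fixed schedule (every fifth pull) produces much weaker reinforcement; the variance is the hook.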
I want to be precise about the evidence here. The dopamine mechanism is inferred from its similarity to known variable reinforcement patterns. Nobody has put AI users in a brain scanner to measure dopamine release during prompting sessions. The analogy is strong, the behavioral pattern matches, but direct neuroimaging evidence does not yet exist. What we have is a well-established mechanism (variable ratio reinforcement), a behavior pattern that maps to it (compulsive prompting), and self-reports from many users describing the same subjective experience.
That subjective experience: “just one more prompt.” The tolerance pattern. Needing more interaction for the same satisfaction. The difficulty stopping even when you planned to stop. The goalposts that keep moving.
I recognize every one of those in myself.
The Jevons Paradox: Why AI Will Not Save Your Time
In 1865, the English economist William Stanley Jevons observed something counterintuitive about coal. As steam engines became more fuel-efficient, the total consumption of coal did not decrease. It increased. More efficient engines made coal-powered work cheaper, which made more coal-powered work economically viable, which increased total demand past the savings from efficiency.
The pattern now carries his name: the Jevons paradox. When technology makes a resource more efficient to use, total consumption of that resource tends to increase, not decrease.
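The rebound dynamic can be sketched with a toy constant-elasticity demand model. The function and the elasticity values below are illustrative assumptions, not figures from Jevons or from the reporting that follows:

```python
def total_consumption(efficiency_gain, elasticity):
    """
    Toy model of the Jevons rebound. A 2x efficiency gain halves the
    effective cost per unit of useful work; demand for that work responds
    as cost**(-elasticity). Resource actually consumed is the work
    demanded divided by efficiency. Returns consumption relative to the
    baseline (1.0 = unchanged).
    """
    cost = 1.0 / efficiency_gain     # effective cost per unit of work
    work = cost ** (-elasticity)     # demand response to cheaper work
    return work / efficiency_gain    # resource consumed

# Inelastic demand (elasticity < 1): efficiency reduces total use.
print(total_consumption(2.0, 0.5))   # about 0.71: consumption falls
# Elastic demand (elasticity > 1): the Jevons case -- total use rises.
print(total_consumption(2.0, 1.5))   # about 1.41: consumption rises
```

Whether efficiency saves a resource or consumes more of it hinges entirely on how elastic the demand is, which is the whole argument of this section applied to your time.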
Satya Nadella has referenced this explicitly for AI. Erik Brynjolfsson, one of the foremost economists of technology, frames it directly: “Organizations invent new tasks to absorb surplus capacity.” This is not speculation. It is what the data shows happening.
Fortune reported in March 2026 on the paradox in practice. Mike Manos, CTO of Dun & Bradstreet: “I got the eight hours to two hours, but now I can get 20 hours of work.” Tim Walsh, CEO of KPMG: “That means I can put more volume through my business.” The efficiency gains are real. The time savings are not. Every hour freed by AI gets filled with more AI-enabled work.
AES reduced a 14-day audit to one hour. Google reports 50% of its code is now AI-written. KPMG cut meeting prep time by 75%. Product cycles compressed from 24-36 months to six months. None of these organizations report their people working less.
The math is clear. If AI makes you 5x more productive per hour, and your organization responds by expecting 5x the output, you are not 5x more productive. You are doing 5x the work at the same pace of life. That is not a productivity gain. It is an intensity increase wearing the mask of efficiency.
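The arithmetic is worth making explicit. A minimal sketch, with hypothetical numbers:

```python
def hours_worked(baseline_hours, productivity_multiplier, output_expectation):
    """Hours you actually work after an efficiency gain, given how much
    the expected output scales relative to the old baseline."""
    return baseline_hours * output_expectation / productivity_multiplier

# A 40-hour week, with AI making each hour 5x as productive:
print(hours_worked(40, 5, 1))  # output held constant: 8 hours, time freed
print(hours_worked(40, 5, 5))  # expectations scale with the tool: 40 hours, nothing freed
```

The multiplier only frees time if the output expectation stays fixed, and the Jevons pattern says it will not.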
Yasmeen Ahmad at Google Cloud told Fortune that organizations are “a little bit nervous” about the implications but “keeping quiet.” Keeping quiet because acknowledging the paradox means admitting that the promise of AI (do more with less, free up time, reduce workload) is not playing out the way the pitch decks said it would.
We traced the organizational version of this in The AI Intensity Trap: task expansion, blurred boundaries, increased multitasking. The Jevons paradox is the economic engine underneath all three. As we explored in The Speed Trap, optimizing one part of a system (code writing, task execution, content generation) does not optimize the whole system. It creates bottlenecks elsewhere. In the Jevons case, the bottleneck is you. Your cognitive capacity. Your sleep. Your attention.
The Burnout Numbers
The broader trend lines confirm what the BCG data suggests at the individual level. DHR Global reports that 52% of employees say burnout is reducing their engagement in 2026. That is up from 34% just twelve months earlier. A 53% increase in one year.
TechCrunch reported in February 2026 that the first signs of burnout are coming not from AI skeptics or resisters, but from the people who embrace AI most enthusiastically. The early adopters. The power users. The ones who, like me, stayed up until 2 a.m. because they could not stop prompting.
This makes sense once you understand the mechanism. Resistance creates a natural limit: if you do not use the tool much, it cannot exhaust you much. Enthusiasm removes the limit. The people most engaged with AI are the ones most exposed to the dopamine loop, the Jevons paradox, and the cognitive overload the BCG data documents.
Work is bleeding into lunches, into evenings, into weekends. Not because anyone mandated it. Because the activation energy for “just one more prompt” is essentially zero. The tool is always available. The loop is always ready to start.
What You Can Actually Do
This is not the section where I hand you a governance framework. We have those. This is what I am doing, personally, after recognizing the pattern in myself. Some of it is backed by the BCG data. Some of it is personal practice. Take what is useful.
Respect the three-tool threshold. The BCG data on this is the clearest signal: productivity gains evaporate past three concurrent AI tools. I now work with one primary tool per task, occasionally two. Not because other tools would not be useful, but because the marginal return turns negative at the point where my brain starts managing tools instead of doing the work.
Set hard stops. Sanford now limits himself to three or four hours of AI interaction daily, down from 12 to 16. I am not yet that disciplined, but I have started setting alarms. Not aspirational reminders. Hard stops. When the alarm goes, the laptop closes. The loop has to be interrupted externally because it will not interrupt itself.
Name the loop when it starts. The most effective intervention for me has been recognition. When I notice the “just one more prompt” impulse, I say it out loud. “That’s the loop.” It sounds absurd. It works. Naming a compulsion creates a sliver of distance between the impulse and the action. Sometimes that sliver is enough.
Distinguish between using AI and being used by AI. Intentional use looks like: define a task, prompt for it, evaluate the output, close the tool. Compulsive use looks like: open the tool, browse for something to prompt, iterate without a clear goal, keep going because the next response might be better. I catch myself in the second mode more than I would like to admit.
Manager support matters, if you manage people. The BCG study found that manager support reduced fatigue by 15% and clear workload messaging reduced it by 12%. If you lead a team, the most impactful thing you can do is acknowledge that AI overload is real, set explicit expectations about output volume, and model healthy boundaries yourself. Nobody will close their laptop at 10 p.m. if you are sending prompts at midnight.
Let AI replace, not augment, routine tasks. The BCG study found that when AI fully replaced routine tasks (rather than augmenting them), burnout decreased by 15%. The distinction matters. Augmentation means you are still in the loop, reviewing, correcting, managing. Replacement means the task is handled and you move on. Wherever possible, automate fully rather than partially. As we documented in Cognitive Debt, the cost of staying in the loop is not just time. It is comprehension, attention, and eventually, the capacity for good judgment.
The Mirror and the System
The AI Intensity Trap looked at this problem from the outside: what organizations should measure, govern, and design for. This article is the inside view. What it feels like. Why it is so hard to stop. Why the people who love AI the most are the ones most at risk.
Both views are necessary. The organizational response (governance, measurement, boundaries) will not work if individuals do not recognize the pattern in themselves. And individual awareness will not be enough without organizational support. The BCG data shows that clearly: managerial and structural interventions reduce the problem by measurable amounts. Personal willpower alone does not.
The Jevons paradox is not a law of physics. It is a description of what tends to happen when efficiency gains meet unconstrained demand. It is not inevitable. But avoiding it requires deliberate choices at every level: personal, managerial, organizational.
The dopamine loop is not destiny. It is a pattern. Patterns can be recognized, named, and interrupted.
The first step is the simplest and the hardest: closing the laptop when you said you would.
This analysis synthesizes When Using AI Leads to “Brain Fry” (March 2026), The Dopamine Trap: The Rarely Discussed Source of Burnout (SUCCESS, 2026), AI Productivity: Workers Are Working More, Not Less (Fortune, March 2026), AI Brain Fry in the Workplace (Fortune, March 2026), The First Signs of Burnout Come From AI’s Biggest Fans (TechCrunch, February 2026), and Dopamine Loops and LLMs (AllAboutAI, 2025).
Victorino Group helps organizations design the governance layer that turns AI intensity into sustainable advantage. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.