Shadow AI Is Not the Problem. Your Missing Governance Is.
An August 2025 survey by WalkMe, an SAP company, found that 78% of employees use AI tools their employer has not approved. Only 7.5% had received extensive AI training from their organization.
The standard response is to treat this as a compliance problem. Employees are breaking the rules. Lock it down. Block ChatGPT at the firewall. Add a clause to the acceptable use policy.
That response misidentifies the cause. Employees are not defying their organizations. They are filling a vacuum their organizations created.
The Inversion
Most coverage of shadow AI follows a predictable arc: employees use unauthorized tools, the company faces risk, and the solution is more control. The framing casts employees as offenders and IT security as enforcers.
Flip the framing. If 78% of your workforce has found tools that make them more productive, and your organization offered no sanctioned alternative, the governance failure belongs to leadership. Not to the employee who discovered that a chatbot can summarize a 40-page contract in seconds.
CybSafe and the National Cybersecurity Alliance surveyed over 7,000 workers in 2024 and found that 38% had shared sensitive company information with AI tools without permission. That number is alarming. It should also be expected. When people have access to powerful tools and no guidance on how to use them, they use the tools and skip the guidance.
The distinction matters because the solution changes depending on where you locate the cause. If the cause is employee misbehavior, the solution is enforcement. If the cause is institutional absence, the solution is building what was never built.
Beyond Data Leakage
Shadow IT was about unauthorized software. Somebody installed Dropbox instead of using the approved file share. The risk was data in the wrong place.
Shadow AI introduces a different category of risk that most organizations have not grasped. When an employee pastes a client proposal into ChatGPT for editing, the data leakage risk is real but familiar. Security teams know how to think about data leaving the perimeter.
The unfamiliar risk is epistemic. When an employee uses an AI to draft a financial analysis, summarize legal precedent, or evaluate a vendor, the AI’s reasoning becomes part of the organization’s decision-making. Nobody reviewed the model’s assumptions. Nobody validated the training data. Nobody asked whether the output reflects the organization’s risk tolerance or somebody else’s.
As we explored in The Trust Gap Is the Governance Gap, 84% of developers use AI tools while only 33% trust the output. Shadow AI is the extreme version of this paradox: not just usage without trust, but usage without any organizational oversight at all. The trust question never gets asked because the usage is invisible.
This is what makes shadow AI categorically different from shadow IT. Unauthorized software stores data in the wrong place. Unauthorized AI introduces unauthorized reasoning into institutional decisions. One is a data problem. The other is an epistemology problem.
The Dual-Cost Trap
Organizations without AI governance face an uncomfortable choice. Both options are expensive.
Path one: let shadow AI grow unchecked. IBM’s 2025 Cost of a Data Breach Report found that breaches involving shadow AI cost USD 670,000 more than traditional breaches. The same report found that 63% of breached organizations had no AI governance policy. These numbers are connected. Ungoverned AI creates attack surface that traditional security controls were not designed to detect.
A caveat: IBM sells watsonx.governance, so it benefits from this narrative. The methodology (19 years of reports, hundreds of real breaches) is sound, but the commercial interest is worth noting. The numbers still tell a structural story: organizations without governance spend more when things go wrong.
Path two: block AI entirely. AIMakers estimated in 2025 that the cost of not adopting AI runs between USD 100,000 and USD 500,000 per year in hidden productivity losses. Employees leave. McKinsey’s 2025 research found that 42% of companies abandoned most AI projects, up from 17% in 2024. The blocking path does not eliminate AI usage; it just drives it further underground where it becomes even less visible.
In Governance Gates Enterprise AI, we examined how enterprise AI adoption stalls not because models lack capability but because organizations lack permission frameworks. Shadow AI is the demand-side mirror of that supply-side problem. Employees want to use AI. The organization has not created a governed path to do so. The employees route around the obstruction.
Both paths converge on the same conclusion. The cost is not AI. The cost is the absence of governance.
The Regulatory Multiplier
The EU AI Act introduced fines of up to EUR 35 million or 7% of global annual revenue, whichever is higher. That exceeds GDPR penalties. For a company with EUR 1 billion in revenue, the maximum fine is EUR 70 million.
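The "whichever is higher" clause is easy to miss, so the arithmetic is worth making explicit. A minimal sketch, for illustration only:

```python
def max_ai_act_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound for the most serious EU AI Act violations:
    EUR 35 million or 7% of global annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# A company with EUR 1 billion in revenue:
print(f"EUR {max_ai_act_fine(1e9):,.0f}")  # EUR 70,000,000
```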
When shadow AI triggers a regulatory violation, the organization cannot claim ignorance. “We didn’t know our employees were using AI” is not a defense. It is an admission of negligence. The regulator’s question will be simple: what governance did you have in place?
Gartner projects that AI governance spending will reach USD 492 million in 2026 and exceed USD 1 billion by 2030. That spending is not optional investment. It is the cost of operating in a regulatory environment where ungoverned AI usage creates personal liability for executives and board members.
The same Gartner research (October 2025) projects that 40% of enterprises will experience shadow AI incidents by 2030. For organizations that have built governance, these incidents are containable. For organizations that have not, each incident carries the full weight of regulatory exposure.
What Governance Actually Looks Like
The organizations getting this right share three patterns.
Guardrails instead of gates. Shopify and Klarna mandated AI usage across their organizations with sanctioned tools and clear guidelines. They did not ask for permission requests. They built environments where AI use was expected, monitored, and bounded. The difference between “you may not use AI” and “you must use AI through these channels” is the difference between a policy that creates shadow AI and one that eliminates it.
Embedded principles, not approval workflows. Policy-as-code (automated enforcement of data handling rules, model access controls, and usage logging) removes the friction that pushes employees toward unauthorized tools. If the sanctioned tool is harder to use than the unsanctioned one, employees will choose the easier path. Every time.
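To make "policy-as-code" concrete, here is a minimal sketch of an enforcement check that could sit in front of a sanctioned AI tool. Everything in it (the model names, the regex patterns, the rule set) is an illustrative placeholder; a real deployment would use a proper DLP classifier and a policy engine, not two regexes.

```python
import logging
import re
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-policy")

# Hypothetical sanctioned models; in practice this comes from a policy store.
APPROVED_MODELS = {"internal-gpt", "contract-summarizer"}

# Toy patterns for sensitive data; a real system would use a DLP classifier.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

@dataclass
class PolicyDecision:
    allowed: bool
    reason: str

def check_request(user: str, model: str, prompt: str) -> PolicyDecision:
    """Apply data-handling and model-access rules before a prompt leaves
    the perimeter, and log every decision for later audit."""
    if model not in APPROVED_MODELS:
        decision = PolicyDecision(False, f"model '{model}' is not sanctioned")
    elif hits := [name for name, p in SENSITIVE_PATTERNS.items() if p.search(prompt)]:
        decision = PolicyDecision(False, f"possible sensitive data: {hits}")
    else:
        decision = PolicyDecision(True, "ok")
    log.info("user=%s model=%s allowed=%s reason=%s",
             user, model, decision.allowed, decision.reason)
    return decision
```

The design point is that the check runs automatically inside the sanctioned path, so compliance costs the employee nothing. The moment it costs more than pasting into a consumer chatbot, the shadow returns.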
Centralized visibility with decentralized execution. AI sandboxes where employees can experiment with approved models using non-sensitive data. Centralized AI gateways that log which models are queried, with what data, and for what purpose. The goal is not surveillance. It is the organizational equivalent of having a fire extinguisher: you hope you never need the logs, but when you do, they exist.
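And a matching sketch of the gateway's audit trail, assuming a call_model function that forwards to whichever provider the organization has sanctioned (both names are hypothetical). Hashing the prompt keeps the log useful for audit without turning it into a second copy of the sensitive data:

```python
import hashlib
import json
import time
from typing import Callable

def with_audit_log(call_model: Callable[[str, str], str],
                   log_path: str = "ai_gateway_audit.jsonl"):
    """Wrap a model call so every query leaves a structured record:
    who asked, which model, a hash of the prompt, and the stated purpose."""
    def gateway(user: str, model: str, prompt: str, purpose: str) -> str:
        record = {
            "ts": time.time(),
            "user": user,
            "model": model,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "purpose": purpose,
        }
        # Append-only log: the fire extinguisher you hope never to need.
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return call_model(model, prompt)
    return gateway
```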
Organizations with mature governance frameworks keep AI systems in production three times longer than those without, according to combined Gartner and ISACA research from 2025. Governance does not slow AI adoption. It makes adoption stick.
The Uncomfortable Truth
As we argued in AI Governance IS Cybersecurity, treating governance as separate from security operations creates structural vulnerability. Shadow AI is the most common expression of that vulnerability. Every unsanctioned AI interaction is an unmonitored endpoint, an unaudited data flow, an unreviewed decision input.
The uncomfortable truth is that shadow AI is not an employee problem. It is a leadership problem. The 78% of employees using unapproved tools are doing what rational actors do when institutions fail to provide structure: they self-organize.
The question for executives is not “how do we stop shadow AI?” That question has already been answered by every organization that tried to block consumer AI and failed. The question is: “What governed alternative are we offering, and why haven’t we built it yet?”
Every month without an answer makes the next breach more expensive, the regulatory exposure wider, and the competitive disadvantage deeper. The organizations that built governance first are already compounding the benefits. The rest are compounding the risk.
This analysis synthesizes the WalkMe/SAP Shadow AI Survey (August 2025), IBM Cost of a Data Breach Report 2025 (July 2025), CybSafe/National Cybersecurity Alliance Oh Behave Report (October 2024), Gartner AI Governance Spending Forecast (February 2026), McKinsey State of AI Report (2025), and EU AI Act (2024).
Victorino Group helps organizations build AI governance that enables innovation without losing control. Let’s talk.