The AI Control Problem

Your AI System Needs an Enforcer Too

Thiago Victorino

Laura Klein published an essay at Nielsen Norman Group earlier this month with a title that sounds obvious until you sit with it: “Your Design System Needs an Enforcer.” Her argument is that design systems --- no matter how well-architected --- fail without someone actively making people use them. A perfect system that nobody follows is worthless.

She is describing AI governance. She just does not know it yet.

Four Ways Systems Fall Apart

Klein identifies four failure modes that cause design systems to decay. Each one maps directly to a failure mode in enterprise AI governance.

The expertise gap. Design system maintainers see patterns across the entire organization. Individual product teams see their own feature. The same asymmetry exists in AI: a centralized governance team understands model risk, data lineage, and regulatory exposure across the enterprise. Individual departments see their own use case. When each team picks its own model, writes its own prompts, and defines its own acceptable-use boundaries, the result is not innovation. It is fragmentation that nobody can audit.

Compounding deviations. Klein uses a carousel example. One team customizes the standard carousel slightly for their feature. Another team copies that customized version and adjusts it further. A third team forks the fork. Within months, the organization has dozens of carousel variants, each subtly different, none maintained by the design system team.

Replace “carousel” with “AI workflow.” One team builds a customer-service chatbot with its own prompt template. Another team copies that template for sales enablement but tweaks the tone. A third team adapts the sales version for internal HR queries. Within months, the organization has dozens of AI workflows, each with different safety boundaries, different data access patterns, and different quality thresholds. Nobody has a complete inventory. This is not a hypothetical. Sixty-three percent of organizations cannot enforce AI purpose limitations across their systems, according to Kiteworks’s 2026 data governance report.

Local optimization. Product teams optimize for their own metrics. The design system optimizes for product-wide consistency, accessibility, and maintainability. These goals conflict. The team that gets a 3% conversion lift from a non-standard button does not care that they just created a maintenance liability across the product surface.

The AI parallel is identical. A marketing team that gets faster content output from an ungoverned model does not care about the enterprise’s data classification policies. A sales team that connects a model to the CRM without security review does not care about the CISO’s access control framework. Each local optimization creates enterprise risk. Gartner predicts that 40% of firms will face shadow AI security incidents. The mechanism is the same as Klein’s design system drift --- local incentives pulling against systemic coherence.

The support function. Klein makes an underappreciated point: enforcement is not just policing. It gives designers cover. When a product manager pressures a designer to “just make an exception this once,” the enforcer provides institutional backing to say no. Without that backing, designers capitulate, and the system erodes.

The same dynamic plays out in AI governance. When a VP wants to deploy a model without the security review because “we need to move fast,” who has the authority to say no? Without an enforcer --- someone with executive backing and the mandate to hold the line --- the governance framework exists on paper and nowhere else.

Where Klein Stops Short

Klein’s diagnosis is sharp. Her prescription is incomplete.

She argues for a human enforcer: someone with executive authority, engineering alliances, regular review sessions, and a contribution process that includes changes benefiting three or more teams. This is reasonable advice for organizations with five to twenty product teams. It does not scale past that.

More importantly, it treats enforcement as a purely human function. This is the same mistake organizations make with AI governance --- relying entirely on policy documents and review committees when the problem demands structural solutions.

Klein omits three entire categories of governance that the design system community has already proven effective.

Automated enforcement. Design system linting tools catch violations at build time. CI/CD gates reject components that do not conform. These are not substitutes for human judgment, but they handle the 80% of violations that are unambiguous. Brad Frost, who literally wrote the book on atomic design, describes governance as a process that starts with conversation and trust-building --- but the mature end of that process is automation that makes the right thing the easy thing.
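As a sketch of what build-time enforcement can look like (the component names, package name, and lint logic here are hypothetical, not from any real design system tooling), a linter can flag files that re-implement a governed component instead of importing the canonical one:

```python
import re

# Hypothetical governed components that teams must import from the
# design system package rather than re-implement locally.
GOVERNED_COMPONENTS = {"Carousel", "Button", "Modal"}
APPROVED_IMPORT = re.compile(r"""from\s+['"]@acme/design-system['"]""")

def lint_source(filename: str, source: str) -> list[str]:
    """Return violations: governed components defined locally
    without importing the canonical version."""
    violations = []
    for component in GOVERNED_COMPONENTS:
        # `function Carousel(`, `const Carousel =`, or `class Carousel`
        # signals a local fork of the governed component.
        if re.search(rf"\b(function|const|class)\s+{component}\b", source):
            if not APPROVED_IMPORT.search(source):
                violations.append(
                    f"{filename}: defines '{component}' locally; "
                    "import it from the design system instead"
                )
    return violations
```

A CI gate then fails the build whenever the list is non-empty, so the unambiguous 80% of violations never reach a human reviewer.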

Federated governance. Nathan Curtis at EightShapes documented the federated model years ago: a centralized core team maintains the foundational system, while distributed contributors from product teams extend it within defined boundaries. This scales past the twenty-team ceiling that a single enforcer hits. It distributes ownership without distributing authority.

Incentive design. Spotify’s “golden path” approach does not force teams to use standard tools. It makes the standard tools so much easier to use that deviation becomes irrational. The governance is embedded in the developer experience. Teams comply not because someone is watching but because non-compliance is more work.

Klein’s enforcer model covers one leg of a three-legged stool. The other two --- automation and incentives --- are where governance scales.

The Triad That AI Governance Needs

If you accept that design system governance and AI governance share the same structural failure modes, then the solutions should share the same structure too.

Enforcement is the human element. Someone with authority reviews AI deployments, maintains the policy framework, and has the executive backing to block non-compliant implementations. This is Klein’s enforcer, translated to AI. It handles the judgment calls: Is this use case appropriate? Does this model’s risk profile match our tolerance? Is this team ready to operate this system in production?

Automation is the structural element. Policy-as-code frameworks that evaluate AI systems against declared constraints at deployment time. Trust scoring that adjusts autonomy levels based on demonstrated compliance. CI/CD gates that reject model deployments missing required documentation, testing, or approval signatures. A recent Governance-as-a-Service framework proposed on arXiv (2508.18765v2) formalizes this pattern: declarative policies, continuous trust assessment, graduated enforcement that tightens or loosens based on actual behavior.

This is not speculative. We described a working version of this pattern in a previous essay on this site: encoding governance rules into CLAUDE.md files so that AI agents operate within defined constraints automatically. The rules are version-controlled, auditable, and enforced at the point of execution --- not in a policy PDF that nobody reads.
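As an illustration of the pattern (the rules below are invented for this example, not quoted from the earlier essay), such a file might read:

```markdown
# CLAUDE.md — governance rules (illustrative)

## Data handling
- Never read files under `secrets/` or `customer-data/`.
- Treat customer identifiers as confidential; do not echo them in output.

## Change control
- Propose schema migrations as a plan first; never apply them directly.
- Generated code must pass the existing lint and test gates before commit.
```

Because the file lives in the repository, every rule change has an author, a review, and a history.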

Incentives are the adoption element. Make governed AI the path of least resistance. Pre-approved model configurations. Vetted prompt libraries. Standardized evaluation frameworks. Internal platforms where the compliant option is also the fastest option. When governance adds friction, teams work around it. When governance removes friction, teams adopt it.
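One way to make the compliant option the fastest option (the helper, use-case names, and configuration values are hypothetical) is a thin internal wrapper that exposes only pre-approved configurations:

```python
# Hypothetical internal "golden path" helper: teams get a working,
# governed configuration in one call, so the vetted route is also
# the easiest route.
APPROVED_CONFIGS = {
    "support-chat": {"model": "gpt-4o-mini", "temperature": 0.2,
                     "max_tokens": 512, "logging": "full"},
    "doc-summarize": {"model": "claude-haiku", "temperature": 0.0,
                      "max_tokens": 1024, "logging": "full"},
}

def governed_config(use_case: str) -> dict:
    """Return a pre-approved configuration, or fail loudly with the
    list of vetted use cases so the paved road is self-documenting."""
    try:
        return dict(APPROVED_CONFIGS[use_case])
    except KeyError:
        raise ValueError(
            f"'{use_case}' has no approved configuration; "
            f"choose one of {sorted(APPROVED_CONFIGS)} or request a review"
        ) from None
```

The error path matters as much as the happy path: a team that deviates is immediately told what the sanctioned alternatives are, which is the incentive structure doing the enforcer's job.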

The triad works because each element covers the others’ weaknesses. Enforcement alone creates bottlenecks and resentment. Automation alone misses context and edge cases. Incentives alone rely on goodwill that erodes under deadline pressure. Together, they form a governance architecture that is both rigorous and scalable.

The ROI Nobody Measures

Here is a number that should make executives pay attention. Industry estimates from design system practitioners at Smashing Magazine and Knapsack put design system ROI at 34-50% efficiency gains for design teams. Not because the system itself is magic. Because consistency eliminates rework. When every team uses the same components, you stop rebuilding the same button fourteen times. When every component meets accessibility standards by default, you stop retrofitting compliance into finished products.

AI governance has the same ROI structure, but almost nobody measures it. The cost of ungoverned AI is invisible until it becomes catastrophic: the shadow AI deployment that leaks customer data, the model that makes decisions the legal team cannot explain, the accumulated technical debt from dozens of uncoordinated AI experiments that each solve the same problem differently.

Only about 25% of organizations have comprehensive AI security governance in place, according to the Cloud Security Alliance’s 2025 survey. The other 75% are accumulating the same kind of compounding deviation Klein describes in design systems --- except the deviations involve data access, decision authority, and regulatory exposure rather than carousel variants.

Governance Is Infrastructure

Klein closes her essay with a line worth quoting directly: a design system without enforcement is “a library that collects dust.” She is right. And the insight generalizes.

A governance framework without enforcement is a policy that collects dust. An AI acceptable-use policy without automated gates is a PDF that collects dust. An enterprise AI strategy without incentive-aligned tooling is a slide deck that collects dust.

The pattern is always the same. The artifact is not the governance. The mechanism that makes people follow the artifact is the governance. Klein sees this clearly for design systems. The same structural truth applies to every system that depends on consistent behavior across distributed teams.

The organizations that will operate AI effectively at scale are not the ones with the best policies. They are the ones that build the enforcement, automation, and incentive structures that make those policies real. Governance is not bureaucracy. It is the infrastructure that lets systems scale without falling apart.


Sources

  • Laura Klein. “Your Design System Needs an Enforcer.” Nielsen Norman Group, February 6, 2026.
  • Kiteworks. “2026 Data Governance Report: AI Purpose Limitation Enforcement.” kiteworks.com, 2026.
  • Gartner. “Predicts 2026: AI Security and Risk Management.” gartner.com, 2025.
  • Cloud Security Alliance. “State of AI Security 2025.” cloudsecurityalliance.org, 2025.
  • Brad Frost. “Atomic Design: Governance.” bradfrost.com, 2024.
  • Nathan Curtis. “Federated Design System Governance.” eightshapes.com, 2023.
  • GaaS Framework. “Governance-as-a-Service for AI Systems.” arXiv:2508.18765v2, 2025.
  • Smashing Magazine / Knapsack. “Design System ROI: Industry Estimates.” Various, 2024-2025.

Victorino Group helps organizations build the governance triad --- enforcement, automation, and incentives --- that AI systems require to operate at scale. If your AI governance exists only as a policy document, that is the gap to close.

If this resonates, let's talk

We help companies implement AI without losing control.

Schedule a Conversation