The AI Control Problem

When Interfaces Become Disposable: Where Does Governance Live When Users Skip Your UI?

Thiago Victorino

Chris Loy, a product designer and developer, recently described building a custom sleep-tracking visualization for his newborn in roughly two hours. He used an AI coding assistant and Fitbit’s public API. The official Fitbit app could not show him what he needed. So he built something that could. Then, when the need passed, he threw it away.

This is the premise of disposable software: narrow-purpose tools, built fast by AI, used briefly, discarded without ceremony. Loy frames this as a product strategy insight --- interfaces are cheap, capabilities are durable, monetize the API layer. He is right about the business model. But he is missing the bigger story.

When your users skip your UI and go straight to your API via an AI agent, where does governance happen?

The interface was your last governance surface. Now it is gone.

The Governance Surface You Did Not Know You Had

Most organizations do not think of their UI as a governance mechanism. It is “just the interface.” But consider what lives there.

Consent flows. GDPR cookie banners. HIPAA authorization screens. KYC identity verification steps. Terms of service acceptance. Age verification gates. These are not decoration. They are compliance checkpoints that regulators expect to see.

Rate limiting UX. Progress bars, loading states, “please wait” messages. These are the visible manifestation of controls that prevent abuse. They teach users the system has boundaries.

Audit trails. “Are you sure you want to delete this?” dialogs. Confirmation screens. Multi-step approval workflows. These create records of human intent. They prove a person made a deliberate choice.

Access scoping. Navigation menus that show only what you are authorized to see. Disabled buttons for actions you cannot take. Form fields that enforce business rules. These are authorization controls wearing a visual costume.

When a user interacts with your product through the interface, they pass through all of these checkpoints. When an AI agent calls your API directly, it bypasses every single one.

The Three-Layer Governance Problem

Loy’s product architecture model is useful: capabilities at the bottom, service/API layer in the middle, interface at the top. What he does not say explicitly is where governance enforcement typically lives in this stack.

It lives at the top. In the interface layer. The layer he calls disposable.

This is not an accident. The interface is where humans interact with systems, so it is where organizations built the controls designed for human interaction. Consent requires a screen on which to be displayed. Confirmation requires a dialog to click. Audit trails require a timestamp of human action. The interface was the natural home for governance because governance was designed for human users.

AI agents are not human users. They do not read consent banners. They do not pause at confirmation dialogs. They do not create the behavioral signals that audit systems expect. They call the API, get the response, and move on. The governance layer they skip was never optional. It was just in the wrong place.

MCP Makes This Structural, Not Anecdotal

Loy mentions the Model Context Protocol in passing, but MCP is what transforms this from a product design observation into a governance crisis.

Before MCP, users bypassing your interface required technical sophistication. Someone had to understand your API, authenticate properly, and write code to interact with it. The population of users capable of this was small and manageable.

MCP standardizes the bypass. Anthropic introduced MCP in November 2024 as an open protocol for AI agents to connect to external services. By February 2026, it had been adopted by OpenAI, Google DeepMind, Microsoft, and AWS. It was donated to the Linux Foundation’s Agentic AI Foundation in December 2025. The ecosystem is real and growing fast.

An MCP server for Fitbit already exists on GitHub. Anyone with an AI agent can connect to Fitbit’s services without ever opening the Fitbit app. The same pattern is replicating across thousands of services.

According to Gartner, more than 30% of the increase in API demand by 2026 will come from AI tools using large language models. This is not a fringe use case. It is a structural shift in how products are consumed.

And here is the governance problem: MCP defines how clients and servers exchange resources and tools, but it does not define who gets to act, when they can act, or under what conditions. As one enterprise governance analysis puts it, MCP is in a similar place for agents as REST was for APIs --- the interface is helpful, but organizations still need a layer for identity, policy, visibility, and security.
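
That missing layer is concrete enough to sketch. MCP itself carries none of this logic, so the wrapper below is purely illustrative: a hypothetical policy gate that decides which identity may invoke which tool and records every decision. The names, scopes, and rules are invented for the example, not part of the protocol.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Caller:
    identity: str                       # who the agent is acting for
    scopes: set = field(default_factory=set)

class PolicyGate:
    """Wraps tool invocations with the checks MCP leaves undefined:
    who may act, on what, and an audit record of each decision."""

    def __init__(self, policy: dict):
        self.policy = policy            # tool name -> required scope
        self.audit_log = []

    def invoke(self, caller: Caller, tool: str, handler, **args):
        required = self.policy.get(tool)
        allowed = required is not None and required in caller.scopes
        self.audit_log.append({         # visibility: every attempt is logged
            "ts": datetime.now(timezone.utc).isoformat(),
            "identity": caller.identity,
            "tool": tool,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{caller.identity} may not call {tool}")
        return handler(**args)

# Usage: an agent holding a read-only scope cannot reach a write tool.
gate = PolicyGate(policy={"query_analytics": "analytics:read",
                          "delete_record": "records:write"})
analyst = Caller("agent-for:pm@example.com", {"analytics:read"})
result = gate.invoke(analyst, "query_analytics",
                     lambda **a: {"rows": 42}, table="events")
```

The point of the sketch is the shape, not the details: identity travels with every call, policy is evaluated before the tool runs, and the audit trail exists whether or not a human ever saw a screen.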

Shadow MCP: Shadow IT for the Agent Era

There is an emerging pattern that should concern every CTO: shadow MCP.

Just as employees once installed unauthorized SaaS tools (shadow IT), they are now deploying MCP servers that connect AI agents to enterprise systems without oversight. A developer wants their coding assistant to access the internal wiki, so they spin up an MCP server. A product manager wants Claude to query the analytics database, so they configure an MCP connection. Each individual action seems reasonable. The aggregate result is an ungoverned mesh of AI-to-system connections that no one has visibility into.

The January 2026 BodySnatcher vulnerability (CVE-2025-12420) in ServiceNow’s Virtual Agent API illustrated the stakes. An unauthenticated attacker could impersonate any user --- including administrators --- using only an email address, bypassing MFA, SSO, and all other identity controls. This was not a theoretical risk. It was a production vulnerability in a major enterprise platform’s AI agent integration.

When your governance surface was the UI, shadow IT meant someone used an unauthorized app. When your governance surface is the API, shadow MCP means an AI agent has unsanctioned access to your production systems.

Contract-First Design: The New Governance Foundation

If interfaces are disposable and governance cannot live there, where does it go?

The answer is contract-first design at the API layer.

Architecture for disposable systems, as described by practitioners building in this paradigm, requires strict schemas that serve as the boundary contract. The interface can be anything --- a custom app, an AI agent, a voice assistant. But the API contract defines what is permitted, what data flows where, and what conditions must be met.

This means:

  • Authentication is not optional. Every API call carries identity, whether from a human user or an AI agent acting on their behalf.
  • Authorization is granular. The API enforces what each caller can do, not the UI. Disabled buttons become 403 responses.
  • Consent is programmatic. If a user must agree to terms before accessing data, the API enforces this, not a checkbox on a form.
  • Audit is automatic. Every API call is logged with identity, timestamp, action, and outcome. No confirmation dialog needed.
  • Rate limiting is architectural. Throttling happens at the API gateway, not through loading spinners in the UI.
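
The five points above can be compressed into one request path. The sketch below is a toy, with invented tokens, scopes, and limits, but it shows all five controls enforced at the API layer with no UI anywhere in the loop:

```python
import time

# Hypothetical user store: token -> identity, granted scopes, consent state.
USERS = {"tok-abc": {"id": "ana", "scopes": {"reports:read"}, "consented": True}}
AUDIT = []                       # identity, timestamp, action, outcome
RATE = {}                        # identity -> (window_start, call_count)
LIMIT, WINDOW = 5, 60.0          # 5 calls per rolling 60-second window

def handle(token, action, scope_needed):
    user = USERS.get(token)
    if user is None:
        status = 401             # authentication is not optional
    elif not user["consented"]:
        status = 451             # consent is programmatic, not a checkbox
    elif scope_needed not in user["scopes"]:
        status = 403             # the disabled button becomes a 403
    else:
        start, count = RATE.get(user["id"], (time.monotonic(), 0))
        if time.monotonic() - start > WINDOW:
            start, count = time.monotonic(), 0
        if count >= LIMIT:
            status = 429         # throttling at the gateway, not a spinner
        else:
            RATE[user["id"]] = (start, count + 1)
            status = 200
    AUDIT.append({               # audit is automatic, on every call
        "identity": user["id"] if user else "anonymous",
        "ts": time.time(), "action": action, "status": status})
    return status

print(handle("tok-abc", "read_report", "reports:read"))     # 200
print(handle("tok-abc", "delete_report", "reports:write"))  # 403
```

An AI agent and a human clicking through the UI hit exactly the same checks here, which is the property the interface layer can no longer guarantee.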

This is not a minor refactoring. It is a fundamental shift in where governance logic lives. Most organizations have years of governance embedded in their interface layer that has never been replicated at the API level. The migration is neither cheap nor fast.

The Paradox of Openness

Loy makes an interesting observation about enshittification --- the Cory Doctorow term for platforms degrading service to extract value. He suggests AI agents might reverse this trend by enabling users to build their own interfaces, forcing platforms to keep APIs open.

This is optimistic. The counterevidence is strong.

When Reddit saw AI systems scraping its content via API, it raised API prices dramatically. When Twitter/X realized third-party clients were bypassing its ad-supported interface, it restricted API access and started charging. Platforms that closed their APIs did so for economic reasons. AI agents do not change those economics. They may actually accelerate the closure.

The paradox: the more valuable API access becomes (because AI agents make it more useful), the more platforms will want to control, monetize, or restrict it. Openness is not the natural equilibrium. Governed openness --- access that is documented, authenticated, rate-limited, and monetized --- is.

This is another governance surface. Not in the UI. In the API terms of service, the rate limits, the pricing tiers, and the authentication requirements. Governance follows the access point. When the access point moves from UI to API, governance moves with it.

What This Means For Your Organization

Audit your governance surface. Map every governance control in your product. How many live in the interface layer? How many would survive if users accessed your service entirely through API calls? The gap between those two numbers is your governance debt.
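
As a back-of-the-envelope exercise, that audit can start as a simple inventory: tag each control with the layer that enforces it and count what survives API-only access. The controls below are invented examples.

```python
# Toy inventory: control -> layer that actually enforces it.
controls = {
    "cookie consent banner":       "ui",
    "delete confirmation dialog":  "ui",
    "age verification gate":       "ui",
    "api token authentication":    "api",
    "gateway rate limit":          "api",
}

survives = [name for name, layer in controls.items() if layer == "api"]
debt = len(controls) - len(survives)
print(f"{debt} of {len(controls)} controls vanish when the UI is bypassed")
```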

Assume your interface will be bypassed. Not by bad actors. By your own users, using AI agents to get work done faster. If your compliance model depends on users seeing a screen, it is already failing for the users who do not.

Build governance into the API, not on top of it. Consent, authorization, audit, rate limiting --- these are API-layer concerns now. If your API was designed as a developer convenience and your UI was designed as the primary governance surface, you have the architecture backwards for an agent-driven world.

Watch for shadow MCP. Your employees are already connecting AI agents to your systems. You may not know it. Establish an MCP governance framework before the ungoverned connections become the norm.

Treat API contracts as governance artifacts. Your OpenAPI specification is not just documentation. It is your governance boundary. Version it, audit it, enforce it. When the interface is disposable, the contract is the only stable governance surface you have.
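
One way to operationalize this is to treat the spec as an allowlist: any operation or scope the contract does not declare is denied by default. The pared-down OpenAPI-style fragment below uses an invented path and scope purely for illustration.

```python
# A minimal contract fragment treated as the governance boundary.
SPEC = {
    "paths": {
        "/reports": {
            "get": {"security": [{"oauth": ["reports:read"]}]},
        },
    },
}

def permitted(path, method, granted_scopes):
    """Allow an operation only if the contract declares it AND every
    scope the contract requires has actually been granted."""
    op = SPEC["paths"].get(path, {}).get(method)
    if op is None:
        return False                 # undeclared operation: denied
    required = {scope
                for block in op.get("security", [])
                for scopes in block.values()
                for scope in scopes}
    return required <= set(granted_scopes)
```

Enforcing the contract this way also makes drift visible: if a team ships an endpoint the spec does not declare, callers are blocked until the governance artifact is updated and reviewed.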

The disposable interface is a product design insight. The governance gap it reveals is an organizational crisis. The organizations that recognize the difference will be the ones that survive the transition.


Sources

  • Chris Loy. “AI Makes Interfaces Disposable.” chrisloy.dev, February 14, 2026.
  • Tray.ai. “Enterprise Governance and Security for the Model Context Protocol (MCP).” 2026.
  • The Hacker News. “AI Agents Are Becoming Authorization Bypass Paths.” January 2026.
  • Curity. “API Security Trends 2026: AI, MCP, Authorization and More.” 2026.
  • IAPP. “Vibe Coding: Don’t Kill the Vibe, Govern It.” 2026.
  • Gartner. API demand forecast, 2025–2026.
  • Tuan-Anh Tran. “Architecture for Disposable Systems.” January 15, 2026.

Victorino Group helps organizations build governance into the API layer before their interfaces become optional. If your compliance model depends on users seeing a screen, we should talk. Reach out at contact@victorinollc.com or visit www.victorinollc.com.

If this resonates, let's talk

We help companies implement AI without losing control.

Schedule a Conversation