The Design Twin Problem: Who Governs What AI Builds?

Thiago Victorino
8 min read

Alessandro Molinaro opens his recent essay with a provocation that most design discourse avoids: AI is not killing design. It is revealing what design had already become. A discipline that once stretched from service blueprints to organizational systems had contracted, over two decades, into the production of screens.

AI compresses that production layer. And compression exposes the narrowing.

What Got Compressed, and What Got Exposed

The data supports the framing. Zhou et al. (2024) found that AI-assisted tools significantly reduced the cognitive load of designers’ work. NNG’s 2026 analysis by Gibbons and Wang confirmed that the design process is not dead but compressed. Peter Skillman, speaking at WebSummit 2025, described a field where leadership must evolve beyond interface production.

None of this is new if you have been watching. We argued that design systems are now governance infrastructure when Figma opened its canvas to agents through MCP. We made the case that design without governance is decoration in response to McKinsey’s incomplete diagnosis of AI’s scaling problem. Both pieces focused on what governance means for design tools and design organizations.

Molinaro’s essay goes somewhere different. He introduces the concept of a Design Twin and immediately raises the question neither tool vendors nor design leaders are answering: who governs this thing?

The Design Twin Is Not a Digital Twin

The distinction matters and is easy to miss.

A Digital Twin is an engineering artifact. It models physical systems. Sensors provide data. The twin simulates behavior. Inputs are measurable, outputs are predictable, and the whole system runs on structured data with known refresh cycles.

A Design Twin, as Molinaro defines it, is built from qualitative residue. The hesitation before a user answers a question. The body language that contradicts what someone says in an interview. The distance between stated preferences and observed behavior. It is grounded in a specific product audience and requires continuous refreshing from real human interaction.

This is where the governance problem begins. Engineering twins have clear data pipelines, defined inputs, and measurable drift. Design Twins inherit none of that infrastructure. They depend on qualitative data that resists automation, degrades invisibly, and is difficult to audit.

Static Decay: The Terminal Risk

Molinaro introduces a term for what happens when a Design Twin goes stale: Static Decay. When the qualitative data feeding a Design Twin is six months old or more, the twin stops representing reality. It becomes a frozen snapshot dressed up as current understanding.

The danger is subtle. A stale engineering model produces obviously wrong outputs. A bridge simulation that does not account for new load patterns breaks visibly. A stale Design Twin produces plausible outputs that feel right. The recommendations sound reasonable. The personas match expectations. The journey maps look professional. Nothing triggers an alarm because qualitative staleness does not produce error messages.

This is the design equivalent of what we described in our analysis of design debt in AI products: interface decisions that shape user beliefs without anyone governing the shaping process. A Design Twin running on stale data makes confident assertions about user needs that no longer reflect actual user behavior. The AI generates empathy that was never observed. Hallucinated empathy.

Italy’s Government Services: Beautiful UI, Broken Journeys

Molinaro’s strongest example is Italy’s digital government infrastructure. The interfaces are clean. The visual design is modern. And the service journeys are terrible.

Consider the CIE digital identity system. Activating it requires a PIN and PUK code. The PIN arrives split between a physical letter and a digital activation flow. The PUK comes separately. For a digitally fluent user, this is annoying. For an elderly citizen, it is a wall.

No amount of UI polish fixes this. The problem is not at the screen level. It is at the service level, in the journey architecture that someone designed (or failed to design) before any interface work began. AI can generate beautiful screens for this broken journey in seconds. It cannot fix the journey because the journey requires the kind of systemic design thinking that the discipline abandoned when it narrowed to pixels.

This is what compression exposes. When AI handles the production layer, the remaining work is the hard work. Service design. Journey architecture. Organizational design. The things that require understanding humans in context, not generating components from a token library.

The Infinite Feedback Loop

The most alarming concept in Molinaro’s essay is what he calls the Infinite Feedback Loop. AI recruitment agencies now use AI-generated participants who are interviewed by AI agents. The research data feeding product decisions is AI talking to AI. A hall of mirrors that has divorced itself from human reality entirely.

This is not a hypothetical risk. It is happening now. And it connects directly to the Design Twin governance problem. If a Design Twin is fed by research that was conducted by AI, analyzed by AI, and synthesized by AI, at what point does it stop being a twin of anything real? The twin becomes a reflection of the AI’s training data, not of actual users.

Governance here means provenance. Where did this qualitative data come from? Who observed it? When? Was a human in the room, or was the “room” a simulated environment? These are questions that no design tool answers today. No design system enforces them. No design review process checks for them.

Chat Interfaces: A Regression in Disguise

Molinaro makes a sharp observation about chat interfaces that deserves attention. The industry celebrates conversational UI as innovation. He calls it what it is: a command-line interface wearing a conversation costume. LUI is CLI with better marketing.

The regression is real. We moved from graphical interfaces that reduced cognitive load through visual affordances back to text-based interfaces that require users to articulate precise requests in natural language. For expert users, this is fine. For the majority of people who interact with software, it is a step backward in accessibility and usability.

This connects to the governance argument in a specific way. When organizations deploy chat-based AI without questioning whether chat is the right interface, they are making a design decision by default. The tool vendor chose the interface. The organization accepted it. Nobody governed the choice. As we documented in enterprise case studies, the companies getting results from AI in design are the ones making deliberate interface decisions, not accepting defaults.

Designers as Governance Practitioners

Here is where Molinaro’s argument converges with what we have been building across our design governance series.

If AI compresses the production layer, designers who only produce are redundant. That part is obvious and has been discussed to exhaustion. The less obvious implication is what designers become if they survive the compression.

Molinaro’s answer: guardians of the data’s vitality. People who ensure the Design Twin stays fresh, stays grounded in real observation, and stays connected to actual human experience. This is a governance role. Not a creative role. Not a production role. A role defined by maintaining the integrity of the qualitative data that feeds AI-assisted design decisions.

The parallel to what we described with design systems is direct. Design systems became constraint layers that govern what AI agents can build visually. Design Twins, if they mature, would become constraint layers that govern what AI agents assume about users. One constrains the output. The other constrains the input. Both require maintenance, version control, freshness checks, and accountability structures that do not exist yet.

What Governance Actually Requires

A Design Twin governance framework needs four capabilities that the design discipline does not currently possess.

Data provenance tracking. Every qualitative input feeding the twin needs a source, a date, and a method. Was this insight from direct observation, a survey, an AI-synthesized summary, or an assumption carried forward from three product cycles ago? Without provenance, you cannot assess freshness. Without freshness assessment, Static Decay is guaranteed.

Staleness detection. Engineering systems have monitoring. When a data pipeline goes stale, alerts fire. Design Twins need equivalent infrastructure. If the last direct user observation is eight months old, someone needs to know. If the research feeding a persona was conducted in a market that has since shifted, the twin should flag it. This does not exist in any design tool on the market today.
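Provenance tracking and staleness detection can be sketched together. The following is a minimal illustration in Python, not an existing tool: the `Insight` record, the `Method` categories, and the 180-day threshold (the essay's "six months or more") are all assumptions made for the sketch.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

# Hypothetical categories for where a qualitative input came from.
class Method(Enum):
    DIRECT_OBSERVATION = "direct_observation"
    SURVEY = "survey"
    AI_SYNTHESIS = "ai_synthesis"
    CARRIED_ASSUMPTION = "carried_assumption"

@dataclass
class Insight:
    summary: str
    source: str        # who observed or produced it
    observed_on: date  # when the evidence was gathered
    method: Method

# Static Decay threshold from the essay: roughly six months.
STALE_AFTER = timedelta(days=180)

def stale_insights(insights: list[Insight], today: date) -> list[Insight]:
    """Return every insight older than the staleness threshold."""
    return [i for i in insights if today - i.observed_on > STALE_AFTER]

insights = [
    Insight("Users stall at the PUK step", "field study",
            date(2025, 3, 10), Method.DIRECT_OBSERVATION),
    Insight("Older users prefer paper letters", "synthesis run",
            date(2026, 1, 5), Method.AI_SYNTHESIS),
]
flagged = stale_insights(insights, date(2026, 3, 1))
```

The point of the sketch is that once every input carries a date and a method, staleness becomes a query rather than a judgment call, and the "alerts fire" behavior engineering pipelines take for granted becomes possible for qualitative data too.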

Hallucination boundaries. When the twin does not have data, it should say so. The most dangerous output is a confident recommendation built on no evidence. AI systems are good at generating plausible-sounding empathy. Governance means building constraints that prevent the system from asserting understanding it does not have.

Human verification loops. Not every insight needs to come from direct observation. But some percentage must. A Design Twin that is never refreshed by real human contact is a fiction engine. Governance means defining what that percentage is, measuring it, and enforcing it.
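The last two capabilities reduce to a measurable ratio: what share of the twin's evidence base involved a human in the room, and does that share clear a governed floor? A minimal sketch, assuming hypothetical method labels and an arbitrary 30% floor chosen purely for illustration:

```python
# Methods that count as "a human was in the room" (assumed labels).
HUMAN_METHODS = {"direct_observation", "interview"}

def human_grounding_ratio(methods: list[str]) -> float:
    """Share of insights traceable to direct human observation."""
    if not methods:
        return 0.0  # no evidence at all: the twin should assert nothing
    human = sum(1 for m in methods if m in HUMAN_METHODS)
    return human / len(methods)

def meets_policy(methods: list[str], minimum: float = 0.3) -> bool:
    """True if the evidence base clears the governed minimum."""
    return human_grounding_ratio(methods) >= minimum

methods = ["direct_observation", "ai_synthesis", "ai_synthesis",
           "survey", "interview"]
ratio = human_grounding_ratio(methods)
```

The empty-list case doubles as the hallucination boundary: a twin with no evidence returns a grounding ratio of zero and fails policy, rather than producing a confident recommendation built on nothing.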

The Governance Beyond Engineering Pattern

This essay extends a pattern we have been tracking. AI governance started as an engineering discipline. Model evaluation, prompt testing, output monitoring. Then it moved into design systems as governance infrastructure. Then into design’s relationship with organizational decision-making.

The Design Twin concept pushes governance further into practitioner territory. It is not about what the AI builds or how the design system constrains it. It is about what the AI believes about users and who ensures those beliefs are grounded.

This is governance at the input layer, not the output layer. And it is governance that requires skills most governance teams do not have: qualitative research methods, ethnographic observation, service design thinking. The people who know how to do this work are designers. The infrastructure they need to do it accountably does not exist.

Building that infrastructure is the work.


This analysis synthesizes What AI Exposes About Design (March 2026).

Victorino Group helps organizations govern AI across every business function, not just engineering. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →
