Design Debt in AI Products: When Interface Decisions Shape What Users Believe
Forrester’s 2025 survey found that 75% of technology decision-makers expect technical debt to reach moderate or high severity by 2026, driven largely by AI complexity. There is no equivalent survey for design debt. Nobody tracks it. Nobody budgets for it. Nobody reports it to the board.
That absence is the story.
Arin Bhowmick, SAP’s Chief Design Officer (previously at IBM and Oracle), published an essay in March 2026 arguing that design debt has become as dangerous as technical debt. His argument carries weight because of where he sits. This is not a designer complaining about pixel consistency. This is a C-suite executive at one of the world’s largest enterprise software companies saying: we have a structural problem, and we are not measuring it.
What Design Debt Actually Is
Technical debt has a clean definition. You took a shortcut in code. You know the shortcut exists. You can estimate the cost to fix it. Design debt is murkier. It accumulates when teams make interface decisions under pressure, without principles, or without ownership. Inconsistent patterns. Conflicting interaction models. Navigation that made sense for three features but collapses at thirty.
We have written about cognitive debt in engineering, the invisible cost that accumulates when teams lose understanding of their own AI-generated systems. Design has its own version, and it is less visible. Cognitive debt hides in the heads of engineers. Design debt hides in the experience of users who cannot articulate why the product feels wrong.
Bhowmick’s framing is precise: design debt “doesn’t announce itself, but sits there, quietly, underneath every product decision you make.” Technical debt announces itself. Builds break. Tests fail. Performance degrades. Design debt is quieter. Users leave. Adoption stalls. Support tickets multiply. The metrics move in the wrong direction, but nobody connects them back to the accumulated weight of a thousand small interface compromises.
Why AI Products Are Different
In a traditional software product, design debt causes friction. In an AI product, design debt shapes belief.
This is Bhowmick’s sharpest observation. When an AI product presents a recommendation, the interface determines whether the user treats it as a suggestion or a directive. When an AI system surfaces a confidence score, the visual treatment determines whether the user understands uncertainty or ignores it. When an AI tool automates a decision, the interaction pattern determines whether the user maintains agency or surrenders it.
“In an AI product, [design decisions] shape what people believe,” Bhowmick writes. A poorly designed confidence indicator does not just create bad UX. It creates misplaced trust. A recommendation shown without context does not just frustrate power users. It teaches everyone that AI outputs arrive without explanation and should be accepted without question.
This is governance territory. We have argued that design without governance is decoration. The reverse is also true: ungoverned design in AI products is active misinformation about what the system knows and does not know.
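To make the belief-shaping point concrete, consider the difference between displaying a raw confidence number and framing it. The sketch below is a minimal illustration, not anything Bhowmick or SAP describes; the thresholds, type names, and copy are all assumptions.

```typescript
// A minimal sketch, assuming a calibrated confidence score already
// exists upstream. All names and thresholds here are hypothetical.

type ConfidenceBand = "low" | "moderate" | "high";

interface Recommendation {
  text: string;
  confidence: number; // assumed calibrated probability in [0, 1]
}

function bandFor(confidence: number): ConfidenceBand {
  if (confidence < 0.5) return "low";
  if (confidence < 0.8) return "moderate";
  return "high";
}

// The copy frames uncertainty explicitly, so the user reads a
// suggestion to verify, not a directive to obey.
function renderRecommendation(rec: Recommendation): string {
  const framing: Record<ConfidenceBand, string> = {
    low: "The model is uncertain here. Treat this as one option to verify:",
    moderate: "The model suggests, with moderate confidence:",
    high: "The model is fairly confident in this recommendation:",
  };
  return `${framing[bandFor(rec.confidence)]} ${rec.text}`;
}

console.log(renderRecommendation({ text: "Reorder SKU-1042.", confidence: 0.62 }));
// -> "The model suggests, with moderate confidence: Reorder SKU-1042."
```

The rendering logic is trivial. The governance decision, which bands exist and what each one licenses the user to believe, is not.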
The Ownerless Syndrome
Bhowmick identifies a pattern he calls the “ownerless syndrome.” In organizations building AI products across many teams, design debt accumulates fastest in the spaces between ownership boundaries. Team A builds the input interface. Team B builds the processing logic. Team C builds the output display. Nobody owns the end-to-end experience.
Each team optimizes locally. The input team creates a clean form. The processing team builds reliable models. The output team designs clear visualizations. But the seams between these components are where design debt collects. The form promises precision the model cannot deliver. The visualization implies certainty the processing never claimed. The user reads these contradictions as the product lying to them.
In traditional software, seam problems cause confusion. In AI software, seam problems cause misplaced trust or unwarranted skepticism. Both are governance failures.
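One way to close those seams is a contract the seams cannot drop. Here is a minimal sketch, assuming the three teams share a typed interface; every name and field is hypothetical.

```typescript
// A hypothetical sketch of a shared contract across the three teams.
// The type names and fields are illustrative assumptions.

// Team B (processing) must declare its uncertainty and caveats.
interface ModelOutput<T> {
  value: T;
  confidence: number; // calibrated probability in [0, 1], assumed
  caveats: string[];  // known limitations the model owner declares
}

// Team C (output) can only render through this type, so the display
// cannot imply certainty the processing layer never claimed.
function renderForecast(out: ModelOutput<number>): string {
  const pct = Math.round(out.confidence * 100);
  const note = out.caveats.length > 0 ? ` Caveats: ${out.caveats.join("; ")}.` : "";
  return `Estimated demand: ${out.value} units (${pct}% confidence).${note}`;
}

console.log(renderForecast({
  value: 1200,
  confidence: 0.7,
  caveats: ["trained on pre-2025 data", "unreliable for new SKUs"],
}));
```

The design choice is that uncertainty travels inside the data structure. No team can locally optimize it away, because dropping it breaks the contract rather than quietly degrading the experience.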
Taste as the Missing Governance Layer
Joshua Leigh, writing in the same week as Bhowmick, published a complementary argument about taste. His thesis: as AI removes production friction, the ability to produce is no longer the differentiator. Judgment is.
Leigh quotes Brian Eno: “When you remove all constraints from people they will behave in some especially inspired manner. This doesn’t seem to be true.” AI can generate interfaces, layouts, component variations, and interaction patterns at speed. Without taste, that speed produces more design debt, not less. Every AI-generated screen that ships without contextual judgment adds another layer of inconsistency to the product.
Taste, in Leigh’s framing, is not aesthetic preference. It is contextual judgment. Knowing what to leave out. Knowing which pattern fits this moment for this user in this workflow. Susan Sontag called it “a logic without proofs.” That logic functions as governance. It is the human filter that decides which AI outputs are appropriate and which are technically correct but experientially wrong.
We explored how design systems became governance infrastructure when Figma opened its canvas to AI agents. Design systems constrain what agents can build. Taste constrains what humans should approve. Together, they form a governance layer that neither policy documents nor automated checks can replace.
The Measurement Problem
Bhowmick references Alicja Suska’s debt.design framework and Austin Knight’s concept of “reciprocal awareness” as starting points for measuring design debt. These are useful contributions. But the measurement problem is more fundamental than any single framework can solve.
Technical debt has proxies: build times, test coverage, dependency freshness, code complexity scores. Design debt has no standard proxies. User satisfaction surveys are lagging indicators. Usability testing is expensive and intermittent. Heuristic evaluations depend on evaluator expertise.
For AI products specifically, the missing metric is belief accuracy. Does the user’s understanding of what the AI is doing match what the AI is actually doing? When there is a mismatch, you have design debt. The interface is telling a different story than the system.
No organization we are aware of measures this systematically. That is the Forrester deficit: technical debt gets a survey, a severity rating, and a board-level conversation. Design debt in AI products gets nothing, despite shaping something far more consequential than system performance. It shapes what people believe machines can do.
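If an organization wanted to start, the probe would not need to be sophisticated. A minimal sketch follows; the question set, field names, and scoring rule are all assumptions, since no standard metric exists.

```typescript
// A hypothetical sketch of a belief-accuracy probe. Every field and
// the scoring rule are assumptions, not an established metric.

interface BeliefProbe {
  question: string;     // e.g. "Does the assistant read your calendar?"
  userBelief: boolean;  // what the user thinks the system does
  groundTruth: boolean; // what the system actually does
}

// Fraction of probes where user belief matches system behavior.
// A low score is design debt made visible: the interface is telling
// a different story than the system.
function beliefAccuracy(probes: BeliefProbe[]): number {
  if (probes.length === 0) return NaN;
  return probes.filter(p => p.userBelief === p.groundTruth).length / probes.length;
}

const score = beliefAccuracy([
  { question: "Can it see my calendar?",   userBelief: true, groundTruth: false },
  { question: "Does it cite its sources?", userBelief: true, groundTruth: true },
]);
console.log(`Belief accuracy: ${(score * 100).toFixed(0)}%`); // -> 50%
```

Run quarterly against a representative user sample, even a crude probe like this would give design debt what technical debt already has: a number that can trend in the wrong direction in front of leadership.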
What Changes
Three things follow from treating design debt as a governance concern rather than an aesthetic one.
First, design debt belongs in the same review cadence as technical debt. Quarterly at minimum. If your engineering team reports debt levels to leadership, your design team should report the same. Different metrics, same accountability structure; a sketch of what such a report might contain follows these three points.
Second, AI products need explicit ownership of the belief layer. Not just the input interface, the processing layer, and the output display. Someone owns the question: does this product accurately represent its own capabilities and limitations? That is a design question with governance consequences.
Third, taste cannot be automated. Design systems constrain the building blocks. Automated checks catch inconsistencies. But the judgment of whether an AI product’s interface accurately represents its capabilities to its users requires human evaluation. You can scale production with AI. You cannot scale judgment.
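To make the first point concrete, a quarterly design-debt report could take the same shape as a technical-debt report. The sketch below is hypothetical; there is no standard schema, and every field here is an assumption.

```typescript
// A hypothetical sketch of a quarterly design-debt report mirroring
// the structure of a technical-debt report. No standard schema
// exists; every field is an assumption.

interface DebtReport {
  team: "engineering" | "design";
  quarter: string;                        // e.g. "2026-Q2"
  severity: "low" | "moderate" | "high";  // the scale Forrester applies to technical debt
  topItems: { item: string; owner: string }[]; // named debts with named owners
  trend: "improving" | "flat" | "worsening";
}

const designDebt: DebtReport = {
  team: "design",
  quarter: "2026-Q2",
  severity: "moderate",
  topItems: [
    { item: "Confidence indicators inconsistent across four surfaces", owner: "unassigned" },
    { item: "No end-to-end owner for the recommendation flow", owner: "unassigned" },
  ],
  trend: "worsening",
};
```

An "unassigned" owner field is itself a finding: it is the ownerless syndrome, surfaced in the report rather than hidden in the seams.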
This analysis synthesizes Arin Bhowmick’s “Design Debt Is Now as Dangerous as Technical Debt” (March 2026) and Joshua Leigh’s “Taste Is Not a Feature” (March 2026), with data from Forrester’s 2025 Technology Debt Survey.
Victorino Group helps organizations build governance into AI products before design debt compounds into user trust failures. Let’s talk.