Design Tools That Bypass the Constraint Layer
A senior designer opened Figma AI, asked for “a settings panel matching our product,” and got back something that looked exactly right. Brand-safe palette. Familiar spacing. Buttons that would not embarrass anyone in a design review. It was unusable. None of the components existed in the design system. The shadows were one-offs. The button radius was three pixels too large. The “card” was a rectangle with a stroke, not the team’s actual Card component, which carries elevation tokens, focus states, and accessibility labels the AI’s rectangle did not.
The cleanup took longer than building from scratch would have. The designer is still apologizing for it.
This is the failure mode Andrew Martin, CEO of UXPin, named in late April: “The most dangerous design system violation isn’t the one that looks wrong. It’s the one that looks right but isn’t built with your components.” Sam Henri Gold, in Thoughts and Feelings Around Claude Design, names the structural reason: AI design tools were trained on code, not on Figma primitives. They reach for the medium they actually know.
The two essays were written independently. They land at the same conclusion. Design systems work because they create constraints, and AI design tools — Claude Design, Figma AI, Sketch’s recent generators, Lovable, Bolt, v0 — operate outside those constraints by default. The question is no longer whether designers will adopt these tools. They have. The question is what governance surface catches the output before it lands in a pull request.
Three Governance Failures in One Tool Call
Martin catalogues the failure modes plainly. Read them as one chained event, not three independent risks.
The output bypasses review. A designer prompts the AI; the AI produces a screen. The screen was not reviewed against the team’s guidelines because the AI does not know the guidelines exist. It was not reviewed against accessibility requirements because the AI does not know which aria-label patterns the team uses. It was not reviewed against the component library because the AI did not consult the component library. The first reviewer is the designer staring at the output, deciding whether to ship it. That is not a review process. That is taste.
The output reads as compliant. The dangerous artifact is the one that looks professional, feels on-brand, and reads as if it came from inside the system. It uses your color tokens — close enough that a quick scan does not flag them. It uses your spacing — close enough that a designer who has not memorized the eight-pixel grid does not notice. It uses your type ramp — close enough. The output is not aggressively wrong. It is plausibly right. Plausibly right is the failure mode that survives review.
The constraints disappear. A design system’s purpose is to make some choices impossible. Designers cannot use a button shape that does not exist. Designers cannot use a shadow that the system does not define. The system is a constraint layer, and the constraint is what makes the system useful. AI generators sit on top of that layer and produce free-form output. The constraint becomes optional. Once the constraint is optional, the system is decorative.
Martin reports a designer who “spent more time correcting the AI’s interpretation of their design system than it would have taken to build from scratch.” The line that follows is the one to internalize: “The AI was fast. The cleanup is slow.”
The Reason Figma Fell Out of the Training Data
Sam Henri Gold’s essay names a structural fact most design leaders have not absorbed. Figma’s file format is “locked-down, largely-undocumented, painful to work with programmatically.” That is not a complaint about API ergonomics. It is the reason Figma was not a meaningful part of LLM training datasets.
LLMs were trained on code. Public repositories. Stack Overflow. Open-source component libraries. They saw billions of <button className="primary"> declarations. They saw component definitions, prop interfaces, design token files in JSON. They did not see Figma frames. They did not see auto-layout configurations. They did not see the hundreds of hours of design system documentation that lives inside Figma libraries the public does not have access to. The training corpus has a hole exactly where design system primitives should be.
This explains the symptoms. AI design tools generate output that looks like a designer’s instinct because the underlying model is reaching for code patterns dressed up as visual artifacts. Claude Design produces components by writing JSX or HTML and rendering it. Figma AI produces frames by inferring shapes from prompts the model interprets as if they were UI code. The medium of generation is code. The output is a render of that code. The design system — which lives in Figma, in Storybook, in the team’s internal Notion, in the heads of three senior designers — is not part of the model’s reasoning.
Henri Gold extends the argument. Source of truth is migrating back to code. Some teams already work directly in JSX with Tailwind because the AI tool reasons better there. Some Figma deployments now have 946 color variables with nested aliasing — the complexity has become its own governance vulnerability, because no AI tool can keep that mental model accurate, and humans rarely can either. The tools that generate the cleanest output are the ones working with the simplest, code-native constraint surfaces.
The Constraint Is What Survives
Both essays converge on the same governance principle. Designer discipline does not survive AI tooling. The constraint layer does, but only if the constraint is structural rather than aspirational.
Aspirational constraints are documentation. “Use the Card component for grouped content.” “Stick to the eight-pixel grid.” “Use type ramp tokens, never raw font sizes.” These work as long as the human at the keyboard is the one making the choice. AI generators are not reading the documentation. They are not consulting the design system Notion page. They are generating from a code model, and they will produce whatever the prompt suggests.
Structural constraints are different. They make violations impossible, not discouraged. A design system that exposes an enforced component palette to the AI tool, where the model can only place components that exist in the library, produces output that is by construction inside the system. A design tool that refuses to render an undefined token returns a blank rectangle until a real token is supplied. The designer cannot ship a non-system component because the tool cannot generate one.
This is the same governance principle we have written about for design system enforcement and for design systems as governance infrastructure. The shape of the principle does not change between policy and tooling: the constraint, not the policy, is what holds. We argued the broader case in the agent-era design governance essay. The April releases from UXPin and the practitioner notes from Henri Gold are the design-tool-specific instance of the same conclusion.
What does this look like operationally?
Constrained generation. The AI design tool is given access to the component library as the only legal output set. Every generation is a placement of an existing component, with existing tokens, in a layout the system already supports. The model can refuse to generate; it cannot generate something the system does not allow.
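As a minimal sketch of what "the component library as the only legal output set" could mean in practice: the library becomes a closed type, and placement is the only operation. The component names and the `Placement` shape are illustrative assumptions, not any shipping tool's API.

```typescript
// Constrained generation sketch: the legal output set IS the library.
// Component names here are hypothetical examples.
type LibraryComponent = "Card" | "Section" | "Field" | "Toggle" | "Button";

interface Placement {
  component: LibraryComponent;
  slot: string; // where in the layout the component lands
}

// The tool can refuse (return null); it cannot emit a component
// outside the union, so a non-system component is unrepresentable.
function place(candidate: string, slot: string): Placement | null {
  const legal: LibraryComponent[] = ["Card", "Section", "Field", "Toggle", "Button"];
  return (legal as string[]).includes(candidate)
    ? { component: candidate as LibraryComponent, slot }
    : null;
}
```

The design choice is the point: refusal is a valid output, invention is not.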
Token-aware output. The tool ingests the design token JSON and produces output keyed to those tokens, not to raw values. A color in the output is color.background.primary, not #0F1B2D. If the token does not exist, the tool surfaces the gap to the designer rather than inventing the value.
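A token-aware resolver can be sketched in a few lines, assuming a flat token registry keyed by dotted names (the registry shape and the specific token names are illustrative, not a standard format). The two behaviors that matter: an unknown token surfaces a gap instead of a guessed value, and raw values are rejected outright.

```typescript
// Token-aware output sketch. Registry shape and token names are
// assumptions for illustration.
type TokenRegistry = Record<string, string>;

const tokens: TokenRegistry = {
  "color.background.primary": "#0F1B2D",
  "color.text.default": "#FFFFFF",
};

// Resolve a token reference, or report the gap; never invent a value.
function resolveToken(ref: string, registry: TokenRegistry): string {
  const value = registry[ref];
  if (value === undefined) {
    throw new Error(`Unknown token "${ref}": surface the gap to the designer`);
  }
  return value;
}

// Raw hex colors and pixel literals are illegal in generated output.
function isRawValue(candidate: string): boolean {
  return /^#[0-9A-Fa-f]{3,8}$/.test(candidate) || /^\d+px$/.test(candidate);
}
```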
Library-first prompting. The prompt template includes the component library as context. “Build a settings panel” becomes “Build a settings panel using {Card, Section, Field, Toggle, Button} from the library.” The model reasons about composition, not about visual primitives.
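The prompt transformation itself is trivial to implement, which is part of the argument: the template below injects the library as the only legal vocabulary. The component names and template wording are illustrative assumptions.

```typescript
// Library-first prompting sketch. Library contents and phrasing are
// hypothetical; the mechanism is what matters.
const library = ["Card", "Section", "Field", "Toggle", "Button"];

function buildPrompt(task: string, components: string[]): string {
  return (
    `${task} using only {${components.join(", ")}} from the component library. ` +
    `Do not introduce components or tokens outside this set; ` +
    `if the task cannot be composed from the set, say so.`
  );
}
```

The escape hatch in the last clause mirrors the governance principle: the model may refuse; it may not invent.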
Structural review hooks. The output, before it reaches a designer’s review, is checked against a parser that confirms every used component is in the system, every token is in the registry, every accessibility attribute is set. The check is automated. It runs on every generation. Failures block the artifact from leaving the tool.
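A review hook of this kind can be sketched as a pure function over parsed output, under the assumption that the generated artifact can be flattened into a list of nodes (the node shape and field names here are illustrative, not any tool's real schema).

```typescript
// Structural review hook sketch. Node shape is an assumption made
// for illustration.
interface GeneratedNode {
  component: string;   // e.g. "Card"
  tokens: string[];    // token references the node uses
  ariaLabel?: string;  // accessibility attribute, if set
}

interface ReviewResult {
  ok: boolean;
  violations: string[];
}

function reviewArtifact(
  nodes: GeneratedNode[],
  library: Set<string>,
  registry: Set<string>,
): ReviewResult {
  const violations: string[] = [];
  for (const node of nodes) {
    if (!library.has(node.component)) {
      violations.push(`Unknown component: ${node.component}`);
    }
    for (const t of node.tokens) {
      if (!registry.has(t)) violations.push(`Unknown token: ${t}`);
    }
    if (node.ariaLabel === undefined) {
      violations.push(`Missing aria-label on ${node.component}`);
    }
  }
  // Any violation blocks the artifact from leaving the tool.
  return { ok: violations.length === 0, violations };
}
```

Note that the check is structural, not visual: it never asks whether the output looks right, only whether every piece of it exists in the system.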
These are not aspirational. They are architectural. The teams that ship them get AI design tools that strengthen the design system. The teams that do not get a slow accumulation of plausibly-right artifacts that no reviewer flagged because every individual one looked fine.
The Question for the Design Leader
If you lead a design organization, the question is not whether your team will use these tools. They will. They already do. The question is whether the constraint layer between the AI tool and the production design is structural or aspirational.
Walk the path one of your senior designers walks today. They open Figma AI or Claude Design. They prompt. They get back an artifact. What checks the artifact before it lands in a design review? If the answer is “the designer’s eye,” the constraint is aspirational. If the answer is “an automated parser that validates components, tokens, and accessibility,” the constraint is structural.
Plausibly-right output is the failure mode the eye does not catch. Andrew Martin’s line is the line to write on the wall: the dangerous violation is the one that looks right but is not built with your components. Sam Henri Gold’s structural reason explains why this happens by default and not by accident. The fix is the constraint layer the AI tool cannot bypass, because the tool is making placements inside it rather than producing free-form output that needs to be retrofitted into it.
Design systems work because they create constraints. AI design tools work despite design systems unless the system has taught the tool what is and is not allowed. The teams that win the next year of design tooling are the ones that have already moved the conversation from “we should have guidelines” to “we have a parser that runs on every generation.”
The cleanup is slow. Build the constraint instead.
This analysis synthesizes AI Design Tools That Ignore Your Design System Create More Problems Than They Solve (Andrew Martin, UXPin, April 2026) and Thoughts and Feelings Around Claude Design (Sam Henri Gold, April 2026).
Victorino Group helps design and platform leaders adopt constraint-based AI tooling that protects design systems instead of eroding them. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.