
Product Discovery in the AI Era: When Building Is No Longer the Hard Part

Thiago Victorino

For decades, product teams have navigated four risks: desirability, viability, feasibility, and usability. The balance between them shaped how we allocated time, budget, and attention. Building software was expensive. Testing viability required significant investment. Iteration on usability was slow.

AI has compressed three of these four risks. The one it hasn’t touched is now the only one that matters.

The Four Risks, Revisited

Teresa Torres codified continuous discovery as a weekly rhythm: interview customers, map opportunities, test assumptions, iterate. Her Opportunity Solution Tree gave teams a visual framework for connecting business outcomes to unmet needs to testable solutions. The method remains sound. What has changed is the cost structure underneath it.

Feasibility used to dominate product conversations. Engineering capacity was the bottleneck. Features that required two sprints of dedicated work now get prototyped in an afternoon. David Hoang, VP of Design (AI) at Atlassian, built the first version of his project Tapestry—a functional CRM with AI capabilities—in a few hours. That timeline would have been unthinkable two years ago.

Viability was expensive to test. You needed a working product, users, and time to gather market signals. Now you can deploy a prototype to production cheaply, collect real usage data, and pivot before committing resources. The feedback loop between hypothesis and evidence has shortened from months to days.

Usability iteration was constrained by design-development cycles. Generating multiple UI variations, testing them with users, and refining required coordinated effort across disciplines. AI-assisted tools compress this cycle. The gap between concept and testable interface has narrowed.

Desirability remains unchanged. No synthetic persona, no simulated user interview, no AI-generated research replaces what happens when you watch a real person interact with your product and describe problems you hadn’t considered. Desirability requires humans talking to humans.

When three of four risks become cheap to address, the remaining one becomes the differentiator.

The Process Problem

Jenny Wen, design lead for Claude.ai at Anthropic and former Director of Design at Figma, made a pointed observation at Hatch Conference 2025: the processes we’ve established are becoming lagging indicators. Teams worship process artifacts—the research report, the design spec, the sprint plan—rather than the outcomes those artifacts were supposed to enable.

This is not an argument against process. It is an argument against process inertia. When the cost of building drops to near zero, processes designed to manage the cost of building become overhead. The discipline shifts from managing what we can build to understanding what should exist.

There is a nuance worth preserving here. Regulated industries still need documented processes. Complex systems still need governance. The claim that “the moment you document a process, it becomes irrelevant” applies to fast-moving product contexts, not to every organizational context. But the directional insight is correct: process should serve discovery, not replace it.

Sketching with Code

Hoang describes a practice he calls “sketching with code”—creating prototypes at varying fidelity levels using actual code rather than design tools. A low-fidelity code sketch might be a simple HTML page with hardcoded data, enough to demonstrate a flow and elicit a reaction. A high-fidelity code sketch connects to real APIs and handles edge cases.
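To make the fidelity spectrum concrete, here is roughly what a low-fidelity code sketch looks like: a handful of hardcoded records rendered into a page, no backend, no state management. This is a sketch under stated assumptions, not anyone’s actual prototype; the Contact type and sample data are invented for illustration, and it assumes an HTML page containing an element with the id "app".

    // sketch.ts: a low-fidelity code sketch. Hardcoded data, no backend.
    // The Contact type and sample records are hypothetical.
    type Contact = { name: string; company: string; lastTouch: string };

    const contacts: Contact[] = [
      { name: "Ada Park", company: "Northwind", lastTouch: "2 days ago" },
      { name: "Sam Osei", company: "Acme Co", lastTouch: "3 weeks ago" },
    ];

    // Render a bare-bones list: enough to demonstrate the flow and get a reaction.
    function render(root: HTMLElement): void {
      root.innerHTML = contacts
        .map(
          (c) =>
            `<div class="contact">
               <strong>${c.name}</strong> (${c.company})
               <em>last touch: ${c.lastTouch}</em>
             </div>`
        )
        .join("");
    }

    // Assumes the page contains <div id="app"></div>.
    render(document.getElementById("app")!);

A high-fidelity sketch replaces the hardcoded array with real API calls and edge-case handling. The medium stays the same; only the fidelity changes.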

The pen-and-paper sketch still happens. What changes is the medium of communication. Paper captures thinking. Code communicates it.

This is not a new idea—design technologists have prototyped in code for years. What makes it different now is that AI has collapsed the skill barrier. A product manager with no frontend experience can generate a working prototype. A designer can go from wireframe to functional interface without filing a ticket.

The implications are structural. When anyone on the team can produce a testable artifact, the gatekeeping function of engineering capacity diminishes. This doesn’t make engineering less valuable—it makes engineering’s role shift from “building the thing” to “building the thing that scales.”

Prototyping in Production

Hoang takes this further: instead of staging environments, he prototypes in production. He deployed Tapestry as a live application, invited users, and collected real usage data. From that data, he pivoted the product from a traditional CRUD application to an MCP server that integrates with Claude and ChatGPT.
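To give a sense of what that pivot means in practice, here is a minimal sketch of a CRM surface exposed as an MCP server, using the MCP TypeScript SDK. The tool name, schema, and data are hypothetical, not Tapestry’s actual implementation; the point is that the product stops being screens a person clicks and becomes tools an assistant can call.

    // A minimal MCP server exposing one CRM-style tool over stdio.
    // Illustrative only: tool name, schema, and data are hypothetical,
    // not Tapestry's actual implementation.
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    const contacts = [
      { name: "Ada Park", company: "Northwind" },
      { name: "Sam Osei", company: "Acme Co" },
    ];

    const server = new McpServer({ name: "crm-sketch", version: "0.1.0" });

    // A tool the assistant calls in place of a screen a person would click through.
    server.tool(
      "search_contacts",
      { query: z.string().describe("Name or company to match") },
      async ({ query }) => {
        const q = query.toLowerCase();
        const hits = contacts.filter(
          (c) =>
            c.name.toLowerCase().includes(q) ||
            c.company.toLowerCase().includes(q)
        );
        return { content: [{ type: "text" as const, text: JSON.stringify(hits) }] };
      }
    );

    // stdio transport: the client (Claude, ChatGPT) launches this process
    // and speaks the protocol to it directly.
    await server.connect(new StdioServerTransport());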

Production as a testing environment is more informative than staging. Real data, real usage patterns, real constraints. But this approach requires discipline. Having something in production does not mean releasing it. It can remain a closed beta. The distinction between “deployed” and “launched” becomes operationally important.
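One way to make that distinction operational is a gate in front of the deployed code: live in production, reachable only by an allowlist of beta users. A minimal sketch, with a hypothetical flag name and allowlist:

    // "Deployed" is not "launched": the code runs in production,
    // but a gate decides who reaches it.
    // The flag name and allowlist are hypothetical.
    const CLOSED_BETA = new Set(["ada@northwind.test", "sam@acme.test"]);

    function isEnabled(flag: string, userEmail: string): boolean {
      if (flag === "tapestry-prototype") {
        // Launching later means widening this check, not redeploying.
        return CLOSED_BETA.has(userEmail);
      }
      return false;
    }

    // In a request handler: deployed for everyone, visible to a handful.
    function handleRequest(userEmail: string): string {
      return isEnabled("tapestry-prototype", userEmail)
        ? "render the prototype"
        : "render the existing experience";
    }

Launching then becomes a change to the gate rather than a new deployment, which is what keeps the two states operationally distinct.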

There are legitimate concerns here. Data privacy requirements, security posture, and user expectations vary by context. A consumer-facing MVP for a CRM tool carries different risks than a healthcare application or a financial services product. The principle—test with real conditions—is sound. The application requires judgment about what “real conditions” mean in your specific domain.

The Time Reallocation Question

If AI compresses time spent on feasibility, viability, and usability, where does that time go? The optimistic answer is desirability—more conversations with users, deeper understanding of problems, better judgment about what should exist.

The realistic answer is less clear. Many organizations will not reallocate saved time to discovery. They will reallocate it to speed—shipping more, faster, with less reflection. The compression of build time becomes an acceleration of the ship-and-hope cycle rather than an improvement in the understand-then-build cycle.

This is where organizational discipline matters. Teams that deliberately invest compressed build time into expanded discovery time will build better products. Teams that treat AI as a tool for going faster rather than going deeper will build more products, but not necessarily better ones.

The New Question

When building becomes nearly free, the competitive question shifts. The old question was “Can we build this?”—a question about engineering capacity and technical feasibility. The new question is “Should this exist?”—a question about customer insight, market judgment, and strategic clarity.

This question has always existed in theory. Product managers have always been told to validate before building. But when building was expensive, the question carried practical weight because the cost of being wrong was high. Now that building is cheap, the cost of being wrong is low per attempt but compounds across an organization launching dozens of untested ideas.

Product discovery is not less important in the AI era. It is more important. It is also harder to justify to organizations that have internalized speed as their primary metric.

The teams that will win are not the ones that build fastest. They are the ones that understand most deeply what is worth building.


This analysis draws on David Hoang’s “How Product Discovery changes with AI” (Proof of Concept, February 2026), Teresa Torres’s Continuous Discovery Habits framework, and Jenny Wen’s “Don’t Trust the Process” keynote at Hatch Conference 2025.

If this resonates, let's talk

We help companies implement AI without losing control.

Schedule a Conversation