Cloud Next 2026 Field Notes: Spec-Driven Development, From the Room

Thiago Victorino

This is the second of five field notes from Google Cloud Next 2026. I went in person. I went humble. The talks at this conference were never going to be revelations — and most of the value of attending in person is exactly that: confirming what is no longer controversial.

The session that lined up most clearly with how we are working at Victorino was the one on spec-driven development. Two engineers from Google walked the room through how internal teams are using written specifications as the contract between humans and AI coding agents. Nothing in the methodology was new. What was useful was watching Google publicly commit to a practice we have been quietly building into client work for months.

I want to record what was said, what struck me as honest, and one connection back to our own practice that I have not seen drawn out anywhere yet.

The Two Extremes They Framed

The opening of the talk drew two ends of a spectrum.

On one end, assisted coding. Human in the driver’s seat. AI completes the next token, the next function, the next test scaffold. Predictable. Slow. Limited leverage, because the human is still the bottleneck for every architectural decision.

On the other end, vibe coding. Natural-language prompts produce systems. Fast. Often impressive in a demo. Frequently unmaintainable, often insecure, and almost never aligned with the team’s actual constraints. The systems work until they meet the second person who has to read them.

Spec-driven development sits between. The argument was that neither extreme survives contact with a real engineering organization. Assisted coding cannot deliver the leverage leadership is asking for. Vibe coding produces artifacts no governance review can sign off on. The middle path is one where the human writes the spec, the AI implements against it, and the spec is the artifact that both sides treat as the source of truth.

I will be honest: I have heard this framing before. So has anyone who has read a design document. What was useful was hearing it said by Google, on a Cloud Next stage, as their internal practice.

The Quote That Anchored the Talk

The slide that landed for me was a Leslie Lamport quote: “To think, we have to write. If you are thinking without writing, you only think you are thinking.”

Lamport has been making this argument for forty years. The TLA+ work, the writing on specification before implementation, the long-running insistence that engineers’ resistance to writing things down is not pragmatism but evasion. None of that is new. What was new was watching it cited as the philosophical anchor for an AI engineering practice — and watching the room nod along.

Specification is not a workflow improvement. It is the act of forcing the thought. The AI is not the reason to start writing specs. It is the reason the cost of not writing them just went up.

What They Said Goes in a Spec

The presenters proposed five sections. I am writing them down because the structure is useful, not because it is canonical:

  1. Product or project principles. The constraints that do not change — security posture, architectural commitments, what is non-negotiable.
  2. Product specification. What the system does. The “what.”
  3. Software architecture. How the system is built. The “how.”
  4. Acceptance criteria and testing. What “done” means and how it is verified.
  5. Task list. The decomposition the agent will work through.

If you have written a design document in the last decade, none of this is unfamiliar. What changes when an AI agent is the implementer is that each section becomes load-bearing in a new way. Section 1 becomes the agent’s policy boundary. Section 4 becomes the agent’s exit condition. Section 5 becomes the agent’s work queue. The document stops being a record of intent and starts being an executable contract.
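That shift from record to contract can be made concrete. The sketch below is illustrative only: the file layout, section names, and checklist syntax are assumptions of mine, not the format the presenters showed. It treats a markdown spec as the agent's inputs, with section 1 as the policy boundary, section 4 as the exit condition, and section 5 as the work queue.

```python
# Illustrative sketch: reading a spec file as an executable contract.
# SPEC contents and section names are hypothetical, not Google's format.

SPEC = """\
## Principles
No external network calls; all secrets come from the vault.

## Product Specification
A CLI that lints commit messages.

## Architecture
Single Python module, no third-party dependencies.

## Acceptance Criteria
- [ ] Rejects subject lines over 72 characters
- [ ] Exits non-zero on violation

## Tasks
- [ ] Parse the commit message from stdin
- [ ] Implement the length check
"""

def parse_spec(text: str) -> dict[str, str]:
    """Split a markdown spec into {section title: body}."""
    sections: dict[str, str] = {}
    title = None
    for line in text.splitlines():
        if line.startswith("## "):
            title = line[3:].strip()
            sections[title] = ""
        elif title is not None:
            sections[title] += line + "\n"
    return {k: v.strip() for k, v in sections.items()}

def checklist(body: str) -> list[str]:
    """Extract unchecked '- [ ]' items from a section body."""
    return [l[6:] for l in body.splitlines() if l.startswith("- [ ] ")]

spec = parse_spec(SPEC)
policy_boundary = spec["Principles"]               # section 1: what the agent may never do
exit_condition = checklist(spec["Acceptance Criteria"])  # section 4: when the agent is done
work_queue = checklist(spec["Tasks"])              # section 5: what the agent does next

print(work_queue[0])  # prints "Parse the commit message from stdin"
```

The point of the sketch is not the parser; it is that every section maps to a runtime input for the agent, which is what "load-bearing" means in practice.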

The Canvas Team Example

The most concrete part of the talk was a production case from Google’s Canvas team — the team building Gemini CLI extensions. The presenter, Yanzhi, walked through the workflow. The team uses roughly fifty sub-agents, each scoped to a narrow responsibility. Product and engineering requirements are tracked in markdown, checked into version control alongside the code. The spec is not a wiki page. It is a file in the repo, reviewed in pull requests like any other artifact.

The wins they reported were specific. End-to-end demo verification — the agent runs the demo, checks the outputs, files what failed. Accessibility fixes — the spec describes the accessibility commitment, the agent identifies violations and proposes patches. Version control workflow — branches, commits, PR descriptions, the rote work of keeping a repo legible.

They quantified the savings as tens of hours per week per team member. I have no way to verify the number from the audience. What I can say is that the workflow they described matches what we see at Victorino when a client team gets the spec discipline right: the agent absorbs the work that no human wanted to do anyway, and the human moves up to writing the spec.

The Quote That Was Not on the Slides

In passing, the presenter cited Dave Anderson, a Google distinguished engineer, on the question of what the most valuable artifact in a spec-driven workflow turns out to be. The answer he gave was: the design document itself.

Not the code. Not the tests. Not the deployment. The design document.

This is the part of the argument that gets resisted in every engineering organization I have worked with. Engineers — myself included, for most of my career — treat the design document as a tax. Write it once, lose it in a wiki, refer to it never. What spec-driven development says is that the document is the durable thing. The code gets regenerated. The tests get regenerated. The agents get rotated. The document survives, and is the only artifact that compounds across iterations.

If that is true, the implication is uncomfortable: the senior engineering output of the next decade is writing, not code.

The Seven Lessons

The talk closed with seven lessons from internal Google use. I will list them flat because the value is in the inventory:

  1. Embrace the mindset shift. Stop thinking of the spec as documentation. Treat it as the deliverable.
  2. Structure the codebase for AI readability. Naming, layout, modularity — what is good for humans is good for agents, only more so.
  3. Draft specs collaboratively. The spec is not written in isolation by an architect; it is co-developed by the team that will own the result.
  4. Check the spec into version control. PRs against the spec, not just against the code.
  5. Use the deepest model for design, lighter models for code generation. The expensive thinking happens upstream of the work.
  6. Manage skills modularly. Sub-agents with narrow scopes; not one omniscient agent.
  7. Back-document on completion. When the work finishes, the spec is updated to reflect what was actually built, not what was originally proposed.
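Lessons 4 and 7 are the two that can be enforced mechanically rather than culturally. As a sketch only, assuming a hypothetical layout where the spec lives at `SPEC.md` and code under `src/`, a pre-merge check can flag code changes that arrive without a spec update:

```python
# Illustrative pre-merge check for lessons 4 and 7: a change set that
# touches code but not the spec gets flagged for review.
# The file layout (src/, SPEC.md) is a hypothetical assumption.

def spec_check(changed_files: list[str], spec_path: str = "SPEC.md") -> str:
    """Return 'ok' if the change set is consistent with spec-first review."""
    touches_code = any(f.startswith("src/") for f in changed_files)
    touches_spec = spec_path in changed_files
    if touches_code and not touches_spec:
        return "warn: code changed but SPEC.md did not; is the spec still true?"
    return "ok"

print(spec_check(["src/cli.py", "SPEC.md"]))  # prints "ok"
print(spec_check(["src/cli.py"]))             # prints the warning
```

In a real pipeline the `changed_files` list would come from something like `git diff --name-only` against the target branch; the check is a nudge, not a gate, since not every code change invalidates the spec.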

I am reading these and thinking that any engineering organization with a design culture from the 2010s already does five of the seven. The list is not a revelation. It is a permission slip.

My Addition: The Business Prompt Is The Spec

Here is the connection I have not seen drawn out yet, and the reason I am writing this down for our team and clients.

In the work we do at Victorino, the spec is not always a design document. For a meaningful slice of agent-driven work — the kind where a business user is composing tasks for an agent without an engineering layer in between — the spec is the business prompt. The few paragraphs the user writes to define what the agent should do. The constraints. The success criteria. The boundaries.

That prompt is performing all five functions of the spec the Google engineers described. Principles. Specification. Architecture (implicit, inherited from the agent’s tools). Acceptance criteria. Task decomposition. The prompt is doing the work the design document did in the engineering case.

Which means the discipline transfers directly. Lock the prompt before you lock the MCPs. Treat the prompt as the artifact that compounds. Version it. Review it. Back-document when it changes. The MCPs can be swapped. The model can be upgraded. The prompt — the articulation of what the work is — is the thing that has to survive.
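If the prompt is the spec, it can be linted like one before it is locked. The sketch below is ours, not a standard: the required headings are an assumption about what a complete business prompt articulates, mirroring the functions named above.

```python
# Illustrative sketch: linting a business prompt the way you would lint a spec.
# The required headings are an assumption of ours, not a standard.

REQUIRED = ["Constraints", "Success criteria", "Boundaries"]

def prompt_gaps(prompt: str) -> list[str]:
    """Return the spec functions the prompt does not yet articulate."""
    return [h for h in REQUIRED if h.lower() not in prompt.lower()]

draft = """You are the quarterly-report agent.
Constraints: read-only access to the finance folder.
Success criteria: a one-page summary with every figure sourced.
"""
print(prompt_gaps(draft))  # → ['Boundaries']
```

A check this small is the whole discipline in miniature: the prompt lives in a file, the file fails review until the gaps are closed, and the closed version is what gets locked before the MCPs are.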

We have written elsewhere about agent specs as governance artifacts, about specs for AI agents, about the spec layer in agent governance, and about the symphony control plane. The Cloud Next talk did not reframe any of that. What it did was confirm that the same idea is now showing up inside Google’s own production teams under the same name. The methodology is not new. The public commitment is.

That is enough to take back to clients on Monday.


This analysis synthesizes the Google Cloud Next 2026 session on spec-driven development (Google Cloud, April 2026), the Leslie Lamport quote anchoring the talk (Wikipedia, April 2026), and the author’s in-person notes.

Victorino Group helps teams adopt spec-driven discipline as the governance contract between business intent and AI execution. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →
