Field Notes: The Semantic Layer Stopped Being a BI Topic in Vegas
Post 3 of 5 from my Cloud Next 2026 field notes. This one is on the semantic layer — a topic that BI teams have been writing about for twenty years and that AI teams just rediscovered the hard way. I sat through the Looker session because the question I came to Vegas with is unglamorous: when an AI agent queries our warehouse and writes “revenue”, whose definition does it use?
The room was full. The slides were familiar to anyone who lived through the Looker–Tableau–Power BI debates of the last decade. What was different was the framing. Miles, a senior PM on Looker, opened with a slide that I expected to be about dashboards but turned out to be about agents. Alex Plepsow followed and made the argument explicit: the semantic layer is no longer a BI feature. It is the governance plane between your data and the LLM that is about to write a query against it.
I want to walk through what they showed, what I think actually matters, and where I would start if I were running data infrastructure today and was not yet on Looker.
The data chaos framing
The opening was the standard one and it was correct. Enterprises have data spread across BigQuery, Snowflake, Postgres, and a long tail of operational systems. The same business question — “what was our revenue last quarter” — gets a different SQL answer depending on who writes the query, which table they hit, whether returns are netted, and whether the date filter respects fiscal versus calendar quarters.
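To make the divergence concrete, here is a toy sketch of the two-analysts problem. The order data, field names, and numbers are invented for illustration; the point is only that the same window and the same ledger yield two defensible "revenue" figures depending on whether returns are netted.

```python
from datetime import date

# Toy order ledger. Amounts in USD; "refunded" is the returned portion.
orders = [
    {"amount": 100.0, "refunded": 0.0,   "closed": date(2026, 1, 15)},
    {"amount": 250.0, "refunded": 250.0, "closed": date(2026, 2, 3)},
    {"amount": 400.0, "refunded": 50.0,  "closed": date(2026, 3, 30)},
    {"amount": 300.0, "refunded": 0.0,   "closed": date(2026, 4, 1)},
]

def revenue_analyst_a(orders, start, end):
    """Gross revenue: every closed order in the window, returns ignored."""
    return sum(o["amount"] for o in orders if start <= o["closed"] <= end)

def revenue_analyst_b(orders, start, end):
    """Net revenue: same window, but refunds are subtracted."""
    return sum(o["amount"] - o["refunded"]
               for o in orders if start <= o["closed"] <= end)

q1 = (date(2026, 1, 1), date(2026, 3, 31))
print(revenue_analyst_a(orders, *q1))  # 750.0
print(revenue_analyst_b(orders, *q1))  # 450.0 -- same question, different number
```

Neither function is wrong. The business simply never decided which one is "revenue", and each analyst's SQL quietly decided for it.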
Humans have lived with that ambiguity for a long time. Two analysts produce two numbers, the meeting argues for fifteen minutes about which is right, and someone eventually wins. The cost is annoyance.
Agents do not produce that argument. They produce a number, attach a confident sentence to it, and move on. If the number is wrong, the confidence is unchanged. The cost shifts from annoyance to operational risk, because the agent does not pause at the moment a human analyst would have.
That is the framing the Looker team led with. I think it is the most honest version of the AI governance story I have heard at any of the keynotes this week.
What the semantic layer actually does
A semantic layer is a translation tier between business questions and SQL. You define, once, what “revenue” means — which table, which columns, which filters, which currency conversion. You define what a “customer” is. You define what “churn risk” is, and the join graph and the aggregation behind it. From that point on, anyone — or anything — that asks for revenue gets the same answer.
The Looker version of this is built around three challenges they say agents struggle with on raw warehouses, and that the semantic layer absorbs:
- Relative date filters. “Last 30 days”, “past quarter”, “year to date”. The agent does not need to reason about fiscal calendars or time zones. The metric definition handles it.
- Complex joins across tables. The agent does not need to discover that the orders fact and the returns fact share a partial key with a known caveat about partial returns. The join is encoded in the model.
- Complex measures. Filtered measures like “revenue minus returns”, or ratios like “gross margin”, live in the model. The agent asks for the named metric. It does not author the SQL that produces it.
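For readers who have not seen LookML, the three challenges above land in a model fragment roughly like the following. This is my own hypothetical sketch, not code from the session; the view, table, and field names are invented.

```lookml
# Hypothetical fragment: the join, the fiscal calendar, and the
# filtered measure are all encoded once, here, not in agent prompts.
explore: orders {
  join: returns {
    sql_on: ${orders.order_id} = ${returns.order_id} ;;
    relationship: one_to_many
  }
}

view: orders {
  dimension_group: closed {
    type: time
    timeframes: [date, quarter, fiscal_quarter, year]
    sql: ${TABLE}.closed_at ;;
  }
  measure: gross_revenue {
    type: sum
    sql: ${TABLE}.amount_usd ;;
  }
  measure: net_revenue {
    type: number
    sql: ${gross_revenue} - ${returns.total_refunded} ;;
  }
}
```

An agent asked for "net revenue for the past fiscal quarter" resolves a named measure and a named timeframe; it never reasons about the join or the refund caveat.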
The architectural punchline is that the LLM does not need to ingest the dataset to answer questions about the dataset. It reads metric metadata. The data stays where it is. That is a meaningful change. It is the difference between giving the agent a dictionary and giving the agent the entire library.
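What "reading metric metadata" means in practice is something like the payload below. The shape and field names are my own invention, not Looker's actual schema; the point is the size and kind of thing that reaches the model's context.

```python
import json

# What the agent sees: a catalog entry, not the table behind it.
# Field names and shape are invented for illustration.
net_revenue_metadata = {
    "name": "net_revenue",
    "description": "Gross order revenue minus refunds, USD.",
    "grain": ["fiscal_quarter", "region"],
    "filters": ["date_range", "region"],
    "owner": "finance-data@example.com",
}

# The prompt-side payload is a few hundred bytes of metadata,
# never the underlying fact table.
payload = json.dumps(net_revenue_metadata)
print(len(payload) < 1024)  # True: a dictionary entry, not the library
```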
LookML, and why it is more boring than it looks
LookML is Looker’s modeling language. It is code. It lives in git. It has CI/CD pipelines. Reviews happen on pull requests.
This is the unsexy part of the story and I think it is the most important one. Whatever the semantic layer is conceptually, in practice it has to be versioned and reviewed like any other piece of governed infrastructure. If your “single source of truth” for revenue lives in a UI that anyone can edit, you have a wiki, not a governance plane. LookML, dbt’s semantic layer, Cube — they are not differentiated by elegance of syntax. They are differentiated by whether the change to “revenue” leaves an audit trail.
The same governed surface, once defined in LookML, is exposed to dashboards and to agents through the Looker MCP service. One model, two consumers. That part is genuinely useful. The dashboard your CFO opens on Monday morning and the agent that drafts the Tuesday board narrative are reading the same metric definition.
The conversational analytics layer
Google is unifying conversational analytics across three surfaces: Data Canvas (BigQuery direct), the Looker platform (governed BI plus AI), and Data Studio Pro (BigQuery plus Looker plus spreadsheets). The pitch is that you pick the surface that matches the user, but they all defer to the same semantic backbone when one is present.
The honest read of this is that Google is repositioning Looker. Not as the BI tool that competes with Tableau, but as the governance plane for AI on GCP — wired into what they are calling the universal context layer alongside the knowledge catalog and BigQuery metadata. Whether that repositioning succeeds is a different essay. The architectural intent is clear.
The Overdose case study
The most useful demo was Paul Pritchard from Openhouse Media and Ryan from Overdose Digital walking through a project for Cook Brothers. Three-person team, six months, retail brand. They built what they called a semantic “brain” grounded in profit, not revenue — which is the kind of definitional choice that a marketing team usually never gets to make explicitly.
On top of that brain they built a multi-agent system named Cradle. It generates creative briefs, ad assets, and feedback loops. The framing they used was “see what’s happening, decide, act.” I am skeptical of the “act” half until I see the guardrails, but the “see and decide” half was credible because the foundation was a semantic layer that everyone in the system — humans and agents — referred to.
I will not invent details about the financial outcomes beyond what they showed on stage. The takeaway I am willing to commit to is that a small team with a clear metric definition went further in six months than I have seen larger teams go in two years without one.
My addition: start small, behind an MCP
Here is where I diverge from the keynote.
If you are not already on Looker, my recommendation is not to buy Looker on the strength of this session. The semantic layer is the right idea. Looker is one implementation of it. dbt’s semantic layer is another. Cube is another. Hand-curated SQL views behind a dedicated MCP service are another, and for most enterprises I think that is the right place to start.
Pick the five to ten metrics your business actually argues about. Revenue. Active customers. Churn rate. Gross margin. Pipeline coverage. Whatever your specific list is. Define each one — once — as a parameterized query behind a dedicated MCP. Document the definition in plain English next to the query. Wire your agents to that MCP and forbid them from going to the warehouse directly for those concepts.
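The core of that recommendation is small enough to sketch. Below is a minimal metric registry of the kind an MCP tool could wrap; the metric names, SQL, and SQLite stand-in for the warehouse are all my own assumptions, not a real service, and the actual MCP plumbing is omitted.

```python
import sqlite3

# One governed definition per contested metric. The plain-English doc
# lives next to the SQL, as described above. Names are illustrative.
METRICS = {
    "net_revenue": {
        "doc": "Gross order revenue minus refunds, USD, by closed date.",
        "sql": """
            SELECT SUM(amount - refunded)
            FROM orders
            WHERE closed_at BETWEEN :start AND :end
        """,
    },
}

def query_metric(conn, name, **params):
    """Resolve a named metric to its one governed query. Agents call
    this; they never author warehouse SQL for these concepts."""
    try:
        metric = METRICS[name]
    except KeyError:
        raise ValueError(f"unknown metric: {name!r}")
    (value,) = conn.execute(metric["sql"], params).fetchone()
    return value

# Demo against an in-memory stand-in for the warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount REAL, refunded REAL, closed_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(100.0, 0.0, "2026-01-15"), (250.0, 250.0, "2026-02-03"),
     (400.0, 50.0, "2026-03-30")],
)
print(query_metric(conn, "net_revenue", start="2026-01-01", end="2026-03-31"))
# 450.0
```

The forbidding part matters as much as the registry: an unknown metric name raises instead of falling back to agent-authored SQL, which is exactly where the divergent definitions creep back in.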
You will get most of the value of a semantic layer at a small fraction of the cost. You will also discover, in the process of writing those ten definitions, that three of them are contested inside the business and that the contest was masked by everyone using slightly different SQL. That alone is worth the exercise.
Once you know which metrics you actually need governed, then you can decide between Looker, dbt’s semantic layer, or rolling your own at scale. You will be choosing with information instead of choosing on a vendor pitch.
What I left Vegas with
The semantic layer is not a new idea. The reason it suddenly matters is that the population of consumers querying the warehouse just expanded by an order of magnitude, and the new consumers do not pause when their answer is wrong. Governing the metric definition stops being a BI nicety and becomes a precondition for letting agents anywhere near the data.
You can buy that governance plane. You can also build the first version of it in a week with five metrics behind an MCP. The people I trust on this are the ones who started with five metrics first.
This analysis synthesizes the Google Cloud Next 2026 Looker session (Google Cloud, April 2026), Looker product documentation (Google Cloud, April 2026), the Overdose Digital case study (Overdose Digital, April 2026), and the author’s in-person notes.