Notes from Cloud Next 2026: Identity for AI Agents Is the Old Problem Wearing a New Hat

Thiago Victorino

I spent three days in Las Vegas at Google Cloud Next 2026. I did it on purpose — every now and then you have to walk into the same room as everyone else who is building what you are building, just to confirm you are pointed in the right direction. This is the first of five posts on the talks I attended. I went humble. I expected to hear refinements, not revelations. That is mostly what I got. And that is exactly the point.

The session that anchored my week was a talk by William, Emilio, and Felipe from Google’s Security Transformations team. The subject: identity controls for AI agents. I want to be honest up front — they did not say anything I had not seen before in some form. What they did was put it in order. For a problem this messy, ordered articulation is its own contribution.

Three Eras, Each Inheriting the Last

The framing they opened with was an identity timeline in three eras.

The first era was humans. Identity meant a username and a password, then a directory, then SSO and MFA. RBAC matured on top. We figured out provisioning, deprovisioning, and audit. By the time most enterprises reached the end of this era, the controls were genuinely good — for humans.

The second era was programmatic identity. Service accounts, API keys, machine identities. The honest history is that we extended human controls onto non-human actors and discovered the fit was rough. Service accounts inherit roles meant for people. Their credentials sit in repositories, in CI variables, on developer laptops. Rotation is a quarterly ritual at best. RBAC works in form but not in spirit, because the principle of least privilege requires someone to actually scope it, and nobody is reading the code that uses the key.

The third era is agents. The talk’s argument — and I think it is correct — is that we are about to repeat the mistake. We will extend service-account controls to agents and discover the fit is even worse. Agents are not deterministic. They synthesize calls. They reason about what to invoke next. They can borrow tools we did not anticipate. The controls that worked for “this script calls this API every six hours” do not work for “this agent decides which API to call based on the conversation it is currently having.”

The eras are useful because they make the inheritance visible. Each new era reused the prior controls, then quickly outgrew them. The mistake is not a vendor problem. It is a generational reflex. Recognizing that reflex is the first piece of work.

Five Pillars of Agent Identity Governance

The middle of the talk was an inventory of where most enterprises currently have nothing in place. Five pillars:

Visibility. Do you know which agents exist? Where they run? Which workloads they belong to? Most teams I talk to discover an agent inventory the same way they discover shadow IT — by accident, in an audit. The pillar is necessary because everything downstream presumes you know who the cast is.

Authentication and authorization. RBAC alone is not enough; the talk pushed ABAC alongside it. Attribute-based access control adds context — which agent, which task, which data classification, which time of day, which network segment. The point is not that ABAC replaces RBAC. The point is that RBAC was built around stable, slow-changing role assignments, and agents need decisions made on attributes that change per call.
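
A minimal sketch of what that layering might look like, with RBAC answering the stable role question and ABAC adding the per-call checks. All names here (agent ids, tasks, classifications, zones) are invented for illustration, not taken from the talk:

```python
from dataclasses import dataclass

# RBAC layer: stable, slow-changing role grants. Contents are illustrative.
ROLE_GRANTS = {"billing-agent": {"read_invoice", "summarize_invoice"}}

@dataclass
class CallContext:
    """Attributes that change per call, evaluated at decision time."""
    agent_id: str
    task: str
    data_classification: str  # e.g. "public" | "internal" | "restricted"
    network_zone: str

def allow(ctx: CallContext) -> bool:
    # RBAC: does this agent's role include the requested task at all?
    if ctx.task not in ROLE_GRANTS.get(ctx.agent_id, set()):
        return False
    # ABAC: attribute checks layered on top, deny by default.
    if ctx.data_classification == "restricted" and ctx.network_zone != "segregated":
        return False
    return True
```

The point of the shape is that the ABAC checks run on every call with fresh attributes, while the role table changes on a human timescale.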

Customer data protection. This pillar collapses two things: the data the agent reads, and the data the agent writes back. The asymmetry matters. An agent that reads broadly and writes narrowly is a leakage risk. An agent that reads narrowly and writes broadly is a corruption risk. Treat them as separate controls, not one.
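
One way to make that separation concrete is to keep read and write scopes as distinct objects and check them independently. A sketch, with resource names invented for illustration:

```python
# Hypothetical scopes for one agent: broad read, narrow write.
READ_SCOPE = {"crm.contacts", "crm.notes", "crm.tickets"}
WRITE_SCOPE = {"crm.notes"}

def check(resource: str, mode: str) -> bool:
    """Reads and writes are separate controls, so they can be
    scoped, audited, and alarmed separately."""
    if mode == "read":
        return resource in READ_SCOPE
    if mode == "write":
        return resource in WRITE_SCOPE
    return False  # unknown mode: deny
```

An agent with this shape is a leakage risk on the read side and almost no corruption risk on the write side; inverting the two sets inverts the risk.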

Integration boundary. Where does the agent end and the rest of the system begin? In practice, the integration surface is where most actual incidents happen — not inside the agent, but at the edges where it calls external SaaS, internal APIs, or third-party tools. The boundary needs its own gate.
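
The gate at that boundary can start as something as plain as a deny-by-default egress broker. A sketch, with the allowlist contents invented for illustration:

```python
from urllib.parse import urlparse

# Per-agent allowlist of external hosts; entries are invented examples.
EGRESS_ALLOWLIST = {
    "support-agent": {"api.github.com", "example.atlassian.net"},
}

def gate_outbound(agent_id: str, url: str) -> bool:
    """Deny-by-default: an agent may only call hosts explicitly granted to it."""
    host = urlparse(url).hostname
    return host in EGRESS_ALLOWLIST.get(agent_id, set())
```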

Logging and monitoring. This sounds obvious. It is not happening. Most agent deployments log what the developer thought to log when they wired the integration up. The audit story falls apart the first time someone asks “who issued this call, on whose behalf, with which prompt context.” Logs need to be designed against the question you will ask in an incident, not the question that fit when you set up the SDK.
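
Designing the log line against that incident question might look like the sketch below. The field names are my assumptions, not a standard, and the prompt is hashed rather than stored raw:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(agent_id: str, on_behalf_of: str, tool: str,
                 prompt: str, args: dict) -> str:
    """One structured line per agent call, built to answer:
    who issued this call, on whose behalf, with which prompt context."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,              # who (the agent)
        "on_behalf_of": on_behalf_of,      # on whose behalf (human or service)
        "tool": tool,                      # what was invoked
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "args": args,                      # the arguments as issued
    })
```

Hashing the prompt keeps the context attributable without turning the audit log into a second copy of your most sensitive data.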

I want to be honest — none of these are new categories. Visibility, AuthN/AuthZ, data protection, integrations, and observability are the same five categories you would draw for any system. What was useful was seeing them re-projected onto agents and watching how each one stretches in unfamiliar ways.

The Three-Wave Roadmap

The closing third of the session was a phased rollout.

Wave one is inventory and policy. Find the agents. Classify them. Write the policies that govern what each class can and cannot do. The honest assessment is that most enterprises stall here, because the inventory keeps changing while you write the policy.

Wave two is implement controls. Pick a primitive for each pillar, wire it in, instrument it. Visibility tooling, an authorization layer that supports ABAC, a data-classification gate, an integration broker, an observability pipeline. The talk mentioned BACI — Behavior Certificates for Agents — as a proposed framework for validating prompt origin, tool boundaries, and policy compliance. It is early. The shape of the proposal is right. The implementations are not yet at parity.

Wave three is extend to integrations and SaaS. The most painful wave. Your agent does not live alone. It calls Salesforce, Workday, GitHub, Jira, your data warehouse. Each of those has its own identity model. Federating across them is the work that takes the longest and produces the least visible progress.

The team also cited AGRIM, Cloudflare’s open-source project for agentic AI governance, assurance, and risk management on Kubernetes, as a reference point for how the industry is starting to package these controls. Worth a read.

What I Would Add: Authorization Is Necessary, Not Sufficient

This is the part where I add my own brick.

Everything in the talk is right. None of it is enough. The reason is that authorization governs the call but not the reach. An agent that is properly authorized to query a database can still produce a query that returns the wrong rows because the surrounding system topology gave it line-of-sight to data that should not have been reachable.

Network topology has to do part of the job that identity cannot. We have known this in cloud architecture for two decades. Availability zones, VPC segmentation, private subnets, egress controls. The reason these exist is that identity controls fail silently — a misconfigured role does not throw an exception, it just answers. Topology controls fail loudly. The packet does not arrive. The connection times out. You notice.

For agents, the analog is clear. Run them in segregated network zones. Limit their egress to the precise integrations they need. Put a service mesh between them and the data plane, with policies that match — and exceed — the identity layer. If the identity gate is bypassed by a prompt injection, the topology gate is the second line of defense. We have written about extending zero-trust to agents and about the agent containment stack; the topology argument is the floor under those.
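
The defense-in-depth claim can be sketched as two independent gates that must both pass. Zone names, resource names, and grant tables are all invented for illustration:

```python
# Identity layer: what the agent's role is granted. Illustrative contents.
ROLE_GRANTS = {"billing-agent": {"invoices-db"}}

# Topology layer: what is reachable between network zones at all.
ZONE_REACHABILITY = {("agents-zone", "data-zone"): {"invoices-db"}}

def identity_gate(agent_id: str, resource: str) -> bool:
    return resource in ROLE_GRANTS.get(agent_id, set())

def topology_gate(src: str, dst: str, resource: str) -> bool:
    return resource in ZONE_REACHABILITY.get((src, dst), set())

def allow_call(agent_id: str, src: str, dst: str, resource: str) -> bool:
    # Even if a prompt injection talks its way past the identity gate,
    # the packet still has to cross a boundary the topology gate controls.
    return identity_gate(agent_id, resource) and topology_gate(src, dst, resource)
```

The property worth noticing: corrupting `ROLE_GRANTS` alone does not open a path, because `ZONE_REACHABILITY` is maintained by a different team, in a different control plane, with a different failure mode.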

If you walked away from the talk thinking authorization is the answer, you walked away with half the building.

Do This Now

If you are in the room with your platform and security teams next week, here is the version of this talk that costs you nothing to run:

  1. List your agents. If you cannot, the visibility pillar is missing.
  2. For each agent, write the role, the attributes, and the data classification it touches. If RBAC is your only control, ABAC is the next conversation.
  3. Identify the integration boundary. Where does the agent call out? What gate sits there?
  4. Pull the logs. Can you answer “who, what, when, on whose behalf, with what context” for the last 100 agent calls? If not, observability is missing.
  5. Look at the network. Is the agent in a segregated zone with explicit egress? If not, identity is doing work that topology should be sharing.

This is not a vendor pitch. It is an inventory list. The Google team’s contribution was articulating the list cleanly. The work is to walk your own building and count what is in place.

I went to Cloud Next humble. I am leaving more convinced than I arrived that the agent identity problem is not exotic. It is the old problem wearing a new hat. The teams that get ahead are the ones who recognize the hat and stop being surprised by it.


This analysis synthesizes Google Cloud Next 2026 (Google Cloud, April 2026), Identity & Security on Google Cloud (Google Cloud Blog, April 2026), and the author’s in-person notes from the session.

Victorino Group helps enterprises extend identity governance to AI agents before it becomes the next breach surface. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.
