The Most Valuable Hire You're Not Making

Thiago Victorino
6 min read

A $6 billion AI company deployed a customer-facing agent. The agent quoted incorrect pricing. Nobody noticed for a year.

Not because the technology failed. Because nobody owned the deployment.

Jason Lemkin shared this example at SaaStr in March 2026 to illustrate a problem hiding in plain sight across the industry. Companies are deploying AI agents. They are not governing them. The result is not catastrophe. It is slow, invisible erosion: wrong prices, bad data, hallucinated policies, all compounding quietly while leadership celebrates “AI adoption.”

The Third Era Needs a Different Role

Lemkin frames AI deployment in three phases. The first, in 2023, required deep technical skill. You needed engineers to build anything useful. The second phase, roughly 2024 through early 2025, was the prompt engineer era. Specialized operators who understood how to coax performance from language models. That role is already fading.

The third phase is now. Lemkin calls it the generalist era. If you have personally deployed a piece of enterprise software in the last three to five years, you can deploy any AI agent on the market today. The technical barrier has collapsed. Deployment no longer requires engineering talent.

This sounds like progress. It is also the source of the problem.

When deployment was technical, the deployers were engineers. Engineers build monitoring. They write tests. They think about failure modes. When deployment became accessible to generalists, those instincts did not transfer. The tools got easier. The governance did not follow.

A Hiring Problem, Not a Policy Problem

The conventional response to AI governance is policy: review boards, usage guidelines, approval workflows. We have written about this before, examining how mandates produce compliance without capability. Lemkin’s argument pushes further. He says governance is not a policy you write. It is a person you hire.

He calls the role an “agentic deployment expert.” The title is less important than the job description: someone who identifies which AI tools to deploy, configures them correctly, trains them on accurate data, and measures their output against business objectives.

Notice what this role is not. It is not a prompt engineer. It is not a developer. It is not even, primarily, a technologist. It is a governance role dressed in operational clothing. The person’s core competency is quality control across autonomous systems.

The Measurement That Matters

Lemkin proposes a hiring filter: ask any manager what commercial AI agent they deployed in the last 30 days that produced real ROI.

At the best startups crossing $100 million in revenue, maybe 30% of management can answer that question. Across interviews more broadly, only single digits pass.

This is a useful metric because it tests for something specific. Not AI enthusiasm. Not prompt skill. Deployment competency. Did you select a tool, configure it, put it in front of users or customers, and measure whether it worked?

Most managers cannot answer because most organizations treat AI deployment as a technology initiative rather than an operational discipline. The CTO’s team evaluates tools. The IT team provisions access. Individual contributors experiment. Nobody owns the end-to-end lifecycle: selection, configuration, training, measurement, and ongoing quality assurance.

The CRO Who Wouldn’t Ask the Question

Lemkin describes a conversation with a Chief Revenue Officer who was hiring hundreds of new sales reps without once asking what that headcount could accomplish with a deployed AI BDR instead. His reaction: “I wanted to cry.”

The story resonates because it captures the failure mode precisely. The CRO was not hostile to AI. The CRO simply did not have deployment as a mental model. Hiring humans was the only growth lever in the toolkit. Adding AI agents as a force multiplier never entered the calculus.

This connects to the operational model shift we have documented: the transition from AI as a tool that assists individuals to AI as a workforce that requires direction, coordination, and oversight. The CRO was stuck in the first model. Lemkin is describing what it looks like when organizations fail to make the transition at the leadership level.

Why “Deploy and Forget” Is a Governance Failure

Return to the $6 billion company with the mispricing agent. The failure was not in the deployment. The agent was deployed successfully. It ran. It answered customer questions. It quoted prices. The failure was everything after deployment.

Nobody trained the agent on current pricing. Nobody monitored its outputs against actual price sheets. Nobody built a feedback loop between customer complaints and agent accuracy. Nobody owned it.

This pattern will define the next wave of AI failures. Not spectacular crashes. Quiet degradation. An agent trained on last quarter’s data. A chatbot referencing a discontinued product. A sales tool quoting terms that legal never approved. Each one small enough to miss. All of them compounding.

The fix is not better technology. The fix is someone whose job is to prevent exactly this. Someone who checks the training data monthly. Someone who audits outputs weekly. Someone who knows the difference between “deployed” and “governed.”
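
To see how small the gap between “deployed” and “governed” really is, here is a minimal sketch of that weekly output audit, assuming the agent’s quoted prices are logged as (SKU, price) pairs and a canonical price sheet exists as a lookup table. Every name and schema below is hypothetical.

```python
# Hypothetical weekly audit: compare the agent's logged price quotes
# against the canonical price sheet. All names and schemas are assumed.

CANONICAL_PRICES = {"PRO-SEAT": 49.00, "ENT-SEAT": 99.00}  # source of truth

def audit_quotes(quote_log, price_sheet, tolerance=0.01):
    """Return every logged quote that deviates from the price sheet."""
    discrepancies = []
    for sku, quoted in quote_log:
        expected = price_sheet.get(sku)
        if expected is None:
            discrepancies.append((sku, quoted, "unknown or discontinued SKU"))
        elif abs(quoted - expected) > tolerance:
            discrepancies.append((sku, quoted, f"expected {expected}"))
    return discrepancies

# One accurate quote, one stale price, one discontinued product.
log = [("PRO-SEAT", 49.00), ("ENT-SEAT", 89.00), ("LEGACY-SEAT", 19.00)]
for issue in audit_quotes(log, CANONICAL_PRICES):
    print(issue)
```

The check itself is a dozen lines. What the $6 billion company lacked was not the script. It was a person accountable for running it and acting on the discrepancies.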

What This Role Actually Looks Like

Strip away the title and the role has four responsibilities:

Selection. Evaluating which AI tools solve real problems versus which ones generate demos. This requires business judgment, not technical judgment. The question is “does this agent replace a workflow that costs us money?” not “is this model architecture impressive?”

Configuration. Setting up the tool correctly. Connecting it to accurate data sources. Defining its boundaries. Telling it what it should not do, which matters as much as telling it what it should do.

Training. Not in the machine learning sense. In the operational sense. Feeding the agent accurate, current information. Testing its outputs before customers see them. Building a feedback mechanism so errors surface fast.

Measurement. Tracking whether the deployment produces the outcome it was hired to produce. Not usage metrics. Outcome metrics. Did the AI BDR book meetings that converted? Did the support agent resolve tickets that stayed resolved? Did the pricing tool quote prices that were correct?

Each of these is a governance function. None of them requires writing code, though the measurement function in particular lends itself to light automation, as the sketch below suggests.
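
To make the measurement point concrete, here is a sketch of an outcome metric rather than a usage metric, assuming each booked meeting is logged with its source and whether it converted. The schema and names are assumptions for illustration.

```python
# Hypothetical outcome metric: conversion rate of meetings booked by an
# AI BDR versus human reps. The Meeting schema is an assumption.

from dataclasses import dataclass

@dataclass
class Meeting:
    source: str      # "ai_bdr" or "human_bdr"
    converted: bool  # did the meeting turn into real pipeline?

def conversion_rate(meetings, source):
    booked = [m for m in meetings if m.source == source]
    if not booked:
        return None  # no data is itself a finding worth escalating
    return sum(m.converted for m in booked) / len(booked)

meetings = [
    Meeting("ai_bdr", True),
    Meeting("ai_bdr", False),
    Meeting("human_bdr", True),
]
print("AI BDR:", conversion_rate(meetings, "ai_bdr"))    # 0.5
print("Human:", conversion_rate(meetings, "human_bdr"))  # 1.0
```

A usage metric would count meetings booked. The outcome metric asks whether they converted, which is the number this role is accountable for.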

The Implication for Organizations

If your AI strategy is “deploy tools and see what happens,” you have the same problem as the $6 billion company. You just have not discovered your mispricing agent yet.

The role Lemkin describes does not need to be a new headcount, though it could be. It needs to be an explicit accountability. Someone in the organization wakes up every morning responsible for whether the AI agents are doing what they are supposed to do, with accurate data, producing measurable results.

Most organizations have this accountability for every other system. Financial systems have controllers. IT systems have administrators. Legal documents have reviewers. AI agents, despite interacting directly with customers and making real business decisions, often have no one.

That is the hire you are not making.


This piece builds on Jason Lemkin’s SaaStr analysis of the “agentic deployment expert” (March 2026), which frames deployment governance as a hiring problem rather than a policy problem.

Victorino Group helps organizations build the governance structures that turn AI deployment into measurable business outcomes. If your agents are deployed but unowned, that is the problem to solve first. Let’s talk.

All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com.
