AI-Native Operating Maturity: Hiring, PM, and Support Get Rebuilt at Once
In the same week, three different organizational functions published what their post-AI hygiene now looks like. Sierra rewrote the engineering interview around plan, build, demo, with the candidate’s AI tools of choice and zero syntax screening. Marcus Moretti at Every published an agent-native PM playbook where the conversation is the work, not a precursor to it. Jason Lemkin at SaaStr called out the AI vendors whose own customer support is missing or theatrical.
None of these three pieces references the others. They form a single argument anyway.
AI-native operating maturity is not a single program a CTO runs. It is a concurrent rebuild of how each function recruits, plans, ships, and answers the phone. The companies that have rebuilt look fundamentally different from the companies that haven’t, and the difference is now visible from outside the building. That visibility turns operating maturity into a procurement signal you can read in 30 seconds.
This is the architectural review your operating leaders should run this month, one function at a time.
Function One: Hiring Stops Screening for Yesterday
Sierra’s interview redesign, authored by Vijay Iyengar, Arya Asemanfar, and Angie Wang, has three stages. Stage one is plan: the candidate walks into a problem the way a senior engineer walks into a real ticket, asking clarifying questions, naming assumptions, sketching an approach. Stage two is build: a two-hour solo session, the candidate’s AI tools of choice, the candidate’s editor, the candidate’s keyboard shortcuts. Stage three is demo: walk us through what you built, where you got stuck, and what you would do next.
What is gone is the syntax test. The whiteboard tree traversal. The trick question whose answer the candidate either has memorized or doesn’t.
What replaced the syntax test is a question Iyengar’s team frames explicitly: “Where would this person thrive, and how do we support them?” Not “where are the weaknesses we screen out for.” That reframe is the actual change. The interview format is downstream of it.
Most engineering organizations still interview for the 2018 job. They evaluate whether a candidate can produce, from memory, a function the candidate’s IDE will produce in 0.4 seconds with Tab. The interview is testing for capability the role no longer requires while ignoring capability the role now demands: judgment about which tools to reach for, taste about which generated code to accept, the ability to demo and defend an approach to a skeptical room. We argued earlier in this arc that the ROI deficit shows up first in the org chart, not in the model. Hiring is one of the org chart muscles that has to rebuild before the ROI math gets unstuck.
If your engineering interview still includes “implement reverse-a-linked-list without an IDE,” your hiring loop is screening out the candidates who are best at the job you actually have.
Function Two: PM Becomes a Conversation, Not a Document
Moretti’s piece on agent-native product management opens with a sentence worth pinning to a wall: “The conversation is the work.” The PM does not write a brief, hand it to a designer, hand it to engineering, then wait for a build. The PM iterates with agents in real time. Plan, ship, review, repeat, on a cycle that used to take three weeks and now takes an afternoon.
The structural change is visible in the time math. Moretti gives the example of a three-hour analytics investigation that an agent-native PM now finishes in minutes. The investigation is not the artifact. The decision the investigation enables is the artifact. Compressing the investigation by two orders of magnitude does not just save time. It changes which decisions are worth making. Decisions that used to require a meeting because the investigation was too expensive can now be made between standups.
What does not change is the rigor. Moretti pulls his strategy frame from Richard Rumelt: name the target problem, name the approach, name the personas, define three to five SMART metrics, draw two to four work tracks. The PM is not generating more output. The PM is making more decisions per week, with the same skeleton holding it together.
The failure mode is obvious. PMs who treat agents as faster Jira ticket writers will produce more tickets faster, and the org will drown. PMs who treat agents as conversation partners and use the time savings to make harder decisions will leave the ticket-writers behind. We described an AI-native team shape where the role is operator of intelligence, not author of artifacts. PM is the role where this shift is most legible. If your PMs spend the same percentage of the week writing PRDs as they did in 2023, the role has not actually moved.
Function Three: Support Is the Tell
Lemkin’s piece on AI vendor support is the cruelest of the three. He sorts vendors into four tiers. Tier one: no support page exists. Tier two: an automation that abandoned itself, where the chatbot times out, the email bounces, the help center has 12 articles all written in 2024. Tier three: a black-hole form, where the customer types into a textarea, clicks submit, and never hears from a human. Tier four: actual support, which Lemkin calls “rare. Almost always enterprise-focused.”
The brutal line: “The cost of no support isn’t the support tickets you don’t answer. It’s churn you can’t see.”
In three of the four tiers, the AI vendor in question is selling automation it cannot operate for itself. The pitch deck claims to deflect 80% of customer contacts. The vendor’s own customer contacts deflect 100% of themselves into a void. This is not a small irony. It is the loudest possible tell that the vendor’s product has not been operationalized inside the vendor’s own walls.
Operating maturity now reads from the outside. A 30-second test on any AI vendor you are considering: try to contact them as a paying customer. If the contact path is a form with no SLA, a chatbot that loops, or no path at all, you are buying technology from a company that has not built the muscle to support it. That muscle is what you will need when the technology hits an edge case in your environment. You will need it next month, not next year, because edge cases in agent-driven products surface faster than edge cases in human-driven ones.
This is not a service-quality complaint. It is a procurement signal. A vendor that cannot operate support is a vendor whose product has not crossed the chasm from demo to operation.
The Single Pattern
Sierra rebuilt hiring. Moretti rebuilt PM. Lemkin diagnosed support. Each piece would stand on its own. Together they describe a transition every operating function is going through, on its own clock, with its own playbook.
The shape of the transition is the same in each case. The function used to optimize for an artifact: a passing interview, a written PRD, a tier-3 ticket queue. The function now optimizes for a decision rate: hires per quarter where the new hire is operating productively in week three, decisions per week made by a PM who can investigate in minutes, customer issues resolved per week with the customer still present. The artifact was a proxy for the decision. AI removes the cost of the proxy. Functions that still measure the proxy are measuring 2023.
What this means for operators is unromantic. You do not get to rebuild your operating model in one program. You rebuild it function by function, and each function requires its own playbook, its own metrics, and its own evidence that the rebuild has happened. Hiring without the new interview format is not modernized hiring. PM without the conversation-as-work cycle is not modernized PM. Support without an SLA is not support, regardless of what the website says.
The companies that get this right will look obviously different in twelve months. The companies that get it wrong will look the same, with a higher AI bill, and an attrition rate that nobody can explain. We described this dynamic in the broader AI-native organization argument: the form of the company changes, not the headcount. The three pieces this week are evidence the form is changing in three places at once.
What to Do This Week
Pick the function closest to you. Then ask three questions of it.
What artifact does this function still optimize for that AI has made cheap? Identify it. Reverse the optimization. The artifact is no longer the work.
What decision rate would prove the function has rebuilt? Pick a specific number. Hires onboarded in week three. PM decisions per week with documented metrics. Support tickets resolved within a defined SLA. Track it weekly.
What would an outside observer see, in 30 seconds, that proves the rebuild? Sierra’s interview format is visible to candidates. Moretti’s PM cycle is visible in shipped product cadence. Lemkin’s support test is visible to anyone with a contact form. Operating maturity that is invisible from outside the building is operating maturity nobody is buying.
Three functions published their playbooks in one week. The pattern they describe is not three trends. It is one trend that arrived in three uniforms. The work is to walk into your own building and find which functions have rebuilt, which are pretending, and which have not started.
This analysis synthesizes The AI-Native Interview (Sierra, April 2026), A Guide to Agent-Native Product Management (Every / Marcus Moretti, 2026), and Why Does No One in AI Have Support? (SaaStr / Jason Lemkin, May 2026).
Victorino Group helps leaders treat AI-native operating maturity as a procurement signal, not a vibe. Let’s talk.
All articles on The Thinking Wire are written with the assistance of Anthropic's Opus LLM. Each piece goes through multi-agent research to verify facts and surface contradictions, followed by human review and approval before publication. If you find any inaccurate information or wish to contact our editorial team, please reach out at editorial@victorinollc.com. About The Thinking Wire →