Don't Fire Your Team. Govern Your AI.
Last week, an article made the rounds on LinkedIn with the title “I fired my team and hired Claude Opus 4.6.” It was clickbait — the author admits as much — but it landed during a week when the provocation felt uncomfortably close to reality.
Between February 5th and 6th, 2026, Anthropic’s Claude Cowork launch helped wipe $285 billion off global SaaS market caps. The Goldman Sachs Software Index (IGV) fell 30% from its October 2025 highs. Thomson Reuters. LegalZoom. Companies whose entire value proposition is structured knowledge work. Financial media coined the term “SaaSpocalypse.”
So when someone writes “I fired my team and hired Claude,” the joke isn’t funny because it’s absurd. It’s funny because people are genuinely considering it.
This article is about why that’s the wrong question — and what the right one looks like.
What the Article Gets Right
Charlie Hills’ piece isn’t actually about firing anyone. It’s a setup guide for Claude Cowork, Skills, and Plugins, covering six workflows: content creation, sales prep, contract review, custom plugins, sales intelligence, and visual diagramming. Beneath the clickbait, there’s useful craft.
Three tips stand out:
Feed Claude real examples, not templates. Templates produce template-quality output. Feeding the model actual artifacts from your work — real emails you’ve sent, real proposals you’ve written — produces output that sounds like you, not like a chatbot. This is good advice.
Document your existing processes before automating them. The article suggests writing down how you actually do things before asking Claude to do them. This is more profound than it sounds. Most teams automate what they think they do, not what they actually do. The gap between those two things is where automation fails.
Skills as persistent instruction sets. Skills in Cowork are reusable playbooks that survive across sessions. Instead of re-explaining your content workflow every time, you encode it once. We’ve written extensively about this pattern in the context of Claude Code — it’s good to see it reaching a broader audience through Cowork.
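In practice, a Skill is just a small instruction file. Here is a minimal sketch following the SKILL.md convention with YAML frontmatter that Anthropic documents for Skills; the name, steps, and file paths are invented for illustration:

```markdown
---
name: weekly-content-brief
description: Drafts our weekly content brief in house style from raw notes
---

# Weekly content brief

1. Read the real examples in `examples/` before writing anything.
2. Match the tone and structure of the three most recent briefs.
3. Output a draft, then a bullet list of claims that need human fact-checking.
```

Encoded once, the workflow survives every session instead of being re-explained each time.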
These are legitimate contributions. If the article stopped here, it would be a solid practitioner guide.
It doesn’t stop here. And what it doesn’t say matters more than what it does.
The Missing Chapter
Claude Cowork shipped with known security vulnerabilities. Not theoretical ones. Documented, demonstrated, acknowledged-by-Anthropic vulnerabilities.
Security researchers identified what they call the “lethal trifecta” — a combination of three properties that makes Cowork uniquely risky:
- Access to private data. Cowork reads your files, emails, and documents. That’s the whole point.
- Exposure to untrusted content. When Cowork browses the web or processes documents from external sources, it ingests content it has no reason to trust.
- External communication capabilities. Cowork can send emails, upload files, and interact with external services.
Any two of these properties are manageable. All three together create an attack surface that is, to use the researchers’ term, lethal. A malicious document can inject instructions that cause Cowork to exfiltrate your private files to an attacker’s account — and this isn’t hypothetical. It has been demonstrated.
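The logic of the trifecta is worth making explicit: the risk is not any single capability but their conjunction. A minimal sketch in Python, using hypothetical capability flags rather than any real Cowork configuration:

```python
# Hypothetical capability flags for an agent deployment; names are illustrative.
RISK_LEGS = ("private_data", "untrusted_content", "external_comm")

def trifecta_status(capabilities: dict) -> str:
    """Classify an agent configuration by how many risk legs are enabled."""
    enabled = [leg for leg in RISK_LEGS if capabilities.get(leg)]
    if len(enabled) == 3:
        return "lethal"      # injected instructions can reach and exfiltrate data
    return "manageable"      # at least one leg of the attack path is cut

# Cutting any single leg (here: no external communication) breaks the chain.
print(trifecta_status({"private_data": True, "untrusted_content": True,
                       "external_comm": False}))  # manageable
```

The practical takeaway: if a workflow genuinely needs all three legs, it belongs in your highest-control tier; if it can live without one, cut that one.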
The numbers are sobering. Research published in early 2026 shows that 20% of jailbreak attempts against frontier models succeed within 42 seconds. Of those successful attacks, 90% result in data leaks. A technique called “Camouflage Attacks” — where adversarial instructions are hidden in seemingly benign content — achieved 65% success rates across eight different models.
Anthropic knows this. Their own safety documentation recommends avoiding granting Cowork access to sensitive documents like financial records and credentials. They advise watching for unexpected patterns — Claude accessing files you didn’t mention, or expanding its task scope beyond what you asked. They explicitly state that web content is the primary vector for prompt injection attacks.
And buried in the terms of service: users “remain responsible for all actions taken by Claude.”
Read that again. You are responsible for everything the AI does on your behalf, including things it does because an attacker tricked it.
The article about “firing your team” doesn’t mention any of this. Not a word about the lethal trifecta. Not a word about prompt injection. Not a word about who bears liability when things go wrong.
The Enterprise Reality
For our clients — mid-market companies in regulated industries — “fire your team and hire Claude” is not a strategy. It’s a liability event waiting to happen.
But the inverse is equally wrong. Ignoring Claude Cowork because of its risks is like ignoring email in 1998 because of spam. The capability is real. The productivity gains are real. The question is how you adopt it without creating exposure that exceeds the value.
Here’s what I see in practice: companies fall into one of two traps.
Trap one: Unrestricted adoption. Someone on the team discovers Cowork, starts feeding it client contracts, financial data, and internal strategy documents. Productivity spikes. No one asks what data is being processed, where it’s going, or what happens when the model misinterprets an instruction. This works until it doesn’t — and when it doesn’t, the failure mode is a data breach, not a typo.
Trap two: Blanket prohibition. Leadership hears about the security risks and bans all AI tools. Meanwhile, employees use personal accounts anyway, with zero governance and zero visibility. Shadow AI is the new shadow IT, except the attack surface is larger and the data exposure is worse.
Both traps share the same root cause: the absence of governance infrastructure.
Governance Is Not Bureaucracy
There’s a reflex in technology organizations to equate governance with bureaucracy. Approval forms. Committee meetings. Six-week review cycles. I understand the allergy.
That’s not what governance means here.
Governance means knowing the answers to three questions at all times:
- What data can the AI access? Not “what data does it theoretically have access to” but “what data boundaries have we explicitly defined and enforced?”
- What actions can the AI take? Can it send emails? Upload files? Modify production data? Each capability needs a deliberate yes or no.
- Who reviews what the AI did? Not before every action — that kills the value proposition. But systematically, after the fact, with audit trails that make review possible.
This isn’t a bureaucratic process. It’s infrastructure. The same way you don’t debate whether to use HTTPS every time you deploy a web application — you build it into the stack and move on.
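Concretely, the answers to those three questions can live in a machine-readable policy object rather than a wiki page. A minimal sketch in Python; every field name and default here is a hypothetical example, not Cowork's actual configuration surface:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIGovernancePolicy:
    # Q1: what data can the AI access? An explicit allowlist, not "everything".
    allowed_paths: tuple = ("shared/marketing/", "shared/meeting-notes/")
    # Q2: what actions can it take? Each capability is a deliberate yes or no.
    can_send_email: bool = False
    can_upload_files: bool = False
    can_modify_records: bool = False
    # Q3: who reviews what it did, and where is the trail?
    audit_log: str = "logs/ai-actions.jsonl"
    reviewer: str = "ops-lead"
```

Enforcement still has to happen at the tool boundary, but a policy object like this makes the answers explicit, diffable, and reviewable, which is exactly the HTTPS-style "build it into the stack" property.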
A Graduated Adoption Framework
Here’s how we advise clients to adopt AI workflows like Cowork:
Tier 1: Low-Risk, High-Value (Start Here)
Content summarization. Meeting note cleanup. Research compilation. Internal FAQ generation. Document formatting.
These workflows share two properties: the data involved is low-sensitivity, and the output is reviewed by a human before it reaches anyone external. If the model hallucinates or misinterprets, the cost is a rewrite, not a breach.
Start here. Build comfort. Establish patterns.
Tier 2: Medium-Risk, Requires Boundaries
Sales prep. Competitive analysis. Draft communications. Internal reporting.
These workflows involve data that matters — client names, revenue figures, competitive intelligence — but the output stays internal or goes through human review. The governance requirement: explicit data boundaries (what folders and files the AI can access) and output review before external distribution.
Tier 3: High-Risk, Requires Controls
Contract review. Financial analysis. Compliance checking. Client-facing deliverables.
These workflows involve sensitive data and produce output with real consequences. The governance requirement: audit trails, permission scoping, human approval for irreversible actions, and regular review of what the AI accessed and produced.
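The audit-trail requirement can start as simply as an append-only log written at a single choke point that every AI action passes through. A sketch under that assumption; the record fields and actor names are illustrative:

```python
import json
import time

def log_ai_action(log_path: str, actor: str, action: str, resources: list) -> None:
    """Append one audit record per AI action to an append-only JSONL file."""
    record = {
        "ts": time.time(),       # when it happened
        "actor": actor,          # which workflow or agent acted
        "action": action,        # e.g. "read", "draft", "send"
        "resources": resources,  # files or systems touched
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

One line per action is enough to answer, after the fact, the question governance depends on: what did the AI actually access and produce?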
The Hard Rule
No AI workflow should have the ability to take irreversible actions without human confirmation. Send an email to a client? Human confirms. Upload a file to an external system? Human confirms. Modify a financial record? Human confirms.
This isn’t because AI is unreliable. It’s because irreversible actions in any system — human or automated — deserve a confirmation step. We don’t let junior employees wire money without approval. The same principle applies to AI agents.
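The hard rule translates directly into code: wrap every irreversible tool in a gate that fails closed unless a human explicitly says yes. A minimal sketch; the action names and functions are hypothetical examples, not a real integration:

```python
from functools import wraps

def requires_confirmation(action_name: str):
    """Gate an irreversible action behind an explicit human yes; fail closed."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            answer = input(f"Confirm irreversible action '{action_name}'? [y/N] ")
            if answer.strip().lower() != "y":
                raise PermissionError(f"{action_name}: no human confirmation")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@requires_confirmation("send client email")
def send_client_email(to: str, body: str) -> None:
    ...  # hypothetical: hand off to the actual mail integration
```

Note the default: anything other than an explicit "y" aborts the action. Reversible work flows freely; irreversible work stops at a human.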
What Actually Changes
The article that triggered this essay describes six workflows. Every one of them is useful. Content creation with Skills. Sales prep with persistent memory. Contract review with extended thinking. These are real productivity gains.
But the article presents them as a replacement for human work. They’re not. They’re a reconfiguration of human work.
The person who used to spend four hours writing a sales brief now spends thirty minutes directing and reviewing one. The person who used to manually cross-reference contracts now validates AI-generated analyses. The work isn’t eliminated — it’s elevated from production to supervision.
This is the pattern that actually works in enterprise adoption: AI handles production, humans handle judgment. AI handles volume, humans handle stakes. AI handles the repeatable, humans handle the irreversible.
The Real Competition
The companies that will win in 2026 and beyond won’t be those that fired their teams fastest. They’ll be those that built governance infrastructure that let them adopt AI workflows confidently and at scale.
The SaaSpocalypse isn’t about AI replacing jobs. It’s about AI requiring a new kind of oversight — and the companies that build that oversight first will move faster than those still debating whether to adopt at all.
The $285 billion market correction happened because investors understood something that many operators haven’t yet internalized: structured knowledge work is being automated. The question isn’t whether your workflows will be affected. The question is whether you’ll have the governance in place to capture the value without absorbing the risk.
Don’t fire your team. Govern your AI. The team is how you govern it.
Sources
- Charlie Hills. “I fired my team and hired Claude Opus 4.6.” MarTech AI Substack, February 8, 2026.
- SaaSpocalypse market impact: FinancialContent, CNBC, Entrepreneur, CFOsTimes. February 5-6, 2026.
- “Lethal trifecta” and Cowork security analysis: GovInfoSecurity, The Register, Security Boulevard, ByteIota. 2026.
- Jailbreak success rates and Camouflage Attacks: Security Boulevard, peer-reviewed research. 2026.
- Anthropic safety recommendations and terms of service: anthropic.com. 2026.
At Victorino Group, we help companies adopt AI workflows with governance that makes automation trustworthy. If you’re navigating this transition, contact us.