Anthropic and the Pentagon: When AI Safety Becomes a Supply Chain Risk
Defense Secretary Pete Hegseth is close to designating Anthropic a “supply chain risk.” That designation is normally reserved for foreign adversaries like Huawei or Kaspersky. The target here is a San Francisco AI company whose principal offense is insisting its technology should not be used for mass domestic surveillance or fully autonomous weapons.
This is not a defense procurement story. It is a governance story that affects every enterprise evaluating AI vendors.
What Actually Happened
The facts, assembled from Axios, the New York Times, the Wall Street Journal, and Fast Company reporting between February 13 and 19, 2026:
Anthropic’s Claude is the only frontier AI model deployed on the Pentagon’s classified networks, integrated through a partnership with Palantir. The Defense Department awarded the deal last summer; the contract is valued at up to $200 million. In January 2026, Claude was used during the U.S. military operation to capture Venezuelan President Nicolás Maduro.
After the raid, tensions escalated. The Wall Street Journal reported that Anthropic employees raised concerns with Palantir about the role Claude played. Anthropic denies any such outreach. The Pentagon claims it happened.
Now the Pentagon is demanding that all four frontier AI labs — Anthropic, OpenAI, Google, and xAI — agree to let the military use their models for “all lawful purposes.” Anthropic is willing to loosen its terms but wants to maintain two restrictions: no mass domestic surveillance, and no fully autonomous weapons. The Pentagon says those conditions are unworkable.
A senior Pentagon official told Axios: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
The Supply Chain Designation as Governance Weapon
The term “supply chain risk” has a specific legal and regulatory meaning in federal procurement. Under the Federal Acquisition Regulation, a supply chain risk designation requires every company doing business with the Pentagon to certify it does not rely on the designated entity in its operations.
Anthropic claims eight of the ten largest U.S. companies use Claude. Even discounting that as a self-reported figure from a funding announcement, Claude is embedded in enterprise software stacks across industries. A supply chain risk designation would force defense contractors to audit their entire technology stack for Claude usage — a compliance exercise of enormous scope.
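To make that scope concrete, here is a rough, hypothetical sketch of what the first pass of such an audit might look like for a contractor’s own code: a script that walks a repository and flags dependency manifests and source files referencing the Anthropic SDK or API. The file types, package names, and patterns below are assumptions for illustration, not an official checklist.

```python
# Minimal sketch of a first-pass audit for Claude/Anthropic usage in a codebase.
# The indicators and file types below are illustrative assumptions, not an
# exhaustive or authoritative list.
import re
from pathlib import Path

# Strings that commonly signal a direct Anthropic dependency or API call.
INDICATORS = [
    r"\banthropic\b",        # SDK package name in manifests or imports
    r"api\.anthropic\.com",  # direct API endpoint references
    r"\bclaude-[\w.-]+\b",   # model identifiers passed through gateways
]
MANIFESTS = {"requirements.txt", "pyproject.toml", "package.json", "Pipfile", "go.mod"}
SOURCE_EXTS = {".py", ".js", ".ts", ".java", ".go", ".rb", ".tf", ".yaml", ".yml"}


def scan_repo(root: str) -> list[tuple[str, str]]:
    """Return (file, matched pattern) pairs for files referencing Anthropic/Claude."""
    patterns = [re.compile(p, re.IGNORECASE) for p in INDICATORS]
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.name not in MANIFESTS and path.suffix not in SOURCE_EXTS:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pat in patterns:
            if pat.search(text):
                hits.append((str(path), pat.pattern))
                break
    return hits


if __name__ == "__main__":
    for file, pattern in scan_repo("."):
        print(f"{file}: matched {pattern}")
```

Even a trivial pass like this covers only code the contractor writes itself. The harder problem is the SaaS and platform layer, where Claude may be embedded invisibly behind another vendor’s product.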
Alex Bores, a former Palantir employee now running for Congress, described the practical consequence: “The vast majority of companies that now use Claude would all of a sudden be ineligible for working for the government.”
This is unprecedented. Supply chain designations target foreign adversaries who pose intelligence or sabotage risks. Using this mechanism against a domestic company over a policy disagreement about AI safety terms transforms a procurement tool into a governance weapon.
The “All Lawful Purposes” Problem
The Pentagon’s demand sounds reasonable in isolation: “all lawful purposes” seems, by definition, to rule out anything illegal. The problem is that “lawful,” in the context of AI-enabled surveillance and autonomous weapons, is poorly defined.
Existing mass surveillance law was written before AI made it possible to process, correlate, and act on population-scale data in real time. The Pentagon already has legal authority to collect social media posts, concealed carry permits, and other data about American citizens. AI supercharges that authority into something qualitatively different from what the law originally contemplated.
When the Pentagon asks for “all lawful purposes” access to an AI model, it is asking for a blank check denominated in a currency whose value has not been established. The law has not caught up to the capability. “Lawful” is undefined in the relevant sense.
Anthropic’s position — yes to military use, no to mass surveillance and fully autonomous weapons — is an attempt to draw governance lines that the law has not yet drawn. The Pentagon’s position is that Silicon Valley executives should not be drawing those lines at all.
Both positions have merit. Neither resolves the underlying problem: there is no governance framework adequate to the capability being deployed.
The Competitive Landscape Tells the Governance Story
The other three labs in this negotiation reveal how competitive dynamics interact with governance choices.
xAI, founded by Elon Musk, has reportedly told the Pentagon it accepts “all lawful use” at any classification level. xAI was the only frontier lab to bid in the Pentagon’s autonomous drone software contest. Musk has direct political access to the current administration and has publicly criticized rivals’ safety commitments as “woke.”
OpenAI has removed its ordinary safeguards for unclassified military systems but is bidding only for limited applications — voice-to-digital translation, not drone control or weapon integration. An OpenAI spokesperson told Axios that classified work “would require us to agree to a new or modified agreement.”
Google has also lifted safeguards for unclassified use but has not commented on classified work. Google has institutional memory of the 2018 Project Maven revolt, when engineers protested AI for drone footage analysis and the company walked away from the contract after a damaging internal fight.
A senior administration official acknowledged that the public fight with Anthropic was a “useful way to set the tone” for negotiations with the other three labs. This is negotiation by public example. The message to OpenAI and Google: comply fully, or watch what happens to the company that did not.
Palantir: The Infrastructure Dependency No One Is Discussing
Caught in the middle is Palantir, the defense contractor that provides the secure cloud infrastructure connecting Claude to classified Pentagon networks.
Palantir has stayed quiet as tensions escalate. This silence is strategic but unsustainable. A supply chain risk designation would force Palantir to sever its relationship with Anthropic — one of its most important AI partnerships. The infrastructure dependency runs in both directions: Palantir needs Claude’s capabilities, and Anthropic needs Palantir’s classified network access.
This is a single point of governance failure. When one infrastructure provider mediates between an AI vendor and the world’s largest military customer, that provider becomes the nexus of every governance tension between the two. Palantir’s position is untenable not because of anything Palantir did, but because the governance framework does not account for infrastructure intermediaries in AI deployment.
What This Means for Enterprise AI Governance
If you run enterprise AI procurement, this dispute introduces a new category of vendor risk that did not exist six months ago.
Safety commitments are now procurement liabilities. Anthropic built its brand on responsible AI development. Enterprise buyers valued that positioning. The Pentagon dispute demonstrates that the same safety commitments can trigger government retaliation that cascades through the supply chain. A vendor’s ethical stance is no longer a pure positive in procurement evaluation — it is a variable that interacts with government relationships in unpredictable ways.
Multi-vendor strategy is no longer optional. Any organization with government contracts or government-adjacent customers cannot afford dependency on a single AI vendor. The supply chain risk designation mechanism means that a single vendor’s government relationship can disqualify your entire technology stack. Governance requires redundancy.
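In practice, redundancy starts with an abstraction boundary between application code and any single vendor’s API, so a disqualified provider can be swapped by configuration rather than by rewrite. The sketch below is a minimal illustration; the class names and default model identifiers are hypothetical, and the SDK call shapes reflect recent versions of the anthropic and openai Python packages, which may change.

```python
# Minimal sketch of a provider-agnostic boundary between application code and
# any single AI vendor. Names are illustrative; SDK call shapes may differ by version.
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """Narrow interface the rest of the stack depends on, instead of a vendor SDK."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class AnthropicProvider(ChatProvider):
    def __init__(self, model: str = "claude-sonnet-4-20250514"):  # model id illustrative
        import anthropic  # imported lazily so the dependency stays optional
        self._client = anthropic.Anthropic()
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.messages.create(
            model=self._model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text


class OpenAIProvider(ChatProvider):
    def __init__(self, model: str = "gpt-4o"):  # model id illustrative
        import openai
        self._client = openai.OpenAI()
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


def get_provider(name: str) -> ChatProvider:
    """Single switch point: swapping vendors is a config change, not a rewrite."""
    providers = {"anthropic": AnthropicProvider, "openai": OpenAIProvider}
    return providers[name]()
```

The specific interface matters less than where the dependency is concentrated: behind one seam that both engineering and procurement can see and, if necessary, switch.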
Vendor evaluation now has a geopolitical dimension. Enterprise AI procurement has traditionally evaluated vendors on model performance, cost, security, and compliance. Add to that list: vendor-government relationship status, political exposure of vendor leadership, and risk of regulatory retaliation. These are not technical evaluations. They require a different competency in the procurement team.
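Teams that want to make this explicit rather than leave it to hallway judgment can fold the new criteria into the same weighted scorecard already used for performance, cost, and security. The sketch below is purely illustrative: the dimensions mirror the list above, and the weights and scores are assumptions each procurement team would set for itself.

```python
# Illustrative vendor scorecard that adds geopolitical exposure alongside the
# traditional procurement dimensions. Weights and scores are assumptions.
from dataclasses import dataclass


@dataclass
class VendorScore:
    """Each dimension scored 0-10; higher is always better (i.e., lower risk)."""
    vendor: str
    performance: float              # model quality on the org's own evals
    cost: float                     # favorable pricing at required volume
    security_compliance: float      # existing security/compliance audit result
    government_relationship: float  # stability of the vendor-government relationship
    leadership_insulation: float    # low political exposure of vendor leadership
    retaliation_resilience: float   # low risk of regulatory or procurement retaliation

    def weighted_total(self, weights: dict[str, float]) -> float:
        return sum(getattr(self, dim) * w for dim, w in weights.items())


# Example weighting: geopolitical factors get real weight, not a footnote.
WEIGHTS = {
    "performance": 0.30,
    "cost": 0.15,
    "security_compliance": 0.20,
    "government_relationship": 0.15,
    "leadership_insulation": 0.10,
    "retaliation_resilience": 0.10,
}

# Two hypothetical vendors compared on the same weights.
a = VendorScore("vendor_a", 9, 6, 8, 4, 5, 4)
b = VendorScore("vendor_b", 8, 7, 8, 7, 7, 7)
ranked = sorted([a, b], key=lambda v: v.weighted_total(WEIGHTS), reverse=True)
```

The value is not in the arithmetic but in forcing the geopolitical dimensions to be scored at all, by someone accountable for the answer.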
The “all lawful purposes” standard will spread. If the Pentagon establishes this as the baseline for AI vendor relationships, other government agencies will follow. The Department of Homeland Security, the intelligence community, state and local law enforcement — all will adopt similar language. Every enterprise selling to government needs to understand what this standard means for their AI vendor choices.
The Question No One Is Answering
One source familiar with the negotiations told Axios something that deserves more attention than it received: “If there’s a one in a million chance that the model might do something unpredictable, is that one in a million chance so catastrophic that it’s not worth taking?”
This is the core governance question. Not whether AI should be used by the military. Not whether safety restrictions are appropriate. The question is: who decides the acceptable risk threshold for AI systems whose behavior is not fully predictable, deployed in contexts where errors can be lethal?
The Pentagon says the military decides. Anthropic says the developers have a responsibility too. The governance framework says nothing, because it does not exist yet.
When the Pentagon considers your AI safety policy a supply chain risk, the governance conversation has moved from engineering to geopolitics. Every enterprise AI procurement now has a new dimension.
The organizations that build governance architectures capable of navigating this dimension will maintain their ability to operate across sectors. The ones that treat AI vendor selection as a purely technical decision are exposed to a category of risk they have not yet priced.
Sources: Axios (Feb 16, 19, 2026), New York Times (Feb 18, 2026), Fast Company (Feb 17, 2026), Wall Street Journal (Feb 2026). Anthropic’s revenue and customer claims are self-reported from its Series G funding announcement.
For governance assessment and AI vendor risk evaluation: contact@victorinollc.com | www.victorinollc.com