01 — Context: The Partnership
On February 27, 2026, the Pentagon designated Anthropic a supply chain risk -- the first time the United States had ever applied that label to an American company. The designation ordered every federal agency to stop using Claude. The next day, as Operation Epic Fury launched against Iran, Claude reportedly helped identify over a thousand targets in the first twenty-four hours, according to unnamed military and intelligence sources.
Anthropic had entered the defense space deliberately. CEO Dario Amodei called the company "the most lean forward of all the AI companies in working with the U.S. government and working with the U.S. military." In November 2024, Anthropic partnered with Palantir Technologies and Amazon Web Services to bring Claude to classified government networks. In July 2025, Anthropic signed a $200 million Pentagon contract. Claude became the first frontier AI model approved for use on classified systems.
Claude was integrated into Palantir's Maven Smart System -- the military's core targeting platform, which fuses satellite imagery, drone feeds, radar data, and signals intelligence. Maven was formalized as an official program of record in March 2026, mandated across all military branches, with total investment growing from $480 million in 2024 to $13 billion. The Pentagon characterized Claude's role as "synthesizing documents" and "making logistics and supply chains more efficient." Multiple independent reports described a wider scope: target identification, strategic importance ranking, and strike package generation. No official has reconciled these characterizations.
Anthropic supported intelligence analysis, cyber operations, operational planning, and what it described as "lawful foreign intelligence and counterintelligence missions." By its own account, the company accepted 98 or 99 percent of the military's proposed use cases. It drew the line at two: no mass domestic surveillance of Americans, and no fully autonomous weapons systems capable of selecting and engaging targets without human involvement. These were the red lines.
The Red Line — A thin orange line bisects the canvas. Particles drift on both sides, doing identical work. The line flares with enforcement pulses, but particles pass through it freely, unaffected. The boundary persists but divides nothing.
02 — Escalation: The Raid and the Ultimatum
In January 2026, U.S. special operations forces captured Venezuelan President Nicolas Maduro in a raid. According to Axios and Semafor, citing unnamed sources, Claude was used operationally during the raid through the Palantir integration. Afterward, according to the same sources, a senior Anthropic executive contacted a senior Palantir executive to ask whether Claude had been used. The Palantir executive reported the exchange to the Pentagon, interpreting it as a sign that Anthropic might disapprove of Claude's use in kinetic operations. Anthropic denied that it had discussed the use of Claude for specific operations with the Department of War -- a denial that did not address contact with Palantir.
The machinery of government swung fast. On February 23, Elon Musk's xAI signed an agreement granting the military access to Grok in classified systems under the "all lawful use" terms Anthropic had refused. The next day, Defense Secretary Pete Hegseth issued a 72-hour ultimatum to Amodei: allow unrestricted use of Claude "for all legal purposes" by 5:01 PM on February 27, or face consequences.
Anthropic publicly refused on February 26. The company released a statement calling the Pentagon's "final offer" a sham. "The contract language we received overnight from the Department of War made virtually no progress," Amodei said. "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." He characterized the government's twin postures as "inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security."
On February 27, Trump directed all federal agencies to cease using Anthropic. Hegseth designated Anthropic a supply chain risk -- the designation previously reserved for foreign adversaries like Huawei and Kaspersky. The cascade was immediate: every defense contractor, including Amazon, Microsoft, and Palantir, was required to certify that it did not use Claude in military work. Defense tech companies began dropping Claude. Amodei called the actions "retaliatory and punitive." Then, on March 4, Pentagon CTO Emil Michael emailed Amodei saying the two sides were "very close" on the disputed issues. Two days later, Michael publicly denied any active negotiation.
03 — The Mechanism: Too Embedded to Remove
The ban did not remove Claude from military operations. It could not. Pentagon CTO Emil Michael described the moment leadership realized the extent of the dependency: "I'm like, holy shit, what if this software went down, some guardrail picked up, some refusal happened for the next fight like this one and we left our people at risk?" He called it "a whoa moment for the whole leadership at the Pentagon."
Replacing Claude was not a matter of swapping one vendor for another. Defense One reported the Pentagon would need "three months or longer" to replace Claude's capabilities. Military Times reported that recertification of replacement systems on classified networks could take 12 to 18 months. Trump signed an executive order giving the military six months to complete the transition. According to an anonymous DOD IT contractor quoted by Military Times, some agencies planned to "slow-roll" the phase-out, betting on a resolution before the deadline expired.
The replacement dynamic compounded the problem. On the same day Operation Epic Fury launched, OpenAI announced a deal allowing military use of its technologies in classified settings. Sam Altman acknowledged the negotiations were "definitely rushed" and, after employee protests, conceded the initial agreement was "opportunistic and sloppy." OpenAI's deal stated its "two most important safety principles" were prohibitions on domestic mass surveillance and autonomous weapons -- the same restrictions Anthropic demanded. The difference: OpenAI relied on existing law rather than contractual prohibitions and agreed to the "any lawful purpose" language. The EFF called the terms "weasel words," noting that "intentionally" was load-bearing and key terms like "tracking," "surveillance," and "monitoring" were left undefined. The market punished the company that insisted on binding restrictions and rewarded the one that deferred to existing law.
04 — Consequence: The Ruling
On March 9, Anthropic filed two lawsuits -- one in the Northern District of California, one in the D.C. Circuit Court of Appeals -- alleging First Amendment retaliation, Fifth Amendment due process violations, and Administrative Procedure Act violations. The complaint called the government's actions "unprecedented and unlawful" and named the Department of Defense, Secretary Hegseth, and over a dozen other federal agencies as defendants. Anthropic alleged the Pentagon had skipped every mandatory procedural step Congress required for a supply chain risk designation: risk assessment, notification, opportunity to respond, written determination, congressional notification.
The breadth of support was unusual. Microsoft filed an amicus brief urging a "negotiated resolution." Nearly 50 Google and OpenAI employees, including Google DeepMind chief scientist Jeff Dean, filed a brief arguing the Pentagon had acted "recklessly." Nearly two dozen retired senior military and intelligence officers, including former CIA and NSA director Michael Hayden, filed in support. The EFF, FIRE, the Cato Institute, and the American Federation of Government Employees union all filed separately.
On March 24, Judge Rita F. Lin held a hearing in San Francisco. She called the Pentagon's actions "troubling" and said they "don't really seem to be tailored to the stated national security concern." She told the government's lawyers: "It looks like an attempt to cripple Anthropic." She noted that the government's position would allow designating any company a supply chain risk for being "stubborn" and asking "annoying questions." The government argued, through DOJ attorney Eric Hamilton, that Anthropic's negotiating stance created a sabotage risk -- that the company might deploy "kill switches" in future software updates. Anthropic's lawyer Michael Mongan countered that actual saboteurs would secretly accept terms, not argue in public.
Two days later, on March 26, Judge Lin issued a 43-page preliminary injunction blocking enforcement of the supply chain risk designation and the presidential directive. She found Anthropic was likely to succeed on the merits of its First Amendment retaliation claim, its due process claim, and its APA claim. Judge Lin cited an internal DOD memo stating the risk had escalated due to Anthropic's "increasingly hostile manner through the press." She cited Trump calling Anthropic a "radical left, woke company."
The ruling is preliminary. Judge Lin imposed a seven-day stay on her own order, giving the government time to appeal. Hours after the ruling, Emil Michael posted on X that the order contained "dozens of factual errors" and that the Section 4713 supply chain risk designation -- the FASCSA designation, distinct from the Section 3252 designation Lin enjoined -- "is in full force and effect." A separate case pending in the D.C. Circuit addresses the FASCSA designation independently. The legal fight is not over.
05 — Signal: The Precedent
The structural trap at the center of this case is not a legal technicality. It is a design problem. Anthropic built Claude to be the best military AI available. It succeeded. That success put the company's ethical restrictions beyond the government's reach: it could not compel compliance, because Anthropic refused; it could not punish noncompliance, because it needed the product; and it could not replace the product, because no replacement was ready.
Anthropic's competitors stepped into the gap with weaker safeguards. OpenAI's deal was, by its own CEO's admission, "rushed" and "sloppy." The market dynamic was clear: the company that insisted on contractual prohibitions against mass surveillance and autonomous weapons was designated a national security threat, while the companies that deferred to existing law were rewarded with contracts. Amodei framed it directly: "We are patriotic Americans. Everything we have done has been for the sake of this country." And: "Disagreeing with the government is the most American thing in the world."
Judge Lin's ruling identified the contradiction the government could not resolve: if Claude is too dangerous to trust, stop using it. The government did not stop using it. The red lines held -- but only because the AI was already too embedded to remove. The preliminary injunction has a seven-day stay. The D.C. Circuit case is pending. Emil Michael insists the FASCSA designation survives. The question this case leaves open is not whether Anthropic's red lines were right. It is what happens when the next company with ethical restrictions builds something even more indispensable -- and the next administration decides those restrictions are intolerable.
Judge Lin's injunction holds for now. But the legal playbook is written. The next time an AI company draws a contractual red line -- no autonomous targeting, no mass surveillance, no use in a specific theater -- the Pentagon will not negotiate. It will designate.

The supply chain risk label is a weapon that existed before Anthropic and will exist after. The FASCSA statute gives the government unilateral authority to ban any company from federal contracts based on a determination that its products pose a national security risk. The determination requires no judicial review, no adversarial hearing, no public evidence. The government lost this round because it skipped its own procedural steps and left a paper trail of political animus. A competent administration will not make those mistakes twice. It will follow the statute's notice-and-respond framework, produce a classified risk assessment no court can review, and issue the designation with clean hands. The company will have thirty days to respond to allegations it cannot see, based on evidence it cannot examine, reviewed by officials who have already decided.

And the next company will know this. That is the actual precedent. Not the injunction -- the demonstration. Every AI company watched Anthropic lose $200 million in contracts, get labeled alongside Huawei, and spend months in federal court to win a preliminary ruling that may not survive appeal. The rational calculation for the next company with ethical objections is silence. Accept the terms. Defer to existing law. Let the Pentagon define what "lawful" means. OpenAI already made that calculation. The market rewarded it.

The structural result is a military AI ecosystem where the only companies left standing are the ones that never said no -- and the institutional knowledge of how to say no, how to negotiate binding restrictions with the world's largest customer, dies with the precedent that punished it. What happens when the next war requires an AI capability that no remaining contractor has the standing or the will to refuse?
Sources
- Anthropic v. Hegseth — Preliminary Injunction (N.D. Cal.)
- Axios — Anthropic-Palantir Maduro Exchange
- Semafor — Claude in Operation Epic Fury
- Defense One — Pentagon Replacement Timeline
- Military Times — Recertification Estimates
- TechCrunch — Emil Michael 'Very Close' Email
- EFF — OpenAI Military Deal Analysis