Anthropic said Thursday that the Defense Department has designated it a threat to national security, a striking move that bans it from doing business with the U.S. military and could send shock waves through America’s AI industry.
The designation, which the company said it received Wednesday, specifically labels Anthropic a “supply-chain risk to national security” and requires the Pentagon and its contractors to stop using Anthropic’s AI services for all defense business.
Defense Secretary Pete Hegseth telegraphed the move last Friday evening on X.
It comes after months of increasingly tense negotiations over how the military should be able to use Anthropic’s Claude AI systems. Though it is a relatively new technology, generative AI models like Claude have quickly been embraced by the Trump administration, including for military use.
For the past several months, the Pentagon has been negotiating new contract terms with Anthropic, along with other leading American AI companies, to allow more expansive military use of AI. While the Pentagon has sought to harness powerful AI systems for “any lawful use,” Anthropic CEO Dario Amodei had wanted stronger guarantees the Pentagon would not use its AI technology for deadly autonomous weapons or mass domestic surveillance.
Amodei confirmed the supply chain risk label in a statement Thursday night and said the company did not agree with it, writing “we do not believe this action is legally sound, and we see no choice but to challenge it in court.”
“Anthropic has much more in common with the Department of War than we have differences,” he wrote in the statement. “We both are committed to advancing US national security and defending the American people, and agree on the urgency of applying AI across the government. All our future decisions will flow from that shared premise.”
Other AI companies move in
Until last week, Anthropic was the only AI company whose services were cleared for use on the Defense Department’s classified networks. Hours after Hegseth announced he would seek to label Anthropic as a supply chain risk last week, OpenAI CEO Sam Altman announced his company had reached a new agreement with the Pentagon to use OpenAI’s services in classified settings, which could allow OpenAI to replace much of Anthropic’s current business with the Pentagon.
Elon Musk’s xAI and its Grok AI systems also struck a deal with the Pentagon last week to be cleared for use on classified networks.
In the statement, posted on Anthropic’s website Thursday evening, Amodei emphasized that the ban on Anthropic’s business with the military did not apply to contracts with military suppliers for non-defense-related purposes. Anthropic has extensive business deals with many of America’s leading tech companies, including Amazon and Microsoft, many of which also have large contracts with the Pentagon.
A senior Defense Department official confirmed the supply chain risk determination was effective immediately. “From the very beginning,” the official told NBC News on Thursday, “this has been about one fundamental principle: the military being able to use technology for all lawful purposes. The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and put our warfighters at risk.”
Hegseth wrote in his post announcing the move last Friday that Anthropic would “continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.”
“Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon,” he said in the post. “Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”
Industry concerns over label
In a statement last week during the strained contract negotiations and before Hegseth announced the move, Amodei noted that the supply chain risk label, usually reserved for foreign adversaries and associated businesses, had “never before applied to an American company.”
Several legal observers have said the designation is unlikely to hold up in court and is intended instead to warn other companies to toe the Pentagon’s line.
The mere threat of such a designation already roiled Washington and the tech industry. Fearing fallout from the potential supply chain risk decision, defense experts, Anthropic rival OpenAI and members of Congress had tried to cool tensions between Anthropic and the Pentagon throughout this week.
An influential tech advocacy group, whose members include Nvidia and Apple, sent a letter to Hegseth on Wednesday urging him to refrain from officially applying the supply chain risk label.
Many industry investors fear that by targeting one of America’s largest and most successful AI companies, the Defense Department is setting a dangerous precedent that will scare away investment and chill America’s AI industry.
Last Friday, just over an hour before a 5 p.m. ET deadline Hegseth had set earlier in the week for reaching an agreement, President Donald Trump said he would move to bar Anthropic from other federal agencies.
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” Trump wrote.
The Pentagon already uses Anthropic’s Claude systems as part of an agreement with the data analytics company Palantir. According to recent reports from The Washington Post and The Wall Street Journal, Anthropic’s AI systems have been used to help forces assess intelligence and identify targets in the ongoing war in Iran. NBC News has not confirmed those reports.
Anthropic struck its first deal with Palantir in 2024, allowing the Defense Department to use its services on classified networks, and it was awarded another $200 million contract in July to further “prototype frontier AI capabilities that advance U.S. national security.”
In earlier rounds of negotiation, Anthropic agreed to let the Pentagon use its AI systems for cyber and missile defense purposes.
‘Essentially no gain’
Some experts noted an apparent disconnect between labeling one of America’s largest AI companies a supply chain risk to national security while refraining from applying the same label to DeepSeek, a leading Chinese AI company that has been accused of unfair practices. DeepSeek did not respond at the time to requests from several news organizations for comment on the issue.
“We’re treating an American AI company worse than we’re treating a Chinese Communist Party-controlled AI company,” said Michael Sobolik, an expert on AI and China issues and senior fellow at the Hudson Institute. “We cannot hobble the most innovative, successful American companies for asking quintessentially American questions about military use and privacy.
“The U.S. government risks cutting off the legs of one of our best AI companies in the early years of this AI race,” Sobolik continued. “If we do that, where America’s frontier models are qualitatively and quantitatively better than China’s, it does seem like cutting off our nose to spite our own face.”
Tim Fist, director of emerging technology at the Washington-based Institute for Progress think tank, said the new designation would be counterproductive for America’s AI aspirations.
“The supply chain risk designation, normally used on foreign adversaries, is both hurting one of America’s top AI companies and making other companies much more hesitant to work with the federal government,” Fist said in written comments. “The designation hurts the AI industry and thus US national security for essentially no gain.”