The Trump administration on Friday ordered all U.S. agencies to stop using Anthropic’s artificial intelligence technology and imposed other major penalties, the culmination of an unusually public clash between the government and the company over AI safety.
U.S. President Donald Trump, U.S. Defence Secretary Pete Hegseth and other officials took to social media to chastise Anthropic for failing to allow the military unrestricted use of its AI technology by a Friday deadline. They accused the company of endangering national security after CEO Dario Amodei refused to back down over concerns that its products could be used in ways that would violate its safeguards.
“We don’t need it, we don’t want it, and will not do business with them again!” Trump said on social media.
Hegseth also deemed the company a “supply chain risk,” a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.
The company said that “designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company.”
Anthropic said the “designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.”
Company says it sought narrow assurances
Anthropic had said it sought narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it was not interested in such uses and would only deploy the technology in legal ways, but it also insisted on access without any limitations.
The government’s effort to assert dominance over the internal decision-making of the company comes amid a wider clash over AI’s role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.
Trump said Anthropic made a mistake trying to strong-arm the Pentagon. He wrote on Truth Social that most agencies must immediately stop using Anthropic’s AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.
“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” he wrote in all caps.
After months of private talks exploded into public debate this week, Anthropic said Thursday that the government’s new contract language would allow “safeguards to be disregarded at will.” Amodei said his company “cannot in good conscience accede” to the demands.
Anthropic can afford to lose the contract. But the government’s actions posed broader risks to the company at the peak of its meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.
Crescendo of criticism
The president’s decision was preceded by hours of top Trump appointees from the Pentagon and the State Department taking to social media to criticize Anthropic, but their complaints contained contradictions.
Sean Parnell, the Pentagon’s top spokesman, said on social media Thursday that Anthropic’s unwillingness to go along with the military’s demands was “jeopardizing critical military operations and potentially putting our warfighters at risk.”
Hegseth said Friday that the Pentagon “must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defence of the Republic.”
Trump’s social media post also warned that the company had “better get their act together, and be helpful” during a six-month phase-out period or there would be “major civil and criminal consequences to follow.”
However, Hegseth’s choice to designate Anthropic a supply chain risk uses an administrative tool designed for companies owned by U.S. adversaries, intended to prevent them from selling products that are harmful to American interests.
Senator has ‘serious concerns’
Virginia Sen. Mark Warner, the top Democrat on the Senate Intelligence Committee, said that contradiction, “combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”
Anthropic didn’t immediately reply to a request for comment on Trump’s remarks.
The dispute stunned AI developers in Silicon Valley, where venture capitalists, prominent AI scientists and a large number of workers from Anthropic’s top rivals — OpenAI and Google — voiced support for Amodei’s stand in open letters and other forums.
The move is likely to benefit Elon Musk’s competing chatbot, Grok, which the Pentagon plans to give access to classified military networks, and could serve as a warning to two other competitors, Google and OpenAI, that also have still-evolving contracts to supply their AI tools to the military.
Musk sided with Trump’s Republican administration, saying on his social media platform X that “Anthropic hates Western Civilization.”
But one of Amodei’s fiercest rivals, OpenAI CEO Sam Altman, sided with Anthropic and questioned the Pentagon’s “threatening” move in a CNBC interview and a letter to employees that said OpenAI shared the same red lines.
Amodei worked at OpenAI before he and other leaders there quit to form Anthropic in 2021.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” Altman told CNBC, hours before he gathered employees for an all-hands meeting Friday.
Retired Air Force Gen. Jack Shanahan, a former leader of the Pentagon’s AI initiatives, wrote on social media this week that “painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.”
Shanahan said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines were “reasonable.” He said the AI large language models that power chatbots like Claude, Grok and ChatGPT are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.