The White House has ordered all federal agencies to cease using artificial intelligence technology from Anthropic, a leading American AI firm. The directive came Friday after a breakdown in negotiations with the Pentagon over the military's use of the company's powerful AI model, Claude.
In a related move, Defense Secretary Pete Hegseth designated Anthropic a “supply-chain risk to national security,” a label historically applied to foreign adversaries. The decision effectively bars any military contractor from doing business with the company and sets the stage for a significant legal and political battle with wide-ranging implications for national security and the U.S. tech industry.
Key Takeaways
- President Trump ordered all federal agencies to stop using Anthropic's AI technology.
- The Pentagon designated Anthropic a “supply-chain risk to national security,” an unprecedented move for a U.S. company.
- The actions followed a contract dispute where Anthropic refused to grant the military unfettered access to its AI model without certain safeguards.
- Anthropic has announced its intention to challenge the designation in court, calling it legally unsound.
- The decision could disrupt critical intelligence analysis at agencies like the NSA and CIA, which rely on Anthropic's technology.
A Sudden Escalation
The conflict between the U.S. government and the San Francisco-based AI company escalated rapidly on Friday. President Trump announced the directive on his social media platform, Truth Social, describing Anthropic as a “radical Left AI company.”
His post accused the company of attempting to “strong-arm” the Pentagon. “They made a mistake,” he wrote, while also announcing a “Six Month phase out” for the Pentagon and other agencies, potentially leaving a window for further negotiation.
Just minutes after a 5:01 p.m. deadline set by the Pentagon, Defense Secretary Pete Hegseth formalized the administration's stance. He declared Anthropic a national security risk, a move that legal experts say is highly unusual for a domestic firm. Anthropic officials were reportedly taken by surprise by the President's announcement, which occurred while discussions were still underway.
What is a Supply-Chain Risk Designation?
This designation is a powerful tool typically used by the U.S. government to block federal agencies and their contractors from using technology from foreign companies deemed a threat to national security. Applying it to a U.S.-based company is considered an extraordinary step that could have severe financial and reputational consequences for Anthropic.
The Heart of the Dispute
At the center of the disagreement is how the military may use Anthropic's advanced AI model, known as Claude. The Pentagon demanded the ability to use the technology however it sees fit, so long as its use complies with the law. Anthropic, however, insisted on safeguards to prevent its AI from being used for applications it deems unsafe or unethical.
“In a narrow set of cases, we believe A.I. can undermine, rather than defend, democratic values,” Anthropic CEO Dario Amodei said in a statement. He added that “some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”
The company specifically sought to prevent Claude from being used for mass surveillance or fully autonomous weaponry. While the Pentagon offered assurances it had no interest in such uses, Anthropic argued the legal language in the proposed contract did not provide sufficient guarantees.
The dispute quickly became politicized, with Pentagon officials publicly criticizing the company's leadership. In a social media post on Thursday, Emil Michael, a top Pentagon official overseeing AI, accused Amodei of having a “God-complex” and of putting national safety at risk.
Widespread Condemnation and Support
The administration's move drew swift condemnation from Democratic lawmakers and technology policy experts. Senator Mark Warner, the top Democrat on the Intelligence Committee, expressed alarm over the decision.
“The president’s directive to halt the use of a leading American A.I. company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations,” Warner stated.
Silicon Valley Solidarity
In a rare show of unity, nearly 50 employees from rival AI company OpenAI and 175 from Google signed letters supporting Anthropic’s position. The letters criticized the Pentagon's negotiating tactics and urged tech companies to stand together against the government's demands.
Experts warned of a chilling effect on government-business relations. Dean Ball, a former White House AI adviser for the Trump administration, called it a “dark day in the history of American business.” He argued the designation sends a terrible message to investors and represents the “most aggressive government regulation of A.I. ever taken anywhere in the world.”
Impact on National Security
While the political and legal battles unfold, the immediate consequences for U.S. intelligence and defense operations are significant. AI models like Claude are not just theoretical tools; they are actively used for critical tasks.
Intelligence analysts at agencies such as the National Security Agency (NSA) and the Central Intelligence Agency (CIA) rely on Claude to sift through vast amounts of data, such as overseas communications intercepts and intelligence reports. Former officials have noted that the technology has substantially sped up their work and deepened their analytical capabilities.
Key Government Functions Affected:
- Intelligence Analysis: Sifting through intercepted communications and global intelligence reports to identify patterns and threats.
- Data Processing: Managing and analyzing massive datasets far faster than human analysts can.
- Operational Planning: Assisting in the developmental stages of military operations.
Forcing these agencies to remove Claude from their systems could cause immediate disruption. While the Pentagon has indicated it is prepared to switch to Grok, an AI model from Elon Musk’s xAI, government officials widely consider it an inferior product. Migrating to a new AI system would be time-consuming and could create vulnerabilities in intelligence gathering.
Anthropic has vowed to fight the designation. “Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company,” the company said in a statement. “We believe this designation would both be legally unsound and set a dangerous precedent.”