The Pentagon has issued an ultimatum to artificial intelligence company Anthropic, demanding it remove certain safety guardrails from its AI model, Claude. The company has been given until Friday to comply or risk the termination of a $200 million contract and face potential government sanctions, including being declared a supply chain risk.
The dispute centers on the U.S. military's desire for unrestricted use of the AI for all lawful purposes. Anthropic, however, is holding firm on its ethical policies, refusing to allow its technology to be used for autonomous weapons systems or mass domestic surveillance of American citizens. The standoff sets the stage for a high-stakes confrontation between a leading AI developer and the Department of Defense.
Key Takeaways
- The Pentagon has demanded Anthropic remove safety restrictions from its AI model, Claude, for military use.
- Anthropic is refusing to lift safeguards related to autonomous weapons and mass domestic surveillance.
- A Friday deadline has been set, after which the Pentagon may cancel a $200 million contract.
- Defense Secretary Pete Hegseth has threatened to invoke the Defense Production Act and label Anthropic a "supply chain risk."
An Ultimatum Over AI Ethics
A meeting on Tuesday between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth brought a months-long negotiation to a critical point. While sources described the conversation as respectful, the outcome was an impasse over fundamental principles of AI deployment.
The Department of Defense wants to use Anthropic's advanced AI for what it terms “all lawful use,” a broad mandate the company is not prepared to accept unconditionally. At the heart of Anthropic's resistance are two applications it considers ethical red lines.
First, the company has expressed serious concerns about using its AI to control autonomous weapons; sources familiar with Anthropic's position say the company believes current AI technology is not reliable enough for life-or-death decisions. Second, Anthropic is unwilling to remove safeguards that prevent its AI from being used in mass surveillance operations targeting U.S. citizens, citing the lack of a legal and regulatory framework for such activities.
Who is Anthropic?
Anthropic was founded by former employees of OpenAI who reportedly left due to disagreements over the company's approach to AI safety. It has consistently positioned itself as a developer focused on the ethical and responsible creation of artificial intelligence. The company recently committed $20 million to a political group advocating for greater AI regulation.
Pentagon's Hardline Stance
The Department of Defense has responded to Anthropic's position with a series of significant threats. A Pentagon official confirmed the company has until 5:01 p.m. on Friday to agree to the military's terms.
If the deadline passes without an agreement, the existing $200 million contract is expected to be terminated. More significantly, Secretary Hegseth has indicated he is prepared to take further action.
“The Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not,” a Pentagon official stated, outlining one potential consequence.
The official also confirmed that Hegseth is prepared to label Anthropic a supply chain risk. That designation is typically reserved for companies with ties to foreign adversaries, and it would effectively blacklist Anthropic from the defense industry by prohibiting any company with a military contract from using its technology.
What is the Defense Production Act?
The Defense Production Act (DPA) is a U.S. federal law enacted in 1950 that grants the President broad authority to influence domestic industry in the interest of national defense. It allows the government to compel businesses to accept and prioritize contracts for materials and services deemed necessary for national security.
Conflicting Views and Legal Questions
The Pentagon's dual threat to both compel Anthropic's cooperation and simultaneously label it a security risk has raised questions among legal experts. Katie Sweeten, a former Justice Department liaison to the Department of Defense, expressed confusion over the strategy.
“I would assume we don’t want to utilize the technology that is the supply chain risk, right? So I don’t know how you square that,” Sweeten commented. She suggested the supply chain risk designation may be intended as a punitive measure rather than a legitimate security claim.
A Pentagon official pushed back on Anthropic's characterization of the dispute, stating the issue has “nothing to do with mass surveillance and autonomous weapons being used.” The official added, “Legality is the Pentagon’s responsibility as the end user.”
For its part, Anthropic has maintained a diplomatic tone. In a statement, a spokesperson described the meeting as a "good-faith" conversation and affirmed the company's commitment to national security.
“We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do,” the statement read.
The Broader Implications
This conflict highlights the growing tension between the rapid advancement of AI technology and the slower pace of ethical and legal frameworks to govern its use. Anthropic's stand could set a precedent for how AI companies navigate partnerships with military and intelligence agencies.
The outcome could also reshape the competitive landscape. A Pentagon official noted that other AI firms, including Elon Musk’s xAI, are “on board with being in a classified setting,” suggesting the military has alternative partners ready to step in if Anthropic does not yield.
As the Friday deadline approaches, the technology and defense sectors are watching closely. The resolution of this dispute will have significant implications for the future of AI in national security and the ethical responsibilities of its creators.