Anthropic CEO Dario Amodei has issued a public statement to address what he describes as "inaccurate claims" regarding the company's policy positions, its extensive work with the U.S. government, and accusations of political bias in its AI models. The statement emphasizes the company's commitment to ensuring American leadership in AI while navigating the complex landscape of regulation and national security.
The move comes as Anthropic, a major player in the artificial intelligence sector, reports exponential growth, with its revenue run rate soaring from $1 billion to $7 billion in the last nine months alone. Amodei's clarification aims to set the record straight on the company's bipartisan engagement and its strategic approach to AI development and safety.
Key Takeaways
- Anthropic is actively working with the U.S. government, including a $200 million contract with the Department of War.
- CEO Dario Amodei supports a uniform federal standard for AI regulation over a patchwork of state laws.
- The company refutes claims of unique political bias, citing independent studies that find its models less biased than some competitors.
- Anthropic is the only major AI lab that restricts the sale of its services to companies controlled by the People's Republic of China.
Bipartisan Engagement and Government Contracts
In his statement, Dario Amodei detailed Anthropic's deep involvement with the U.S. federal government, spanning multiple departments and initiatives. This collaboration underscores the company's role in advancing national interests through artificial intelligence.
Chief among the partnerships highlighted is a two-year, $200 million agreement with the Department of War, awarded in July. The contract focuses on prototyping advanced AI capabilities intended to enhance national security, positioning Anthropic as a key technology partner for the nation's defense apparatus.
Rapid Growth and Government Integration
Anthropic's revenue run rate has grown from $1 billion to $7 billion in the past nine months, making it one of the fastest-growing software companies in history. During this period, it has secured major government partnerships, including making its Claude models available for just $1 across the federal government via the General Services Administration (GSA).
Amodei also noted that the company's Claude AI models are deployed across classified networks through partners like Palantir and at research facilities such as Lawrence Livermore National Laboratory. He stressed that the company's engagement is bipartisan, noting that Anthropic has hired policy experts from both Republican and Democratic administrations and includes former senior Trump administration officials on its advisory council.
"I fully believe that Anthropic, the administration, and leaders across the political spectrum want the same thing: to ensure that powerful AI technology benefits the American people and that America advances and secures its lead in AI development," Amodei stated.
He mentioned a positive conversation with former President Trump on U.S. leadership in AI at an energy summit and the company's support for the Trump administration's AI Action Plan. This outreach is part of a broader strategy to treat AI policy as a matter of national importance, separate from partisan politics.
Navigating the AI Regulation Debate
A central theme of Amodei's message was the urgent need for clear and consistent AI regulation. He reiterated Anthropic's long-held position that a uniform federal approach is preferable to a fragmented system of state-level laws.
However, acknowledging the speed of AI development, Amodei explained the company's decision to support a specific California bill, SB 53. He described the bill as "carefully designed" to apply only to the largest AI developers.
The California Compromise: SB 53
Anthropic's support for California's SB 53 is based on a key provision it helped propose: an exemption for any company with an annual gross revenue below $500 million. This measure ensures that the regulation, which requires public disclosure of safety protocols for frontier models, targets major players like Anthropic without burdening smaller startups. Amodei stated this was to protect the startup ecosystem, which includes tens of thousands of Anthropic's customers.
Amodei also addressed Anthropic's opposition to a proposed 10-year moratorium on state-level AI laws, which was part of a larger federal bill. He clarified that the company's objection was to a provision that would block any state action without providing a federal alternative. That specific amendment was overwhelmingly defeated in the Senate by a 99-1 vote, a result supported by both parties.
Confronting Claims of Political Bias
Amodei directly confronted allegations that Anthropic's AI models exhibit a unique political bias. He argued that such claims are "unfounded and directly contradicted by the data."
To support his position, he referenced two independent studies:
- A January study from the Manhattan Institute, a conservative think tank, which found Anthropic's then-current model to be less politically biased than models from most other major providers.
- A May study from Stanford University on user perceptions of bias, which showed that models from several other companies were rated as more biased than Anthropic's.
He acknowledged that no AI model from any company is perfectly neutral. "Models learn from their training data in ways that are not yet well-understood, and developers are never fully in control of their outputs," Amodei explained. He cautioned against cherry-picking specific AI responses to paint a picture of pervasive bias, noting that the company is making rapid progress toward its goal of political neutrality.
A Firm Stance on China and National Security
Perhaps one of the most forceful points in the statement was on the topic of international competition, particularly with China. Amodei argued that the primary threat to American AI leadership is not domestic regulation but the risk of empowering strategic rivals.
"The real risk to American AI leadership isn't a single state law that only applies to the largest companies—it's filling the PRC's data centers with US chips they can't make themselves," he asserted.
He revealed that Anthropic is the only frontier AI company to explicitly restrict the sale of its AI services to companies controlled by the People's Republic of China (PRC). This decision, he said, means forgoing significant short-term revenue to prevent the company's technology from being used to benefit the Chinese Communist Party's military and intelligence services.
This policy aligns Anthropic with lawmakers like Senators Tom Cotton and Josh Hawley, who have warned against supplying China with advanced technology. By taking this stance, Anthropic positions itself as a key partner in maintaining America's technological edge in the global AI race.
Amodei concluded by quoting Vice President JD Vance's view on AI: "The answer is probably both [good and bad], and we should be trying to maximize as much of the good and minimize as much of the bad." He said this view perfectly captures Anthropic's mission and that the company is ready to work with anyone to achieve that goal.