The Trump administration has publicly distanced itself from a major private-sector initiative aimed at evaluating artificial intelligence tools in healthcare. Officials from the Department of Health and Human Services (HHS) expressed concerns that the Coalition for Health AI (CHAI) could limit competition and favor established companies over startups in the rapidly growing medical AI market.
The coalition, which includes prominent technology firms and health systems, was formed to address the lack of formal regulation for AI in medicine. However, top administration officials have stated that the group does not represent the government's position and have warned against what they describe as a potential "cartel."
Key Takeaways
- Top HHS officials under the Trump administration have stated they do not support the Coalition for Health AI (CHAI).
- Deputy HHS Secretary Jim O’Neill raised concerns that CHAI could become a "cartel" that stifles innovation from smaller companies.
- CHAI's CEO, Brian Anderson, maintains that the coalition's programs are voluntary and intended to be a resource.
- The conflict highlights the ongoing debate over how to regulate AI in healthcare, a field with few formal government rules.
Administration Voices Strong Opposition
Senior officials at the Department of Health and Human Services have made their position clear regarding the Coalition for Health AI. Deputy HHS Secretary Jim O’Neill directly challenged the group's role in shaping the future of medical AI governance.
"They don’t speak for us," O’Neill stated in comments to POLITICO, signaling a significant break from the private-sector-led approach.
The administration's primary concern is that CHAI could create a closed system where large, well-funded members dictate the standards for the entire industry. O'Neill warned this could disadvantage smaller startups attempting to enter the market. He described the potential for the coalition to become a "cartel," effectively controlling access and innovation.
"He said he’s heard from industry 'that if you want to work in the space, you have to be a member, and we just want to make clear that that is not the case,'" the original report noted. This sentiment underscores the administration's preference for a more open and competitive environment for AI development.
Concerns Over Regulatory Capture
The administration's worries are shared by some members of Congress. Republicans, including House Energy and Commerce Chair Brett Guthrie (R-Ky.) and Rep. Jay Obernolte (R-Calif.), have previously suggested that CHAI could facilitate regulatory capture. This occurs when an industry gains significant influence over the regulatory bodies that are supposed to oversee it.
The core of this argument is that well-established companies within CHAI, such as Microsoft, OpenAI, the Mayo Clinic, and Duke Health, could set standards that benefit their own products while creating barriers for new competitors.
Background on CHAI's Formation
CHAI was established to create a framework for testing and validating AI tools in healthcare, filling a gap left by limited formal government regulation. The Biden administration had previously shown support for the initiative, with two officials serving as non-voting members on its board before resigning last year due to potential conflicts of interest. This past support has led some current officials to view CHAI as a creation of the previous administration.
CHAI Defends Its Role and Mission
In response to the criticism, CHAI's leadership has emphasized the voluntary nature of its work. CEO Brian Anderson clarified that the organization is not attempting to force any company or health system to participate in its programs.
"Everything that we do in CHAI is voluntary and not required," Anderson stated. He positioned the coalition as a collaborative resource available to both industry and government as they navigate the complexities of AI.
Anderson also noted that CHAI is actively working to engage with Republican lawmakers to address their concerns. To facilitate this, the organization hired Susan Zook, a former healthcare aide to a senior Republican senator, as a lobbyist. "We hired Susan because we wanted to build bridges into that community," Anderson explained.
CHAI Membership and Scope
According to CEO Brian Anderson, the Coalition for Health AI has grown to include approximately 3,000 members. The organization states that its membership includes not only large corporations but also startups and smaller healthcare providers, aiming for broad industry representation.
Despite its large membership, CHAI's concrete outputs are still at an early stage. The group has published guidance on the responsible use of AI but has so far certified only two assurance labs to vet a limited number of AI tools. Recently, Microsoft's Chief Scientific Officer, Eric Horvitz, stepped down from CHAI's board, a departure Anderson described as the conclusion of a time-limited position.
The Regulatory Vacuum for Medical AI
The debate over CHAI's role takes place against a backdrop of significant regulatory uncertainty. Artificial intelligence is being rapidly adopted in hospitals for tasks ranging from improving diagnoses to recommending treatments, yet there is little formal oversight.
The Food and Drug Administration (FDA) has authority over some of these tools, but its framework was not designed for software that can learn and change over time. Former FDA Commissioner Robert Califf has previously stated that the agency may lack the necessary authority and would require a much larger staff to properly monitor these evolving AI products.
This regulatory gap is what prompted the idea of private-sector-led assurance labs, a concept vocally supported by officials in the Biden administration. The goal was to have an independent, industry-driven network to test AI tools and provide guidance to doctors and hospitals on their safety and effectiveness.
The Path Forward for AI Oversight
With the Trump administration signaling its disapproval of CHAI, the future of medical AI oversight remains uncertain. Without a consensus on a private-sector solution, hospitals and physicians will likely continue to evaluate and adopt new AI technologies on their own.
The administration has indicated its preferred approach focuses on government-led transparency. O'Neill expressed a desire for HHS to share its data on AI tools, enabling healthcare providers to make better-informed purchasing decisions. He also noted that federal health agencies are exploring what new data they might need to collect to regulate AI effectively.
In a step toward this goal, the FDA recently published a request for information. The agency is seeking feedback from a wide range of stakeholders—including patients, doctors, and tech companies—on how the performance of medical AI should be measured and evaluated. This suggests the administration is moving toward developing its own framework, separate from private-sector efforts like CHAI.
President Trump has consistently advocated for a laissez-faire approach to regulation to ensure the U.S. remains a global leader in AI. This philosophy aligns with the administration's pushback against what it perceives as a restrictive, quasi-regulatory body in CHAI.