U.S. Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) have introduced new bipartisan legislation aimed at evaluating the risks posed by advanced artificial intelligence systems. The proposed law, titled the Artificial Intelligence Risk Evaluation Act, would establish a formal program to assess potential threats before new AI technologies are deployed to the public.
The bill seeks to address growing concerns about national security, civil liberties, and public safety. It would mandate that developers of sophisticated AI models submit their systems for review, focusing on scenarios such as loss of control or weaponization by foreign adversaries.
Key Takeaways
- Senators Hawley and Blumenthal introduced the Artificial Intelligence Risk Evaluation Act.
- The bill proposes an evaluation program for advanced AI systems, to be housed within the Department of Energy.
- Developers would be required to submit their AI systems for review before public deployment.
- The program aims to collect data on potential adverse AI incidents, including national security threats.
- This legislation continues a bipartisan effort by the senators to establish regulatory guardrails for AI.
Details of the Proposed Legislation
The Artificial Intelligence Risk Evaluation Act represents a significant step by lawmakers to create a structured framework for managing the potential dangers of rapidly advancing AI. The core of the bill is the creation of a dedicated evaluation program designed to proactively identify and analyze risks associated with powerful AI models.
According to a memo outlining the bill, this program would be established within the Department of Energy. Its primary function would be to "collect data on the likelihood of adverse AI incidents." This includes a range of high-stakes scenarios that have become central to the debate on AI safety and regulation.
The legislation is built on a principle of pre-deployment review. Under its terms, companies and other entities developing advanced AI would be legally obligated to submit information about their systems to the new program. This requirement acts as a gatekeeper, blocking the release of new technologies until developers have demonstrated compliance.
Mandatory Pre-Deployment Compliance
A central feature of the bill is its requirement that developers of advanced AI cannot deploy their systems until they have fully complied with the evaluation program's requirements. This marks a shift toward proactive, rather than reactive, regulation.
How the Evaluation Program Would Function
The bill outlines a clear process for conducting the risk evaluation. The program at the Department of Energy would serve as a central hub for assessing the capabilities and potential failure modes of next-generation AI systems.
Responsibilities of the Program
The evaluation body would be tasked with several key responsibilities:
- Data Collection: Systematically gather information on the potential for AI systems to cause harm, whether intentionally or accidentally.
- Risk Analysis: Analyze the probability of specific adverse events, such as an AI system operating beyond human control.
- Security Assessment: Evaluate the vulnerability of AI models to being co-opted for malicious purposes, such as weaponization by hostile nations or terrorist groups.
This structured approach is intended to give the government a clearer understanding of the AI landscape and its evolving risks, moving beyond theoretical debate to data-driven assessment.
Obligations for AI Developers
For technology companies, the legislation would introduce new compliance hurdles. They would need to provide detailed information about their most powerful AI systems. While the exact scope of required information is not yet fully detailed, it is expected to include technical specifications, training data methodologies, and internal safety testing results.
The prohibition on deployment before compliance ensures that the evaluation process has real authority. It prevents a scenario where powerful AI is released into the market without any independent oversight of its potential for large-scale harm.
A Continuing Bipartisan Effort
This bill is not the first collaboration between Senators Blumenthal and Hawley on AI. In July 2023, they introduced legislation to protect content creators from unauthorized use of their work by AI systems. During the last Congress, they also supported a broader legislative framework to establish guardrails for the technology.
Addressing National Security and Societal Risks
The primary motivation behind the Artificial Intelligence Risk Evaluation Act is the concern that AI development is outpacing the ability of governments and society to manage its consequences. The bill specifically targets existential and large-scale risks rather than everyday consumer applications.
The legislation highlights two critical areas of concern:
- Loss-of-Control Scenarios: This refers to an AI system behaving in unintended ways that its human operators cannot easily correct or shut down, which could lead to cascading failures in critical infrastructure or economic systems.
- Weaponization by Adversaries: This involves the risk of hostile actors stealing a powerful AI model and repurposing it for malicious uses, such as developing novel cyberweapons, creating sophisticated propaganda, or designing chemical or biological agents.
In a statement regarding the bill, Senator Hawley emphasized the need for legislative action to protect core American interests.
"Congress must not allow our national security, civil liberties, and labor protections to take a back seat to AI."
The Broader Context of AI Regulation
The introduction of this bill reflects a persistent, bipartisan interest on Capitol Hill in establishing rules for the AI industry. Lawmakers from both parties have expressed concerns that without government oversight, the race for AI supremacy could lead to dangerous and irreversible outcomes.
This legislative push exists alongside a more cautious approach from the White House, which has warned that overly restrictive regulation could stifle American innovation. The administration has expressed concerns that heavy-handed rules might put U.S. technology companies at a disadvantage in the global competition with China, which is also investing heavily in AI development.
The Hawley-Blumenthal bill attempts to find a middle ground by focusing specifically on the most advanced and potentially dangerous systems, rather than applying broad restrictions to all forms of AI research and development. By targeting what they see as the highest-risk technologies, the senators aim to create a safety net for national security without halting progress across the entire industry.
As the bill moves forward for consideration, it will likely fuel the ongoing debate about the proper balance between fostering innovation and ensuring public safety in the age of artificial intelligence. Its focus on mandatory pre-deployment evaluation makes it one of the most assertive AI regulatory proposals in the United States to date.