
California AI Safety Bill Awaits Governor's Decision

A new California bill, SB 53, awaits the governor's signature. It would require major AI companies to publish safety reports on catastrophic risks.

By Olivia Vance

Olivia Vance is a public policy correspondent for Neurozzio, specializing in the intersection of technology, law, and governance. She reports on legislative efforts to regulate emerging technologies like artificial intelligence and their impact on society and political discourse.


A new artificial intelligence safety bill, SB 53, now sits on California Governor Gavin Newsom’s desk, awaiting a signature or veto within the next few weeks. The legislation, introduced by State Senator Scott Wiener, would establish some of the first mandatory safety reporting requirements for major AI companies in the United States.

The bill is Senator Wiener's second attempt at AI safety legislation: his more stringent proposal, SB 1047, was vetoed in 2024 following significant opposition from the tech industry. Unlike its predecessor, SB 53 has gained notable support from some key players in Silicon Valley.

Key Takeaways

  • California's SB 53, if signed by Governor Newsom, would require large AI companies to publish safety reports.
  • The bill focuses on catastrophic risks, such as AI's potential use in creating bioweapons or enabling large-scale cyberattacks.
  • Unlike a previous bill, SB 53 has received support from tech companies like Anthropic and a cautiously positive response from Meta.
  • The legislation also includes whistleblower protections for AI employees and establishes a state-run computing resource called CalCompute.

A New Approach to AI Safety

Senator Scott Wiener's latest legislative effort marks a significant shift in strategy from his 2024 bill, SB 1047. The previous bill aimed to make technology companies legally liable for potential harms caused by their AI systems, a provision that sparked a strong backlash from Silicon Valley.

Tech leaders argued that SB 1047 would hinder innovation and damage America's competitive edge in the rapidly growing AI sector. Governor Newsom ultimately vetoed the bill, citing similar concerns. The veto was celebrated by some in the tech community, who felt the legislation was overly restrictive.

In contrast, SB 53 takes a more targeted approach, focusing on transparency rather than broad liability. This change has resulted in a more favorable reception from the industry.

From Liability to Transparency

The primary difference between the failed SB 1047 and the new SB 53 is the shift in focus. SB 1047 proposed holding companies liable for damages caused by their AI, whereas SB 53 mandates that companies self-report on how they test for and mitigate severe risks.

Industry Support Signals a Change

The reception for SB 53 has been notably different. AI company Anthropic has officially endorsed the bill. A spokesperson for Meta, Jim Cullinan, told TechCrunch that the company supports regulation that finds a balance between safety and innovation, describing SB 53 as a "step in that direction."

Former White House AI policy adviser Dean Ball characterized SB 53 as a "victory for reasonable voices," suggesting a high probability that Governor Newsom will sign it into law. This support suggests a consensus may be forming around a baseline for AI safety reporting.

What the Bill Mandates

If signed into law, SB 53 would impose new obligations on the world's largest AI developers, including companies like OpenAI, Google, Anthropic, and xAI. Currently, these companies face no legal requirement to disclose their safety testing procedures, though many do so voluntarily.

The bill is designed to standardize these disclosures for the most powerful AI models developed by the largest companies.

Who is Affected?

The bill's requirements specifically target leading AI labs with annual revenues exceeding $500 million. The narrow scope is intended to capture the companies with the resources and scale to develop the most advanced AI systems while sparing smaller startups.

Focus on Catastrophic Risks

SB 53 is narrowly tailored to address the most severe potential AI risks. According to Senator Wiener, the legislation was designed to focus specifically on what he terms "catastrophic risk." The primary areas of concern are:

  • The potential for AI to contribute to the creation of chemical or biological weapons.
  • The use of AI to execute cyberattacks on a massive scale.
  • Any application of AI that could lead to a significant number of human deaths.

"We’re focused on one specific category of risk," Senator Wiener explained, noting that the idea came from founders and technologists within the AI industry who were concerned about these high-stakes scenarios.

Broader Implications for AI Regulation

The debate around SB 53 highlights a larger question about who should regulate artificial intelligence: state governments or the federal government. Some in the tech industry, including OpenAI and venture capital firm Andreessen Horowitz, have argued that a patchwork of state laws could create confusion and hinder commerce.

"I’m the guy who represents San Francisco, the beating heart of AI innovation... But we’ve also seen how the large tech companies — some of the wealthiest companies in world history — have been able to stop federal regulation."
- Senator Scott Wiener

Senator Wiener contends that states, particularly California, must take the lead because federal action is unlikely. He expressed little faith in the federal government's willingness to pass meaningful AI safety laws, pointing to the current administration's emphasis on AI growth over safety.

"I’m not here this morning to talk about AI safety... I’m here to talk about AI opportunity," Vice President J.D. Vance stated at a recent conference, reflecting a shift in federal priorities.

Additional Provisions in SB 53

Beyond safety reporting, the bill includes two other significant components aimed at fostering a safer and more competitive AI ecosystem in California.

  1. Whistleblower Protections: The bill creates secure channels for employees at AI companies to report safety concerns to government officials without fear of retaliation.
  2. CalCompute Initiative: It establishes a state-operated cloud computing cluster named CalCompute. The goal is to provide AI research resources to academics, startups, and other organizations, reducing the reliance on computing infrastructure owned by large tech corporations.

The Path Forward

As Governor Newsom considers the bill, its fate could set a precedent for AI regulation across the United States. Senator Wiener has stated that the bill was drafted with the governor's previous feedback in mind.

"My message is that we heard you," Wiener said, referencing the detailed veto message for SB 1047. "You vetoed SB 1047... You wisely convened a working group that produced a very strong report, and we really looked to that report in crafting this bill."

The decision now rests with the governor. If signed, SB 53 will make California the first state to mandate specific safety reporting from the giants of the AI industry, potentially creating a model for other states and even the federal government to follow.