California AI safety bill heads to Newsom after lawmakers' approval
California lawmakers passed a revised AI safety bill requiring firms to disclose and certify testing methods, leaving Governor Gavin Newsom to decide whether it becomes law after his 2024 veto of a stricter version.
FILE - People walk around the California State Capitol, Aug. 5, 2024, in Sacramento, Calif. (AP Photo/Juliana Yamada, File)
Lawmakers in California have approved a proposal that would compel artificial intelligence developers to demonstrate that their systems meet safety standards, according to a report by Politico. The measure now heads to Governor Gavin Newsom, who will decide whether it becomes law.
The initiative comes a year after Newsom rejected an earlier attempt at regulating AI, SB 1047, arguing at the time that the proposal was too rigid and risked stifling innovation. This time, lawmakers leaned heavily on The California Report on Frontier AI Policy, a June 2025 expert report commissioned by Newsom, which recommended a "trust but verify" approach informed by empirical research, modeling, and risk assessment rather than blanket regulatory mandates.
If enacted, the legislation would obligate AI firms to disclose the methods used to evaluate their models' safety and submit those methods for certification. Key provisions of the bill, SB 53, which mirrors many of the expert report's recommendations, include incident reporting within strict time frames for "catastrophic risk" events, transparency requirements scaled to company size, revenue, and computational resources, and protections for whistleblowers.
Proponents argue that such requirements are necessary to ensure accountability in an industry whose tools are rapidly entering daily life. The bill has also drawn support from AI firms, notably Anthropic, along with some engagement from OpenAI, blunting the industry opposition that rallied against last year's stricter bill.
"Now California Governor Gavin Newsom is to make the final decision on signing the bill into law," Politico reported, noting that the state could set a precedent for how AI oversight develops across the United States.
Read more: Anthropic reaches $1.5 bln settlement in AI book piracy lawsuit