California Governor Gavin Newsom signed major AI legislation into law this week, establishing some of the country’s strongest safety regulations yet in Silicon Valley’s home state.
The law will require developers of the most advanced AI models to publish more details about the safety steps taken during development and create stronger protections for whistleblowers at AI companies, among other provisions. It’s the first US law to specifically target the catastrophic risks of cutting-edge frontier AI models.
It comes after Newsom vetoed a more muscular attempt at AI regulation a year ago from the same author, state Senator Scott Wiener. This latest iteration incorporated some of the less contentious elements from that first try, as determined by a working group report Newsom requested when he vetoed the 2024 bill.
AI safety advocates say the law is a good start that could inspire other states to act, but doesn’t necessarily go far enough. Detractors say its scope is too focused on large companies and that the required disclosures risk divulging proprietary information. OpenAI, Alphabet, and Meta had lobbied against the bill, claiming they would rather have federal regulation, while Anthropic supported it. The law also comes after the House of Representatives attempted to impose a years-long ban on state AI regulation as part of President Trump’s budget bill; the Senate removed the ban from the bill before it passed.
“We now actually have a bill passed by a state in the US explicitly dealing with regulation of frontier AI,” Gideon Futerman, a special projects associate at the Center for AI Safety (CAIS), told us. “It’s really vital we now have these on the books, especially since there was this attempt to preempt and stop laws previously.”
What it means: For all the fuss over the law, Jessica Ji, a senior research analyst at Georgetown’s Center for Security and Emerging Technology, said she doesn’t expect the transparency requirements to lead to much change in how AI companies operate. Many of the biggest labs to which this law applies are already releasing the necessary safety information for their models.
“In terms of transparency, I don’t see it as a huge sea change,” Ji said. “Just having a requirement is truly better than nothing, but I think the bar is still quite low for information that could be useful for these companies to disclose.”
And because the whistleblower provision is tied to a specific definition of catastrophic risk—potential for more than 50 serious injuries or deaths or more than $1 billion in property damage—it is somewhat narrow, as well, Ji said.
While AI companies do currently publish safety information, Futerman said those disclosures can be somewhat inconsistent. And without legal requirements, companies might back away from the practice in the future.
Thomas Woodside, a co-founder and senior policy advisor at the Secure AI Project, which sponsored the bill, said he hopes the law establishes a jumping-off point for other state laws and further legislation in California.
“This is really a first step,” Woodside said. “As we get more evidence about what the most likely kinds of risks are and as some of this transparency helps us get that evidence, policymakers will continue to have an interest in updating these policies over time and thinking about new provisions that can be incorporated.”
Beyond California: The California law’s specific focus on catastrophic risk and the most advanced frontier models is echoed in a New York bill, the RAISE Act, that’s currently awaiting Governor Kathy Hochul’s signature.
Billy Richling, communications director for RAISE Act author and state Senator Andrew Gounardes, said the venture capital industry is spending heavily to lobby against it, but the bill has continued to gain support from AI safety advocates.
As the AI industry has kicked nationwide lobbying operations into gear, state lawmakers and policy experts are learning what these companies’ anti-regulation playbook looks like and what types of measures they are most likely to fight, Ji said. The fight over last year’s bill in California, SB 1047, was illustrative in that respect, she said.
“That gave people an idea of how much the companies would push against certain types of requirements, like where the lobbying energy was and what kind of coalitions will be built in opposition,” she said.
Futerman said the risks that advanced AI poses will ultimately require more ambitious legislation down the road.
“As CAIS as an organization, we would like to see much more rigorous regulation, both in California, and hopefully at the federal level,” he said. “It is a concrete and important step forward, but we need work that is much more rigorous across the board, including things like requirements for external audits, including prohibition on certain types of risky deployments.”