A controversial bill that would regulate how AI models are developed and trained is inching closer to becoming law in California, frustrating many in the field.
Under California Senate Bill 1047, AI companies developing models that cost more than $100 million to train would be required to build robust safety frameworks into their models.
The technology industry, much of which is based in Silicon Valley, is reportedly debating how the bill would affect its operations.
SB 1047 would require AI developers to include kill switches, conduct annual audits for safety compliance, and prohibit the creation, use, or distribution of potentially dangerous models.
Elon Musk, whose Grok AI platform was recently criticized for spreading false information, expressed support for the bill.
“This is a tough call, and it’s going to upset some people, but I think, all things considered, California should probably pass SB 1047, the AI Safety Act,” Musk said in a post to X on Monday.
The billionaire tech entrepreneur also said he has been an advocate of AI regulation for about two decades and has repeatedly called for stricter regulatory oversight.
But there are also strong opponents of the bill, including OpenAI, a company co-founded by Musk.
The San Francisco technology company behind the popular ChatGPT chatbot sent a letter last week to the bill’s author, state Sen. Scott Wiener (D-San Francisco), arguing that the bill would undermine Silicon Valley’s ability to remain a global leader in AI.
Andrew Ng, former head of Google’s deep learning research project Google Brain, also took aim at the bill in June, arguing that it would “hold creators of large-scale AI models liable if someone uses their models.”
“I am deeply concerned about California’s bill, SB-1047,” Ng tweeted at the time. “This is a lengthy and complex bill with many parts that require things like safety assessments and model shutdown capabilities.”
If the bill becomes law, AI developers would have to follow five key rules, including being able to shut down their models quickly and creating a written safety and security plan. Developers would also have to keep an unredacted copy of this safety plan for as long as the model is available, plus five years, and keep records of any updates.
Beginning Jan. 1, 2026, developers would be required to hire an independent auditor annually to review compliance with the law and to keep a full audit report on file for the same period as the safety plan.
Upon request, developers would be required to give the state Attorney General access to their safety plans and audit reports. Additionally, developers would be prohibited from using or releasing their models for commercial or public purposes if doing so poses a significant risk of causing serious harm.
The bill has now cleared a key Assembly committee and is expected to be voted on by the full Assembly later this week. The state Senate already passed it with strong support in May.
If the Legislature approves it, the bill will go to Gov. Gavin Newsom, who will have until Sept. 30 to veto it or sign it into law.
Editor: Sebastian Sinclair