The California Assembly is set to vote this week on a bill aimed at curbing the risk of artificial intelligence being used for malicious purposes, such as cyberattacks or the development of biological weapons.
California Senate Bill 1047, authored by state Senator Scott Wiener, would be the first bill in the nation to require safety testing for AI companies building large-scale models.
The California Legislature is considering dozens of AI-related bills this session, but Wiener’s proposal, the Safe and Secure Innovation for Cutting-Edge Artificial Intelligence Models Act, has attracted national attention due to vocal opposition from Silicon Valley, the home of AI development in the U.S. Opponents argue that imposing burdensome technical requirements and potential fines in California would effectively stifle the country’s innovation and international competitiveness.
OpenAI is the latest AI developer to voice its opposition, arguing in a letter on Wednesday that AI regulation should be left to the federal government and that companies would leave California if the bill passes.
The vote comes after Wiener recently revised the bill in response to criticism from the tech industry, though he said the new language doesn’t address all of the issues the industry has raised.
“This is a sensible, lightweight bill that will in no way stifle innovation and will help us get ahead of the risks that come with powerful technology,” Wiener told reporters at a press conference on Monday.
What effect will this bill have?
The bill would require companies building large AI models, those costing more than $100 million to train, to mitigate any significant risks that safety testing uncovers in their systems, including by building a “full shutdown” capability: a way to halt a potentially dangerous model in hazardous conditions.
Developers would also be required to create a technical plan to address safety risks and to retain a copy of that plan for as long as the model is available, plus an additional five years. Companies with large AI operations, such as Google, Meta, and OpenAI, have already made voluntary commitments to the Biden administration to manage AI risks, but California’s bill would impose legal obligations on these companies and give the state enforcement powers.
Each year, a third-party auditor would evaluate the company’s compliance with the law. Companies would also have to document their compliance and report any safety incidents to the California Attorney General. The Attorney General’s office could assess civil penalties of up to $50,000 for a first violation and an additional $100,000 for subsequent violations.
What are the criticisms?
Much of the tech industry has criticized the proposed bill as too burdensome. Anthropic, a high-profile AI company that touts its focus on safety, argued that earlier versions of the bill would have created complex legal obligations that stifle AI innovation, including a right for California’s attorney general to sue for negligence even if no safety disaster occurs.
OpenAI suggested that companies would leave California to avoid the bill’s requirements if it passes, and argued that AI regulation should be left to Congress to prevent a confusing patchwork of legislation across states.
Wiener countered that the idea that companies would flee California is a “stale argument,” noting that the bill’s provisions would apply to companies that provide services to California residents even if they are not headquartered in California.
Last week, eight members of the U.S. Congress urged Gov. Gavin Newsom to veto SB 1047 because of the obligations it would impose on companies that make and use AI. Rep. Nancy Pelosi joined her colleagues in opposing the bill, calling it “well-intentioned but ill-informed.” (Wiener is seen as a contender for Pelosi’s House seat, which could eventually pit him against her daughter, Christine Pelosi, according to Politico.)
Pelosi and the other lawmakers are siding with the “godmother of AI,” Dr. Fei-Fei Li, a Stanford University computer scientist and former Google researcher, who said in a recent op-ed that the bill would “harm our growing AI ecosystem,” especially smaller developers who are “already at a disadvantage against today’s tech giants.”
What are advocates saying?
The bill has garnered support from various AI startups, Notion co-founder Simon Last, and AI “godfathers” Yoshua Bengio and Geoffrey Hinton, who said the bill would be a “positive, reasonable step” to make AI safer and encourage innovation.
Supporters of the bill fear that, without adequate safeguards, unchecked AI could have serious, even existential, consequences, including heightened risks to critical infrastructure and the creation of nuclear weapons.
Wiener defended the bill as “common sense and lightweight,” noting that only the largest AI companies would be required to implement safeguards, and pointed to California’s record of leadership on U.S. tech policy, saying he doubted Congress would pass a substantive AI bill anytime soon.
“California has repeatedly stepped in to protect its residents and fill gaps left by Congressional inaction,” Wiener responded, pointing to the lack of federal action on data privacy and social media regulation.
What’s next?
In his latest statement on the bill, Wiener said the newest amendments take into account many of the concerns the AI industry has expressed. The current version makes lying to the government punishable by civil rather than criminal penalties, as the original bill had provided, and removes the proposal for a new state regulator to oversee AI models.
In a letter to Newsom, Anthropic said the benefits of the revised bill likely outweigh the potential harm to the AI industry, the main benefits being public transparency about AI safety and incentives for companies to invest in reducing risk. But Anthropic remains concerned about the possibility of overly broad enforcement and expanding reporting requirements.
“We think it’s important to have some kind of framework governing cutting-edge AI systems that largely meets these requirements,” Anthropic CEO Dario Amodei told the governor, adding that this holds whether or not that framework is SB 1047.
The California Legislature must pass the bill by the end of the session on August 31. If it passes, it would go to Newsom, who would have until the end of September to sign or veto it. The governor has not said whether he plans to sign the bill.