Three Silicon Valley lawmakers, Reps. Zoe Lofgren and Ro Khanna, and Speaker Pelosi, also spoke out against the bill. Even San Francisco Mayor London Breed, a longtime ally of Wiener’s, wrote in an open letter that “further work is needed to bring together industry, government, and community stakeholders before we can move forward with a legislative solution that doesn’t create unnecessary bureaucracy.”
Speaking on KQED’s Forum talk show on Thursday, Pelosi again criticized the bill.
“California is the home and birthplace of AI, in our view. California has the knowledge, the technical savvy, the entrepreneurial spirit, and it has a responsibility to do the right thing instead of passing well-intentioned but ignorant legislation that doesn’t work.”
She also disputed speculation that she opposed Wiener’s bill because she fears he could run against her daughter for her House seat after she steps down.
“They don’t know what they’re talking about,” she said. POLITICO was first to report the news. “I don’t want California to get in trouble for something this serious that has nothing to do with the election.”
SB 1047 has also garnered some high-profile supporters, including Elon Musk, who posted on X on Monday: “This is a tough call and it will anger some people, but all things considered, I think California should probably pass SB 1047, the AI safety bill. For over 20 years, I have advocated for regulating AI, just as we would regulate any product or technology that poses potential risks to the public.”
Wiener acknowledged an unlikely ally. “Elon Musk is not a fan of mine, and I’m not a fan of Elon Musk,” Wiener said. “But even with people who disagree with us completely, we can find common ground, and in this area, Elon and I have a lot in common. He’s been an advocate for AI safety for a long time, so this position is very consistent with his long history of doing so.”
Wiener said he revised the bill to reflect advice from leaders in the AI field, including safety groups, academics, startups and developers such as Amazon-backed Anthropic. The bill no longer allows the California attorney general to sue AI companies for failing to implement safety measures before a catastrophic event occurs. It also removed original language that would have created an office within the California Department of Technology to “ensure ongoing oversight and enforcement.”
Wiener, who chairs the Senate Budget Committee, told KQED that these changes were made primarily to increase the bill’s chances of being accepted by lawmakers and Gov. Gavin Newsom, as California faces a huge budget deficit. “My experience with Gov. Newsom is that he reviews bills fairly, he listens to the debate, he speaks to those who are for and against, and he makes an informed choice, and I’m confident he’ll do that here,” he said.
SB 1047 only affects companies building AI systems that cost more than $100 million to train, but critics argue that the mere threat of legal action by state attorneys general will discourage big tech companies from sharing open-source software with smaller companies, stifling innovation.
Last month, Anthropic warned in an open letter to Wiener that it could not support the bill unless it was amended to “minimize rigid, vague, or burdensome rules while respecting the evolving nature of risk reduction activities.” Anthropic is the first major generative AI developer to publicly signal its intention to work with Wiener on SB 1047.
“SB 1047 establishes a clear, predictable, and commonsense legal standard to help developers of the largest and most powerful AI systems efficiently build safety into the entire AI ecosystem that startups build,” wrote Nathan Calvin, senior policy counsel at the Center for AI Safety Action Fund, the lobbying arm of the Center for AI Safety, one of SB 1047’s co-sponsors.
But critics of the bill have sounded other alarms, including Stanford professor and former Google executive Andrew Ng, who detailed his concerns in a post on X that has been viewed by more than 1 million people. “SB 1047 will stifle open source AI and stifle AI innovation,” Ng wrote. “It makes the fundamental error of trying to regulate AI technology instead of addressing harmful applications. Worse, by making it harder for developers to release open AI models, it will hinder researchers’ ability to study cutting-edge AI and find problems, making AI less safe.”
When asked why California lawmakers are pursuing dozens of AI bills focused on individual issues, compared to the European Union and Colorado, which have opted for comprehensive legislation, Wiener said, “California’s system is different from other jurisdictions. We don’t tend to pick a subject and combine 10 different issues. We introduce individual bills. We try very hard to harmonize them.”
He acknowledged the advantages and disadvantages of this approach.
“But doing it this way allows us to have a more systematic approach in terms of addressing specific problems rather than trying to solve everything at once.”