The release of OpenAI’s ChatGPT in late 2022 was like the firing of a starter pistol, kicking off a race among big tech companies to develop more powerful generative AI systems. With billions of dollars of venture capital flowing into AI startups, giants such as Microsoft, Google and Meta rushed to deploy new artificial intelligence tools.
At the same time, alarm bells were beginning to sound among those working in and studying AI: The technology was evolving faster than anyone had anticipated, and there were fears that in the rush to corner the market, companies would release products before they were safe.
In spring 2023, more than 1,000 researchers and tech industry leaders signed an open letter calling for a six-month pause on the development of the most advanced AI systems. AI labs are racing to deploy “digital minds” that even their developers cannot understand, predict or reliably control, they warned, and the technology poses “serious risks to society and humanity.” Tech company leaders also called on lawmakers to develop regulations to prevent harm.
In that environment, state Sen. Scott Wiener (D-San Francisco) consulted industry experts and introduced Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill is an important first step toward responsible AI development.
State lawmakers have introduced dozens of bills targeting a range of concerns about AI, including election misinformation and protecting artists’ work. Wiener took a different approach: His bill focuses on preventing catastrophic harm if AI systems are used in the wrong way.
SB 1047 would require developers of the most powerful AI models to have testing procedures and safeguards in place to prevent the technology from being used to shut down the power grid, develop biological weapons, launch large-scale cyberattacks or cause other serious harm. The state attorney general could sue if developers fail to exercise due diligence to prevent catastrophic harm. The bill would also provide protections for whistleblowers within AI companies and create CalCompute, a public cloud computing cluster to help startups, researchers and academics develop AI models.
The bill is supported by leading AI safety advocates, including some of the researchers known as the godfathers of AI. In a letter to Gov. Gavin Newsom, they argued, “This is a remarkably light bill compared to the scale of the risks we face.”
But the bill continues to draw fierce opposition from tech companies, investors and researchers, who argue that it unfairly places the onus on model developers to predict harms that users might cause, and that the threat of liability would discourage developers from sharing their models, stifling innovation in California.
Last week, eight California members of Congress sent a letter to Newsom urging him to veto SB 1047 if it passes the state Legislature. They argued that the bill is premature and “misplaces emphasis on hypothetical risks,” and that lawmakers should instead focus on regulating uses of AI that are causing harm now, such as deepfakes in election ads and revenge porn.
There are many good bills that address immediate and tangible misuses of AI. But the need to anticipate and prevent future harms remains, especially when experts in the field are calling for action. SB 1047 raises familiar questions for the tech industry and lawmakers: When is the right time to regulate an emerging technology? What is the right balance between encouraging innovation and protecting the public, who must live with its consequences? Can the genie be put back in the bottle once the technology has been deployed?
There are risks to staying on the sidelines for too long. Today, lawmakers are still playing catch-up on data privacy and trying to curb harms on social media platforms. This isn’t the first time that leaders of big tech companies have publicly said they would welcome regulation of their products, only to then lobby hard to block specific proposals.
Ideally, the federal government would take the lead on AI regulation, avoiding a patchwork of state policies. But Congress has proven unable or unwilling to regulate big tech companies: For years, efforts to protect data privacy and mitigate online risks to children have stalled, and House Republicans have already said they will not support new AI regulations. In the absence of federal action, California, home to Silicon Valley, has chosen to lead the way before, creating the nation’s first regulations on net neutrality, data privacy and online safety for children. AI is no exception.
By passing SB 1047, California could pressure the federal government to enact standards and regulations that would supersede state rules, and the law could act as an important backstop until that happens.