Significant advances in artificial intelligence have industry leaders warning of the potential for serious risks, including tampering with weapons systems and large-scale cyberattacks.
This week, state lawmakers in California, home to many of the biggest AI companies, proposed a landmark bill that would impose regulations to address these risks.
The bill would require testing of a wide range of AI products before they are made available to users. It would also require that all major AI models be equipped with a way to shut down the technology if something goes wrong.
“When we talk about safety risks associated with extreme hazards, it’s far better to put safeguards in place before those risks occur, rather than playing catch-up,” the bill’s sponsor, state Sen. Scott Wiener, told ABC News. “Let’s move this forward.”
Here’s what you need to know about the bill and how it could affect AI regulation nationwide.
What will this bill do to police AI risks?
The bill would increase the scrutiny large-scale AI models face before they become widely available and ensure state regulators test products before release.
In addition to requiring emergency off switches, the bill would also implement hacking protections to make AI less vulnerable to bad actors.
To strengthen enforcement, the measure would create the Frontier Model Division within the California Department of Technology as a regulatory enforcement vehicle.
Because the law focuses on extreme risk, it does not apply to small-scale AI products, Wiener said.
“Our goal is to foster innovation with safety in mind,” Wiener added.
Additionally, the bill would foster AI development by creating CalCompute, a public initiative that facilitates the sharing of computing power among companies, researchers, and community groups.
The initiative will help lower the technology barrier for small businesses and organizations that may lack the vast computing power enjoyed by large corporations, Teri Olle, director of the nonprofit Economic Security California, told ABC News.
“Expanding that access will enable research, innovation, and AI development that is in the public interest,” said Olle, whose organization helped develop this feature of the bill.
Sarah Myers West, managing director of the AI Now Institute, a nonprofit organization that supports AI regulation, praised the measure’s precautionary approach.
“It’s great that there’s an emphasis on addressing and mitigating harm before it hits the market,” Myers West told ABC News.
However, she added that many of the risks AI already poses remain unaddressed, such as bias in algorithms used to set employee pay or to grant access to healthcare.
“There are so many places where AI is already being used to influence people,” Myers West said.
Wiener said the California Legislature is taking up other bills to address some of the ongoing harm being caused by AI. “We’re not going to solve all problems with one bill,” Wiener added.
How might this bill affect AI laws across the country?
California’s extreme AI risk bill comes amid a proliferation of AI-related bills in statehouses across the country.
As of September, state legislatures had introduced 191 AI-related bills in 2023, a 440% increase from the same period last year, according to the BSA Software Alliance, an industry group.
But the proposed bill in California carries special weight because many of the biggest AI companies are based in the state, said Economic Security California’s Ole.
“California regulations set the standard,” Olle said. “Complying with these standards in California will have an impact on the market.”
Myers West said that despite recent policy discussions and hearings, Congress has made little progress on comprehensive measures to address AI risks.
“Congress appears to be at an impasse,” Myers West added. “So the state has a very important role to play.”
Dylan Hoffman, executive director for California and the Southwest at the industry lobbying group TechNet, emphasized the importance of U.S.-based AI regulation in shaping global rules for the technology.
“The United States must set the standard for the responsible development and deployment of AI for the world,” Hoffman told ABC News in a statement. “We look forward to reviewing the bill and working with Senator Wiener to ensure that all AI policies benefit all Californians, address all risks, and strengthen our global competitiveness.”
In crafting the bill, Wiener said he considered parts of the executive order on AI that President Joe Biden issued in October, including the thresholds used to determine whether an AI model reaches a large enough scale to warrant regulation. He said he borrowed some concepts from the order.
Still, Wiener said he remains skeptical that Congress will enact federal legislation mirroring the approach taken by California’s bill.
“I hope Congress passes strong AI legislation that fosters innovation and promotes safety,” Wiener added. “I’m not highly confident that Congress will take any action in the near future. I hope Congress proves me wrong.”