California is a world leader in artificial intelligence. Thirty-five of the top 50 AI companies are headquartered here, and the state accounts for a quarter of the world’s AI patents, conference papers, and companies. But unfounded fears about “unregulated” AI threaten to undermine the state’s technological dynamism.
In fact, AI is already regulated, especially in California. Yet this year alone, state legislators introduced dozens of new AI-focused bills to fill a supposed regulatory void. If lawmakers overdo it, California could lose its lead in AI development.
In 2018, California enacted SB 1001, requiring companies and individuals to disclose when and how they use AI systems such as chatbots. SB 36, enacted in 2019, requires state criminal justice agencies to evaluate AI-powered pretrial tools for potential bias. Last October, California enacted AB 302, mandating a comprehensive inventory of all “high-risk” AI systems “proposed for use, development, or procurement, or currently being used, developed, or procured” by the state.
A range of other state and federal laws also apply to AI. The California Consumer Privacy Act, which governs how companies collect and manage consumer data, secures the privacy rights of Californians: a “right to know” what data companies collect, a “right to correct” inaccurate information, and a right to request that companies delete personal information. These privacy rights extend to AI. For example, AI companies must notify California consumers about the personal information they collect and how that data will be used.
The CCPA also authorizes the California Privacy Protection Agency, a state agency, to enforce privacy regulations and issue new ones. The agency has already begun work on AI. On March 9, it voted 3-2 to move forward with drafting new regulations governing how companies use AI. These would apply to companies with annual revenue of more than $25 million or that process the personal data of more than 100,000 Californians.
The proposed regulations would require companies to notify consumers about AI and allow consumers to opt out of its use. For consumers who do not opt out, companies must explain, upon request, how AI will use their personal information. The draft rules would also expand risk-assessment requirements for AI systems.
California law already regulates a wide range of AI use cases. Much of the rest is covered by federal law. At the Biden administration’s direction, federal agencies are working hard to regulate AI. Last April, officials from the Federal Trade Commission, Department of Justice, Consumer Financial Protection Bureau, and Equal Employment Opportunity Commission issued a joint statement outlining their strategies for applying existing laws and regulations to AI.
The FTC has repeatedly stated that “no statutory AI exemptions exist.” The commission’s authority to police unfair and deceptive trade practices and unfair methods of competition extends to AI, so the agency can protect consumers nationwide, including Californians, from a wide range of AI-related harms.
In December, the FTC banned Rite Aid from using AI facial recognition technology for five years after the retailer deployed a biased surveillance system in its stores in major cities. The FTC is currently studying AI voice-cloning technology and recently proposed new regulations that would ban AI-generated deepfakes of individuals. The rule could hold AI platforms liable if they “know or have reason to know [their AI] is used to harm consumers through impersonation.”
Despite these existing state and federal measures, lawmakers continue to stoke fears of a supposed AI regulatory vacuum. Last December, California Assemblymember Ash Kalra (D-San Jose) vowed to protect the public from “unregulated AI.” And in February, California State Sen. Scott Wiener (D-San Francisco) expressed concern that “California’s government cannot afford to be complacent” regarding AI regulation.
But AI is regulated, and California is not complacent. The conventional wisdom that AI is unregulated is politically expedient for headline-seeking lawmakers, but it is clearly false.
Some civil society groups have another motivation for pushing AI legislation: slowing or stopping the development of AI. Encode Justice, an advocacy group that promotes human-centered AI, co-sponsored SB 1047, a bill introduced by Sen. Wiener that would impose strict precautionary requirements on AI development in California. Last March, the founder and president of Encode Justice signed an open letter calling for “all AI labs to immediately pause for at least 6 months the training of AI systems.”
Layering on new precautionary regulations would act like a speed governor on California’s AI industry, slowing development while raising barriers to entry and increasing compliance costs. For anti-AI ideologues, that is exactly the point. If lawmakers take this precautionary approach, California will cripple its burgeoning AI ecosystem and squander America’s lead in global AI development.
Andy Jung is deputy general counsel at TechFreedom, a nonprofit, nonpartisan think tank focused on technology law and policy.


