‘Unregulated AI’ is a myth

California is the global leader in artificial intelligence. Thirty-five of the top fifty AI companies are headquartered here, and the state accounts for a quarter of all AI patents, conference papers, and companies globally. Yet, unfounded fears over “unregulated” AI threaten to dampen the state’s techno-dynamism.

In reality, AI is already regulated — especially in California. Yet, just this year, state lawmakers have introduced dozens of new AI-focused bills to fill the imaginary regulatory void. If lawmakers overdo it, California will lose its lead on AI development.

In 2018, California enacted SB 1001, which requires businesses and individuals to disclose when and how they use AI systems like chatbots. SB 36, enacted in 2019, requires state criminal justice agencies to evaluate potential biases in AI-powered pretrial tools. Last October, California enacted AB 302, which mandates a thorough inventory of all “high-risk” AI systems “that have been proposed for use, development, or procurement by, or are being used, developed, or procured” by the state.

A litany of state and federal laws apply to AI as well. The California Consumer Privacy Act, which governs how businesses collect and manage consumer data, secures privacy rights for Californians, such as a “right to know” the data businesses collect, a “right to correct” inaccurate information, and a right to request businesses delete personal information. These privacy rights extend to AI. For example, AI companies must inform California consumers about the personal information they collect and how they use the data.

The CCPA also vests a state agency, the California Privacy Protection Agency, with authority to enforce privacy regulations and implement new ones. The agency is already taking action on AI. On March 9, it voted 3-2 to move forward with drafting new regulations governing how businesses use AI. These would apply to companies with more than $25 million in annual revenue and companies processing the personal data of more than 100,000 Californians.

The proposed regulations would require companies to notify consumers about AI and allow them to opt out of using it. If a consumer opts in, the company must provide explanations, upon request, about how the AI uses personal information. The draft rules would also expand risk assessment requirements for AI systems.

California law already governs a large swath of AI use cases. Federal law covers many of the rest. Urged on by the Biden administration, federal agencies are hard at work regulating AI. Last April, officials from the Federal Trade Commission, Department of Justice, Consumer Financial Protection Bureau, and Equal Employment Opportunity Commission released a joint statement outlining the agencies’ strategies for applying existing laws and regulations to AI.

The FTC has repeatedly stated that “there is no AI exemption from the laws on the books.” The Commission’s authority to police unfair and deceptive trade practices and unfair methods of competition extends to AI, allowing the agency to protect consumers across the country, including Californians, from a wide range of AI-related harms.

In December, the FTC banned Rite Aid from using AI facial recognition technology for five years after the chain deployed biased surveillance systems in stores located in major cities. The FTC is currently studying AI voice cloning technologies and recently proposed a new regulation prohibiting AI-generated deep fakes of individuals. The rule could go so far as to hold AI platforms liable if they “know or have reason to know [their AI] is being used to harm consumers through impersonation.”

Despite these existing state and federal measures, lawmakers continue to stoke fears over a so-called “AI legislation void.” Last December, California Assemblymember Ash Kalra, D-San Jose, vowed to protect the public against “unregulated AI.” And in February, California Senator Scott Wiener, D-San Francisco, fretted that “California’s government cannot afford to be complacent” on AI regulation.

But AI is regulated, and California isn’t complacent. The myth that AI is unregulated is politically convenient for lawmakers jostling for headlines, but it’s demonstrably false.

Certain civil society groups have a separate motivation for pushing AI legislation: to slow AI development. Encode Justice, an advocacy group “advancing human-centered AI,” co-sponsored SB 1047, a bill introduced by Sen. Wiener mandating strict precautionary measures for AI development in California. Last March, the founder and president of Encode Justice signed an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems.”

Layering on new prophylactic regulation would work like a speed governor on California’s AI industry, slowing down development while raising barriers to entry and increasing compliance costs. For anti-AI ideologues, that’s exactly the point. If lawmakers embrace this precautionary approach, California will self-sabotage its burgeoning AI ecosystem, destroying the U.S.’s edge in global AI development.

Andy Jung is associate counsel at TechFreedom, a nonprofit, nonpartisan think tank focused on technology law and policy.
