Newsom’s veto of AI bill was smart choice

It’s hard to overstate the significance of Gov. Gavin Newsom’s praiseworthy decision to veto what would have been the most far-reaching artificial intelligence regulation in the country. The bill would have hobbled this emerging industry and put bureaucrats in charge of a technology that few regulators truly understand. The decision has nationwide implications, given that the entire industry would essentially have been forced to follow California’s lead.

Senate Bill 1047 proceeded from the view that AI is inherently dangerous. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act would “require that a developer, before beginning to initially train a covered model, as defined, comply with various requirements, including implementing the capability to promptly enact a full shutdown,” per the legislation. It would have created another state regulatory agency.

It’s almost as if lawmakers, who at times made it clear they didn’t understand the nature of the technology, had come up with a magical solution to a problem they saw in a Hollywood movie. Many of us watched the “Terminator” films and recall when the AI defense network Skynet became self-aware and attacked the human race. No one would bet that the California Legislature could outsmart the robots.

Although Newsom vowed to pursue other forms of AI regulation, he noted in his lengthy veto message that this bill took the wrong approach: “Key to the debate is whether the threshold for regulation should be based on the cost and number of computations needed to develop an AI model, or whether we should evaluate the system’s actual risks regardless of these factors.” The bill took the former approach.

Newsom also noted that 32 of the world’s 50 top AI companies are based in our state. He complained that the bill targeted only the largest AI models, even though those often handle lower-risk activities than smaller ones do, and said it would give the public a false sense of security.

Most problematic, the legislation focused on regulating AI computations and processes. In his critique of European Union AI regulations similar to SB 1047, U.S. Rep. Jay Obernolte, a Republican from Hesperia, explained that by “focusing on mechanisms instead of on outcomes” such regulations threaten “the values of freedom and entrepreneurship” and put bureaucrats in control.

Eight California members of Congress sent a letter to Newsom opposing the law. “Not only is it unreasonable to expect developers to completely control what end users do with their products, but it is difficult if not impossible to certify certain outcomes without undermining the rights of end users, including their privacy rights,” they argued. An opposition letter from tech associations rightly argued that the government should regulate misuse, not model development.

The last point is key in any regulation. Government should not be telling companies how to create their products and services, but should instead regulate misuse of them, or what economists call externalities (side effects). We opposed other AI-related bills Newsom signed, but at least they were less disruptive than this one, in that they tried to follow the model of stopping misuse (such as so-called “deepfakes”).

The other key principle in regulation, especially of complex and emerging technologies, is not to crush them in their infancy. We don’t often agree with Newsom, but we appreciate his thoughtfulness on this important decision.
