California Governor Gavin Newsom recently vetoed a bill seen as a potential blueprint for a national AI law. Mr. Newsom’s veto should serve as a national model for dealing with similar regulations.
The National Conference of State Legislatures is tracking more than 450 AI-related bills, and many states are facing or will soon face similar challenges. California has 40 AI-related bills on the legislative agenda.
That’s more than nine AI bills per state. These measures range from Alaska’s failed bill to create an AI task force to Rhode Island’s bill aimed at limiting AI-related emissions. Other bills address deepfakes and fake audio, elections, workforce concerns, and various aspects of daily life.
What is clear is that we will likely soon face a national patchwork of AI-related laws posing daunting compliance challenges for even the largest AI developers and users. Businesses, nonprofit organizations, educational institutions, and others may be subject to regulation simply because of their presence in a jurisdiction. Laws such as the European Union’s General Data Protection Regulation and the California Consumer Privacy Act, which were enacted to regulate data privacy rather than AI, claim extraterritorial reach, further complicating the application of state law.
Newsom cited concerns that “the very innovation that promotes progress in the public interest will be stifled.” These concerns should resonate with every governor and every state legislature.
This is not to say that AI should remain unregulated. Smart AI regulation has several important characteristics.
First, jurisdictions should not enact AI-specific laws that duplicate existing ones. Laws written to apply only to humans may instead need to be amended to ensure they cover AI systems and their users.
Second, AI regulations should be embedded within existing regulations on similar topics. If AI-specific employment rules are needed, for example, they should be placed within the employment code so they can be easily found and updated in tandem with similar rules.
Third, AI regulations should be written to apply only to the use of the technology, not to the technology itself. Attempts to regulate products, rather than particular uses or development practices, risk impinging on free speech, undermining technological development, and driving companies out of the market. European Union regulations, for example, have kept companies from releasing new technologies within its market.
Finally, the law should avoid extraterritorial scope. Extraterritorial laws can cause confusion and compliance difficulties for companies facing conflicting requirements for the same conduct. In the United States, such laws may also violate the Constitution’s assignment of interstate commerce regulation to the federal government.
Newsom said that “adaptability is critical as we rush to regulate” technologies that are still in their infancy, and that he acted to protect the state’s “pioneers in one of the most important technological advances in modern history.”
The “32 of the world’s 50 largest AI companies” that Mr. Newsom noted are located in California may not exist in other states, but the need to avoid damaging the industry is clear. This is an area where Newsom’s actions demonstrate that transparent, bipartisan cooperation to protect national AI capabilities is possible.
Jeremy Straub is the director of North Dakota State University’s Institute for Cybersecurity Education and Research.