All over the country, state legislatures are taking action on artificial intelligence.
States such as Maryland have enacted laws that protect children from manipulative AI-driven content online. In North Dakota, Wisconsin, Alabama, and other states, legislatures have acted to protect voters from being misled by AI-generated content. Laws in states such as Kansas and Montana protect critical infrastructure and national security and cybersecurity interests.
States are leading the way as our constituents ask us to prevent the worst abuses and threats of AI. Each of these laws, and hundreds like them, was passed by state legislatures on a broad, bipartisan basis. Yet each is now under threat of nullification by a little-noticed provision in the reconciliation bill passed by the U.S. House of Representatives.
As leaders of the National Conference of State Legislatures' Task Force on Artificial Intelligence, Cybersecurity and Privacy, we have worked over the past few years with colleagues across the country who are at the cutting edge of the AI revolution. We have worked with the industry to understand how these technologies work, their potential to improve our lives, and how malicious actors are exploiting them. We have used this knowledge to develop targeted, thoughtful, bipartisan laws that protect our constituents.
These laws are not anti-innovation. All over the country and across the aisle, state legislators are excited about the possibilities of AI and want the United States to lead the world in this technology. We believe AI can unlock new economic growth, much as the advent of the internal combustion engine and the telephone once did. But even as new technologies present opportunities, it is policymakers' duty to consider their potential downsides and address them appropriately.
The preemption clause in the reconciliation bill states plainly that for the next decade there shall be no state enforcement of laws or regulations governing artificial intelligence models, artificial intelligence systems, or automated decision systems. If enacted, it would wipe out years of careful work by states to adopt laws that balance innovation with consumer protection, with no federal law governing AI to take their place.
The nation's top law enforcement officers agree. A bipartisan group of 40 state attorneys general recently wrote: "The impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI."
We understand why some in the AI industry want state laws preempted. They argue that innovation suffers under a patchwork of regulations that do not line up perfectly. We have heard that argument from other industries over the years. We are not buying it.
The truth is, policymaking is a collaborative effort. Our state legislatures have built our regulatory frameworks alongside the industry. We have heard its concerns and, at every step of the way, balanced them against the privacy, security, and safety needs of our constituents.
Under our constitutional framework, in the absence of federal legislation, states are not merely permitted to act; they are obligated to. We have both the right and the responsibility to protect our constituents through laws that reflect distinct local contexts and challenges. A uniform approach should be achieved through collaborative policymaking, not simply imposed through a draconian decade-long freeze.
When the Senate takes up the reconciliation package, we urge senators to strip out the preemption provision and allow states to continue leading the way, guarding against the risks of AI while embracing its potential.
Thomas C. Alexander is the President of the South Carolina Senate. Jacqui Irwin is a member of the California State Assembly. They co-chair the National Conference of State Legislatures' Task Force on Artificial Intelligence, Cybersecurity and Privacy.
The Governing Opinion column reflects the views of the authors and not necessarily those of Governing's editors or management.