The National Institute of Standards and Technology (NIST) campus in Gaithersburg, Maryland. Source: J. Stoughton/NIST.
When ChatGPT first caught the limited attention of Congress in 2023, many technology policy scholars (including myself) argued that the underlying technology was not entirely new. In fact, generative AI was already covered by a variety of existing laws, including those relating to consumer protection, discrimination, biometrics, and data privacy.
A glance at the bills introduced in the 117th Congress (2021-2022) reveals dozens addressing societal issues related to generative AI tools. They proposed creating new institutions, advancing privacy protections, improving platform transparency, labeling deepfakes, and reforming competition policy. Congress was holding hearings on AI tools even before 2017 (for example, “The Dawn of Artificial Intelligence”).
But this more nuanced view, that generative AI is just the latest step in the evolution of advanced data processing and machine learning, is not as profitable for the industry. It doesn’t generate headlines or slow down regulation the way a flashier narrative does: one suggesting that generative AI is such a groundbreaking innovation that only tech CEOs can understand how to protect the public, and that regulating it could somehow limit America’s competitiveness.
Here we are, in a new Congress with new leadership. Yet this time, instead of the ridiculous “Insight Forums,” congressional leadership has reached for a DOGE-inspired sledgehammer: a three-page moratorium on state AI laws inserted into the budget reconciliation package. This is especially troubling because generative AI is not a fundamental break from the past; it is part of a longer trajectory of machine learning and automated decision-making technologies. Framing “AI” as entirely novel opens the door to a moratorium that risks stripping consumers of protections under long-standing laws addressing child safety, privacy, fraud, and more.
Some legal scholars argue that laws of general applicability, such as state data protection statutes and unfair or deceptive practices laws, could survive under the pause as written. The bill’s rule of construction appears to permit enforcement of “generally applicable laws” so long as they apply equally to AI and non-AI tools.
But here is the problem: the bill’s definition of “artificial intelligence” is extremely broad (“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments”). In today’s digital marketplace, many, if not most, products and services include capabilities that fall within this definition, making it difficult to identify “AI” features that have clearly “non-AI” alternatives.
In any case, ambiguity favors the monopolists. OpenAI and companies like it have enormous legal teams ready to exploit every gray area. Meanwhile, a state attorney general may have only a handful of lawyers and limited resources to push back. This power imbalance holds even when state AGs, who already shoulder much of the enforcement work related to the digital environment, rely on decades-old consumer protection laws that may not be formally covered by the moratorium.
If we want national standards, put down the sledgehammer and build them
A comprehensive “AI policy” would include obligations around transparency, researcher access to training data, risk assessments, audits, and privacy rules. These kinds of mandates work better and make more sense when implemented at the federal level. But in the US system of governance, when one institution fails to act (as Congress has), others step in. In this case, state legislatures have begun to fill the gap. The states are effectively America’s digital regulators, and that will remain true, for better or worse, until Congress undergoes major reforms related to campaign finance, committee structure, staff capacity, and more. So now is the time to ask how strong AI policies can be implemented nationwide through the states.
One reason I favor national standards is that the internet has blurred not only state lines but international boundaries as well. Until recently, the National Institute of Standards and Technology (NIST) was deeply engaged in shaping international AI standards. Similarly, the State Department was actively involved in work on the safe deployment of AI systems through the United Nations’ Global Digital Compact, the G7 Hiroshima Process, the G20 Maceió Ministerial Declaration on digital inclusion, and several other forums.
I say “until recently” because both institutions are now facing significant cuts and shifting values. NIST’s international standards work relies heavily on internal research and on external partnerships funded through National Science Foundation (NSF) grants. But recent DOGE cuts have significantly reduced NSF support for AI ethics research, and support for standards work appears to be waning as well. For example, during a recent Senate hearing on AI competitiveness, OpenAI CEO Sam Altman gave a lukewarm response on the value of NIST standards when asked about the subject by Sen. Maria Cantwell (D-WA).
Senator Maria Cantwell (D-WA):
Do we need NIST to set standards? Yes or no, if you can, and just go down the line.
Sam Altman:
I don’t think we need it. It may be useful.
His tepid response suggests that Silicon Valley leaders may walk away from even voluntary standards guidance. This is bad news for NIST, which relies on bipartisan support in Congress. The State Department is not faring much better in this new political environment. The Science and Technology Adviser’s office was recently eliminated as a standalone office. And the Trump-era State Department no longer prioritizes free expression, LGBTQ+ rights, or gender-based violence in its human rights reporting, all values that matter in the development of sociotechnical AI standards.
There are many reasons why having 50 states legislate the internet and AI is not ideal. Navigating so many jurisdictions can overwhelm startups and civil society groups, while large tech companies can simply hire more lawyers. A less-discussed issue, however, is that states lack the diplomatic infrastructure described above to engage in international technology governance. Nevertheless, they increasingly reference international standards in their laws (as I explained in a recent article).
This lack of diplomatic infrastructure is not an insurmountable problem. Moreover, as other regions (such as the EU) move ahead of the US on privacy, transparency, and AI auditing, American consumers can benefit from international standards referenced in state-level laws. Furthermore, whereas the federal government could show up to international negotiations with values and frameworks, states can show up with laws that require harmonization. And laws carry far more weight in shaping global standards than abstract principles do.
Absent federal resolve to regulate AI, the states that are aggressively legislating on technology should work together through a new (or existing) entity that functions like an interstate-level NIST. That entity could represent the United States in international standards bodies, conduct shared research, and publish harmonized frameworks similar to the NIST AI Risk Management Framework (already referenced in state laws).
Of course, many details would need to be worked out. How would such an entity be funded and governed? What processes would it use to harmonize or coordinate laws? Which international organizations should it engage, and how can it best include consumers and civil society? There are many paths toward strong, consistent national AI regulation. But dismantling the progress the states have already made, without providing a new set of regulations in its place, is the laziest and most dangerous approach.