Experts say that differing state-level policies on artificial intelligence (AI) create a complex legislative landscape to navigate, but they can also create opportunities.
The federal government has not yet implemented a comprehensive policy addressing the development and use of AI, leaving states primarily responsible for crafting their own. While many AI-related bills are under consideration in Congress, states are not waiting on federal action to enact AI governance.
For example, as Nevada CIO Timothy Garge told Government Technology, AI will be a focus of Nevada’s 2025 legislative session.
“We’re now at the start of the legislative session, so we are actively tracking multiple [bills] on AI and generative AI from hardworking lawmakers,” Garge said.
Nevada released its AI policy in November, guiding the technology’s use across state government. Many other states now have their own policies aimed at AI, including California, Indiana, New Jersey, Ohio and more.
“We do see differences between states, but when a state adopts AI usage policies or laws for public sector AI, it creates a lot of clarity and clear rules for all government agencies and all government service providers in the state.”
Having a variety of policies at the state level creates learning opportunities, where state and local entities can work together to share knowledge and best practices, Anex-Ries said, citing the GovAI Coalition as one such collaborative model.
Some state-level policies, such as Arizona’s, are designed to be more adaptable to the rapid evolution of AI. That approach can be advantageous in several ways, Anex-Ries said; technology-agnostic policies keep agencies agile as AI technology changes.
When it comes to government operations, procurement rules and regulations already exist in the states and, as written, often apply to AI, Anex-Ries said. In some cases, additional requirements may complement those existing frameworks.
“Government service providers are already thinking about how to navigate the landscapes of different states,” Anex-Ries said, arguing that the rise of state-level AI policies will improve the procurement process for both the public and private sectors.
Aaron Pointn, board chairman of the American Association for AI, cautioned that while AI presents many opportunities, statehouses approaching policy individually can create a fragmented patchwork of laws. That could push businesses to move out of states whose policies they dislike, he said. Pointn said he believes there is an informal competition among states to attract AI startups through incentives, philosophy and principles.
The impact of federal policy on states
At the federal level, a January executive order rescinded the October 2023 executive order on AI, with the stated aim of positioning the United States as a leader in the AI sector. That federal action could have an impact on state governments.
“I think what we’ve already seen is that states are learning from different approaches at the federal level,” Anex-Ries said, pointing to Office of Management and Budget guidance, federal agencies’ approaches and the federal AI inventories.
State-level policy work so far has focused primarily on studying AI and creating AI inventories, the latter reflecting the success of the federal AI inventories and the bipartisan support behind them. Creating such resources, he noted, gives state lawmakers a basis for taking informed action grounded in their state’s current AI use cases.
Tara Wisniewski, Vice President of Advocacy, Global Markets and Member Engagement at ISC2, noted that ISC2 member organizations are seeking further guidance on AI use.
“One of the things we regularly advocate for is for policymakers to recognize and understand the need for regulatory harmonization,” Wisniewski said, noting that a lack of such harmonization can create complex compliance environments from a cybersecurity perspective. That challenge is likely to hit smaller companies hardest, she said, because they lack the robust cyber teams larger companies typically have.
Looking to the 2025 legislative sessions, Anex-Ries said he hopes states will recognize the risks of AI in the public sector and continue the trend of crafting legislative proposals for government use of the technology. Those may include more inventory requirements, risk management guidance and the designation of staff to coordinate AI activities.
CDT continues to release analysis as state lawmakers advance AI legislation. The work of implementing effective AI governance continues, he said: “We are at the beginning of these state and local processes, and there are many opportunities for learning and collaboration ahead as lawmakers, civil society and academics continue to delve into the outcomes of these new policies.”