A bitter new fight over state-level AI legislation has erupted in the United States, following the veto of a controversial bill in California last September.
The bill in question is the Texas Responsible AI Governance Act (TRAIGA), formally introduced by Republican Texas Representative Giovanni Capriglione just before Christmas. It would outlaw some uses of AI and impose heavy compliance obligations on both developers and deployers of “high-risk” AI systems, meaning those that become a “substantial factor” in consequential decisions about people.
The bill, also known as HB 1709, differs from California’s vetoed bill, which focused on the more theoretical, catastrophic risks AI poses to life and property; TRAIGA is primarily concerned with preventing AI from being used to discriminate against people.
TRAIGA would have a significant impact on the adoption of AI systems in Texas, the world’s eighth-largest economy, particularly in use cases such as hiring. (However, the bill would also create grants for Texas-based AI companies, and for local community colleges and high schools, to help train people in the use of AI.)
Critics have warned that HB 1709 could effectively apply to AI developers outside Texas and outlaw models such as OpenAI’s GPT-4o, and some argue that TRAIGA embodies the dangers of state-level AI regulation. (There is no federal AI law in the United States, and it is unclear whether such an idea will advance under the second Trump administration.)
“The Texas AI bill is exactly the state-level overreach that America needs to avoid. Federal and state anti-discrimination laws already apply to AI, so this measure would effectively add a third layer of regulation,” said Hodan Omaar, senior policy manager at the Information Technology and Innovation Foundation (ITIF), a D.C. think tank whose backers include Alphabet and Microsoft.
“These are broad, systemic measures that should be handled at the federal level, if necessary,” Omaar added. “A patchwork of state mandates like [TRAIGA] risks derailing a coherent national approach and slowing the important progress the country is making toward an integrated and effective AI strategy.”
TRAIGA resembles the Colorado AI Act, the most comprehensive state-level AI law to date, which was passed last year and is scheduled to come into force in February 2026. Both measures appear to draw heavily on the EU’s AI Act, which was also passed last year.
The fate of HB 1709 will be decided quickly, thanks to Texas’s unusual legislative calendar, under which bills are considered only from January to June of odd-numbered years. Capriglione (who did not respond to interview requests) has proposed that TRAIGA take effect in early September.
The Texas attorney general would enforce the law, with the power to impose fines of up to $200,000 per violation, plus administrative penalties of up to $40,000 per day for those who ignore TRAIGA. These fines are significantly higher than in an earlier draft of the bill that circulated before its formal introduction.
Prohibitions and restrictions
The bill would ban the use of AI to manipulate or deceive people without their knowledge, to sort them through “social scoring” systems, or to infer racial or sexual characteristics from biometric information. It would also prohibit using AI to identify people based on images gathered from the internet or other public sources.
AI systems capable of creating sexual deepfakes would also be banned. That provision worries even some experts who are broadly supportive of TRAIGA, such as Matt Scherer, senior policy counsel at the Center for Democracy & Technology (CDT), who argues that it raises free-speech concerns.
Under TRAIGA, developers, distributors, and deployers of AI models (with an exemption for small businesses) would have to use “reasonable care” to protect consumers from the risk of intentional or unintentional algorithmic discrimination. Developers would also have to communicate a model’s limitations and risks to deployers, along with metrics on its “accuracy, explainability, transparency, reliability, and security” and details of the steps taken to “test the suitability of data sources and prevent unlawful discriminatory bias” in the datasets used to train the model.
AI developers would also need to keep “detailed records” of their training data. This appears to go beyond the EU’s AI Act, currently the world’s most significant comprehensive AI law, which only requires AI companies to provide a summary of their training data.
If developers discover that their model fails to comply with the law in any way, they would have to immediately withdraw or disable it as necessary to cure the violation. Similarly, if deployers become aware of a risk of algorithmic discrimination, they would have to stop using the AI system and notify its developers and distributors. And if a model poses a risk of algorithmic discrimination, “deceptive manipulation or coercion of human behavior,” or the unlawful use or disclosure of personal data, developers would need to investigate the issue and report it to the Texas attorney general.
Unlike California’s failed SB 1047, TRAIGA would impose these obligations on developers of all sizes, not just those training the largest and most capable models.
Deployers of high-risk AI systems would have to perform (or commission) an impact assessment annually, and within 90 days of any “intentional and significant” change to the system.
“For frontier language models, such changes occur on a roughly monthly basis, so both developers and deployers of such systems can expect to be continually creating and updating these compliance documents,” wrote Dean W. Ball, a researcher at George Mason University’s Mercatus Center, in a blog post last week.
TRAIGA would require those deploying consumer-facing “high-risk” AI systems to tell consumers clearly that they are interacting with AI, and to explain whether the AI could be a “substantial factor” in consequential decisions about them. Social media companies would need to stop advertisers from deploying AI systems on their platforms that could expose users to algorithmic discrimination.
Consumers would have the right to appeal consequential AI-driven decisions that negatively affect their health, safety, or fundamental rights, and the right to know whether and how AI systems are using their personal data. But like Colorado’s AI law, the Texas bill would not give consumers the right to sue individually over violations.
New regulator
TRAIGA would also create a Texas AI Council, housed in the governor’s office and made up largely of members of the public with relevant expertise.
The council would look for ways AI could make the state government more efficient, and identify laws and regulations that could be reformed to aid AI’s development. It could also issue standards for ethical AI development, and would be able to “investigate and assess” technology companies’ “impact on other companies, and the existence or use of tools and processes designed to censor competitors or users.”
Ball argued that TRAIGA itself would impede AI development and lead to “massive censorship of generative AI,” making the Texas AI Council’s anti-censorship powers look “ridiculous.” He dismissed the bill as “a perfect example of the ‘Brussels effect,’ where Europe’s tendency toward early and deep regulation causes other jurisdictions to adapt to European standards simply by virtue of institutional momentum.”
CDT’s Scherer disagreed, arguing that TRAIGA is “not as broad or burdensome” as the EU’s AI Act.
In fact, Scherer argues that TRAIGA should be more stringent than it currently is. He pointed out that an earlier draft of the bill covered AI systems that were a “contributing factor” in consequential decisions, in line with Colorado’s law, whereas the formally introduced version refers only to a “substantial factor.”
“That definition allows companies to evade the law by simply having humans rubber-stamp algorithmic ‘recommendations,’” Scherer said. “That’s exactly what happened with New York City’s AI hiring law. As long as that loophole exists, the content of the remaining provisions on AI-driven decision-making doesn’t really matter.
“We hope there is still time to close this and other loopholes before the bill is debated.”
This article originally appeared on Fortune.com