A sweeping artificial intelligence regulation bill introduced in Texas last month would impose the strictest state-level regulations on AI in the United States and threaten to stifle innovation and growth in the state.
Texas Representative Giovanni Capriglione (R) filed HB 1709, the Texas Responsible AI Governance Act (TRAIGA), on December 23rd. The bill would impose aggressive AI regulations across all industries and increase compliance costs for Texas companies that use AI.
TRAIGA adopts a risk-based framework for AI regulation, modeled on the European Union’s AI law. The framework classifies AI systems according to their perceived risk level and imposes stricter regulations on systems classified as high risk.
However, TRAIGA, like risk-based frameworks in general, is fundamentally flawed because it regulates speculative uses of AI technologies rather than actual societal harm. For example, while TRAIGA prohibits the use of AI to perform social scoring, companies can conduct social scoring through means other than AI. In other words, the bill would penalize the use of AI rather than the underlying harmful activity.
Risk-based frameworks subject entire industries to broad AI regulation, often overlooking sector-specific nuances. As a case in point, TRAIGA imposes burdensome obligations on developers, deployers, and distributors of "high-risk" AI systems and bans the development of certain AI systems altogether.
So far, Congress has not passed any comprehensive legislation regulating or banning the development or use of AI. States have become more proactive in regulating AI, with at least 31 states adopting resolutions or enacting AI legislation last year. But those laws generally target specific areas, regulating activities such as deepfakes in elections and the use of AI in job interviews.
Colorado is the only state to have passed a comprehensive AI bill. Its Colorado AI Act, enacted in May 2024, adopts a risk-based approach similar to the EU AI Act and TRAIGA.
But TRAIGA goes further than Colorado's AI law. For example, it defines a high-risk AI system as any AI system that is a "substantial factor" in making a "consequential decision."
A "substantial factor," in turn, is broadly defined as a factor that is considered in making a consequential decision, is likely to alter the outcome of that decision, and is weighed more heavily than any other factor contributing to it. The bill offers no further framework or clarity for applying this vague definition.
A "consequential decision," meanwhile, is broadly defined as any decision that has a material legal or similarly significant effect on a consumer's access to, or the cost or terms of, matters such as criminal cases and related proceedings, educational admissions or opportunities, employment, insurance, financial or legal services, and election or voting processes.
These core definitions illustrate the bill's ambiguity and subjectivity. Classification as a high-risk AI system turns on whether the system is a "substantial factor" in a "consequential decision," and "substantial factor" is itself defined by reference to the factors contributing to that decision. This recursive dependency prevents either term from standing on its own, leaving each definition reliant on the other and creating ambiguity.
The vagueness and circular nature of these definitions can give bureaucrats too much discretion to decide which AI systems are high-risk, increasing compliance costs for companies.
TRAIGA imposes obligations on developers, deployers, and distributors of systems deemed high risk. These responsibilities include mandatory risk assessments, record-keeping, and transparency measures.
The bill would require AI distributors, meaning persons other than developers who make AI systems available in the market, to withdraw, disable, or recall noncompliant high-risk AI systems under certain conditions. TRAIGA would also require AI developers to maintain detailed records of their training data, an enormously burdensome requirement given the trillions of data points on which large language models are trained.
TRAIGA would also prohibit certain AI systems said to pose unacceptable risks, including systems that manipulate human behavior, perform social scoring, capture certain biometric identifiers, infer sensitive personal attributes, perform certain forms of emotion recognition, or produce explicit or harmful content.
However, blanket bans such as those proposed under TRAIGA risk crushing innovation by banning technologies before the full range of their benefits and risks is understood.
Many of these AI capabilities have immense potential for socially constructive applications, such as improving medical diagnoses, streamlining legal processes, enhancing cybersecurity, and enabling personalized educational tools.
TRAIGA includes narrow exemptions for small businesses and for AI systems in research and testing under its sandbox program, but these carve-outs provide only temporary relief and do not justify a burdensome regulatory framework that raises compliance costs across the industry.
Under TRAIGA, startups and companies with limited resources would be forced to navigate complex compliance questions, assess their eligibility for exemptions, and prepare for significant regulatory burdens once they outgrow small-business status or exit the sandbox program. This creates unnecessary barriers to innovation.
Perhaps the most concerning aspect of the bill is that it would pave the way for lawmakers to introduce similar heavy-handed AI regulations across the country. That would be a grave mistake.
Policymakers should instead focus on narrowly tailored laws that directly address specific, real-world harms. For example, if a particular activity is considered inherently harmful, it should be prohibited by law, with clear definitions and enforcement mechanisms that cover both AI and non-AI means of carrying it out. This approach ensures that truly harmful uses are addressed without exposing benign AI systems to the same level of oversight and compliance costs.
By prioritizing sector-specific rules, policymakers can protect consumers without impeding technological progress or ceding control of AI to adversaries.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author information
Oliver Roberts is co-head of Holtzman Vogel’s AI practice group and CEO and co-founder of Wikard, a legal AI technology company.