AI Legislation

Texas’ sweeping AI bill threatens innovation and economic growth

By Oliver Roberts | January 6, 2025 | Updated: February 13, 2025

A sweeping artificial intelligence regulation bill introduced in Texas last month would impose the strictest state-level regulations on AI in the United States and threaten to stifle innovation and growth in the state.

Texas Representative Giovanni Capriglione (R) filed HB 1709, the Texas Responsible AI Governance Act (TRAIGA), on December 23. The bill would impose aggressive AI regulations across all industries and increase compliance costs for Texas companies that use AI.

TRAIGA adopts a risk-based framework for AI regulation, modeled on the European Union’s AI Act. The framework classifies AI systems according to their perceived risk level and imposes stricter regulations on systems classified as high-risk.

However, TRAIGA, and risk-based frameworks in general, are fundamentally flawed and regulate speculative uses of AI technologies rather than actual societal harm. For example, while TRAIGA prohibits the use of AI to perform social scoring, companies can conduct social scoring through means other than AI. In other words, the bill would penalize the use of AI rather than the underlying harmful activity.

Risk-based frameworks subject entire industries to broad AI regulation, often overlooking sector-specific nuances. As a case in point, TRAIGA imposes burdensome obligations on developers, adopters, and sellers of “high-risk” AI systems and bans the development of certain AI systems altogether.

So far, Congress has not passed any comprehensive legislation regulating or banning the development or use of AI. States have become more proactive in regulating AI, with more than 31 states adopting resolutions or enacting AI legislation last year. But the laws generally target specific areas, regulating activities such as deepfakes in elections and the use of AI in job interviews.

Colorado is the only state to pass a comprehensive AI bill. With the Colorado AI Act enacted in May 2024, the state adopted a risk-based approach similar to the EU AI Act and TRAIGA.

But TRAIGA goes further than Colorado’s AI law. For example, it defines a high-risk AI system as any AI system that is a “substantial factor” in consequential decision-making.

A substantial factor, in turn, is broadly defined as a factor that is “taken into account in making a consequential decision,” is “likely to change the outcome of the consequential decision,” and is “weighed more heavily than any other factor contributing to the consequential decision.” The bill provides no further framework or clarity for applying this vague definition.

A consequential decision, meanwhile, is broadly defined as any decision that has a material, legal, or similarly significant effect on a consumer’s access to, or the cost or conditions of, criminal proceedings and related matters, educational admissions or opportunities, employment, insurance, financial or legal services, election or voting processes, and the like.

These core definitions illustrate the ambiguity and subjectivity of the bill. Classification as a high-risk AI system depends on whether the system is a “substantial factor” in a consequential decision, while the definition of “substantial factor” itself refers back to consequential decisions. Neither term can be understood independently of the other, and this circular dependency creates ambiguity.

The vagueness and circular nature of these definitions can give bureaucrats too much discretion to decide which AI systems are high-risk, increasing compliance costs for companies.

TRAIGA imposes obligations on developers, implementers, and distributors of systems deemed high risk. These responsibilities include mandatory risk assessments, record-keeping and transparency measures.

The bill would require AI distributors (persons other than developers who place AI systems on the market) to withdraw, disable, or recall noncompliant high-risk AI systems under certain conditions. TRAIGA would also require AI developers to maintain detailed records of their training data, an extraordinarily burdensome requirement given the trillions of data points on which large language models are trained.

TRAIGA also prohibits certain AI systems said to pose unacceptable risks. These include AI systems that manipulate human behavior, perform social scoring, capture certain biometric identifiers, infer sensitive personal attributes, perform certain forms of emotion recognition, or produce explicit or harmful content.

However, blanket bans such as those proposed under TRAIGA risk crushing innovation by banning technologies before the full range of their benefits and risks are understood.

Many of these AI capabilities have immense potential for socially constructive applications, such as improving medical diagnoses, streamlining legal processes, enhancing cybersecurity, and enabling personalized educational tools.

TRAIGA includes narrow exemptions for small businesses and for AI systems in research and testing under its sandbox program, but these carve-outs offer only temporary relief and do not justify a burdensome regulatory framework that raises compliance costs across the industry.

Under TRAIGA, startups and companies with limited resources would be forced to navigate complex compliance questions, assess their eligibility for exemptions, and prepare for significant regulatory burdens once they outgrow small-business status or exit the sandbox program. This creates unnecessary barriers to innovation.

Perhaps the most concerning aspect of the bill is that it would pave the way for lawmakers in other states to introduce similarly heavy-handed AI regulations across the country. That would be a grave mistake.

Policymakers should instead focus on narrowly tailored laws that directly address specific, real-world harms. For example, if a particular activity is considered inherently harmful, it should be prohibited by law with clear definitions and enforcement mechanisms, including both AI and non-AI implementation methods. This approach ensures that truly harmful uses are addressed without exposing benign AI systems to the same level of oversight and compliance costs.

By prioritizing sector-specific rules, policymakers can protect consumers without impeding technological progress or ceding control of AI to adversaries.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author information

Oliver Roberts is co-head of Holtzman Vogel’s AI practice group and CEO and co-founder of Wikard, a legal AI technology company.
