Sen. Cynthia Lummis (R-WY) introduced a bill on Thursday that would shield AI developers from certain types of civil liability lawsuits if they publish design specifications for their models.
Specifically, the Responsible Innovation and Safe Expertise (RISE) Act makes clear that professionals who use AI in their practice, such as doctors, lawyers, engineers, and financial advisors, retain the legal responsibility to exercise due diligence and verify the system's output.
“This legislation does not create blanket immunity for AI. In fact, it requires AI developers to publish model specifications so that professionals can make informed decisions about the AI tools they choose to utilize. It also means that licensed professionals remain responsible for the advice and decisions they ultimately make.”
Liability for adverse outcomes when AI tools are used in professional contexts has been a gray area, with various states applying different standards. The proposed law would create a single federal standard.
“AI is transforming specialized industries, including medicine, law, engineering and finance, and these tools are increasingly being used in key decision-making processes that affect millions of Americans,” Lummis said in a statement. “Current liability rules create barriers to innovation by exposing AI developers to legal risk even when their tools are used responsibly by licensed professionals trained in their specialties.”
The bill is separate from a controversial provision of the big, beautiful budget bill pending in the Senate, which would impose a 10-year moratorium on states that enact or enforce laws regulating AI.
The moratorium provision would bar states from placing “legal impediments” on the design, performance, liability, or documentation of AI systems, defined broadly to cover any computational process derived from machine learning, statistical modeling, data analysis, or artificial intelligence that issues scores, classifications, or recommendations.
Lummis’ bill differs in that, rather than preempting state efforts to legislate liability for the use of such systems, it limits the relief afforded to AI developers to cases where their products are used by licensed professionals, and it attaches conditions to that relief.
“Developers can assert the safe harbor’s immunity only if they publish model cards and key design specifications, so that doctors, lawyers, engineers and other professionals can understand what the AI can and cannot do before relying on it for decisions,” her statement said.
The bill also does not affect liability for other AI applications, such as self-driving cars.
Lummis is a member of the Commerce Committee’s subcommittee on Consumer Protection, Technology, and Data Privacy, which has jurisdiction to consider the bill.