Versa AI hub
AI Legislation
Experts discuss who will pay when AI doesn’t work

By versatileai | August 26, 2025
Baltimore, Maryland — Artificial intelligence is developing at a rapid pace, and courts need to catch up, according to a New York-area lawyer.

Matthew D. Kohel, a partner at Saul Ewing LLP, and Michele Gilman, a law professor at the University of Baltimore School of Law, addressed questions of liability for AI applications in healthcare and employment.

According to Gilman, the US is lagging behind its counterparts on AI regulation. “Because the United States does not have comprehensive AI regulations, we must rely on traditional doctrines such as tort, contract, and others to protect consumers from harm,” she said.

In contrast, the European Union has already enacted a wide range of AI-related laws.

Pending cases

Kohel, who advises businesses on AI-related issues, warns that liability extends up and down the AI supply chain, so AI deployers should proceed with caution. He points to a federal class action lawsuit against Workday, a cloud-based HR platform.

“The lawsuit alleges that the system’s applicant-screening algorithms discriminate,” Kohel said.

Plaintiff Derek Mobley, a Black man over the age of 40 with anxiety and depression, applied to more than 100 jobs through employers that used Workday’s applicant screening system. He was rejected every time, despite being qualified for the roles. Several rejections arrived overnight, leading him to conclude that his resume had never been screened by a human and that the algorithm was systematically screening him out.

Kohel also points to cases in which candidates were rejected until they resubmitted applications listing a younger age, which led to claims of age discrimination by the algorithm.

Kohel said states might look to New York City as a national model.

“NYC Local Law 144, also known as the Automated Employment Decision Tools Law, governs the use of automated tools in employment and promotion decisions in New York City,” Kohel said. “The law came into effect in the summer of 2023 and requires an independent bias audit of employment tools.”

Employment and AI

Gilman cites another case, this one relating to AI and housing.

“In a fair housing discrimination case against the developer of a tenant screening algorithm, the court ruled that the developer was not liable because its contracts and marketing materials made clear that downstream users, such as landlords, were responsible for the outcomes of the tool,” she said.

Gilman emphasized the importance of creating AI laws that prioritize affected populations.

“Private companies should not be the arbiters of what these laws look like, and lobbyist-driven legislation must also be guarded against,” she said.

So how are employers supposed to protect themselves? Gilman said the Equal Employment Opportunity Commission issued guidance to employers on the use of AI, but the Trump administration has since revoked it.

“But existing federal and state anti-discrimination laws still apply,” she said.

Healthcare and AI

Gilman said courts will sort out healthcare and AI liability over the next few years.

“The problem is complicated, especially because of the ‘black box’ nature of AI tools. They are so complex that sometimes even their developers cannot explain how a particular outcome was produced,” she said.

Gilman warns against over-regulation that could hinder innovation, but she insists that human oversight must remain central.

“Health professionals should be responsible for diagnosis and treatment. AI is a supplementary tool and does not replace human judgment. Patients should be notified if AI is used,” she said.

The road ahead

Both experts agree on the need for robust laws governing AI.

“AI developers, deployers, and users are all responsible for the harms that AI generates, and the laws need to recognize the new forms of harm that AI has created. At the end of the day, AI is a tool that humans and businesses use.”
