Baltimore, Maryland — Artificial intelligence is developing at a rapid pace, and courts need to catch up, according to an attorney who advises businesses on the technology.
Matthew D. Kohel, a partner at Saul Ewing LLP, and Michele Gilman, a law professor at the University of Baltimore School of Law, discussed questions of liability for AI applications in healthcare and employment.
According to Gilman, the US lags behind other jurisdictions in regulating AI. “Because the United States does not have comprehensive AI regulation, we must rely on traditional doctrines such as tort, contract, and others to protect consumers from harm,” she said.
In contrast, the European Union has already enacted comprehensive AI legislation, most notably the EU AI Act.
Pending cases
Kohel, who advises businesses on AI-related issues, warns that liability is expanding along the AI supply chain, so companies that deploy AI should proceed with caution. He points to a federal class action lawsuit against Workday, a cloud-based HR platform.
“The lawsuit alleges that the system’s screening algorithms discriminated against applicants,” Kohel said.
Plaintiff Derek Mobley, a Black man over the age of 40 who suffers from anxiety and depression, applied to more than 100 jobs through Workday’s applicant screening system. He was rejected every time, despite being qualified for the roles. Several rejections arrived overnight, leading him to conclude that his résumé had never been reviewed by a human and that the algorithm was systematically screening him out.
Kohel also points to cases in which candidates were rejected until they resubmitted applications listing a younger age, leading to claims of age discrimination by the algorithm.
Kohel said states might look to New York City as a national model.
“NYC Local Law 144, also known as the Automated Employment Decision Tools law, governs the use of automated tools in hiring and promotion decisions in New York City,” Kohel said. “The law took effect in the summer of 2023 and requires independent bias audits of automated employment tools.”
Employment and AI
Gilman cites another case, this one involving AI and housing.
“In a fair housing discrimination case against the developer of a tenant-screening algorithm, the court ruled that the developer was not liable because its contracts and marketing materials made clear that downstream users, such as landlords, were responsible for the tool’s outcomes,” she said.
Gilman emphasized the importance of creating AI laws that prioritize affected populations.
“Private companies should not be the arbiters of what these laws look like, and we also need to guard against lobbyist-driven legislation,” she said.
So how are employers supposed to protect themselves? Gilman said the Equal Employment Opportunity Commission had issued guidance to employers on the use of AI, but the Trump administration has since revoked it.
“But existing federal and state anti-discrimination laws still apply,” she said.
Healthcare and AI
Gilman said courts will sort out questions of healthcare AI liability over the next few years.
“The problem is complicated, especially because of the ‘black box’ nature of AI tools. These systems are so complex that even their developers cannot always explain how a particular outcome is produced,” she said.
Gilman warns against over-regulation that could stifle innovation, but she insists that human oversight must remain central.
“Health professionals should remain responsible for diagnosis and treatment. AI is a supplementary tool, not a replacement for human judgment, and patients should be notified when AI is used,” she said.
The road ahead
Both experts agree on the need for robust laws governing AI.
“AI developers, deployers, and users can all bear responsibility for the harms AI generates, and the laws need to recognize the new forms of harm AI creates. At the end of the day, AI is a tool that humans and businesses use.”

