The EU is making it easier for individuals to sue manufacturers of artificial intelligence systems that harm them, as part of the bloc’s broader push for consumer-friendly technology laws.
The new rules, which complement the European Union’s AI law, aim to ensure that “no matter what happens, there will always be companies in the EU who are responsible,” said Alexander Kennedy, director of technical knowledge for Continental Europe at Clifford Chance.
On December 8, the European Union updated its Product Liability Directive, a roadmap that the bloc’s 27 member states must follow when enacting their own laws. The update ensures that technology products are covered, gives certain harmed individuals a means to seek compensation, and clarifies where responsibility lies: manufacturers must manage, and remain liable for, the continuous updates they make to their products.
Barry Scannell, a partner at William Fry in Dublin, said the directive would streamline the process for individuals considering litigation. “You no longer need to prove negligence, you just need to prove that the product was defective and caused the damage.”
Only damage to individuals and their property is covered. That includes physical harm to people or property, such as a program that monitors physical infrastructure failing and flooding a building; emotional harm, such as a chatbot that induces users to self-harm; and the loss of digital data. The rules do not help companies sue one another.
The directive’s provisions will take effect by December 9, 2026, once they have been transposed into each EU member state’s national law.
The EU’s efforts to set rules for artificial intelligence reflect a proactive approach to regulation. This is in sharp contrast to the United States, where Congress has taken virtually no action, leaving federal agencies, a handful of state legislatures, and the courts to pave the way.
Most of the key players in this field are American, but in the absence of meaningful legislation in Congress, European AI laws are currently the closest thing to a global standard regulating the use of the technology.
Tech industry reaction
The directive does not address some of the big AI risks that have worried policymakers for the past few years, such as workplace bias. The EU’s landmark AI law, approved earlier this year, mandates safeguards against high-risk uses of AI, but does not focus on liability. The EU is currently working on further legislation, the AI Liability Directive, to answer further questions about who is responsible when AI harms people.
The tech industry, which has urged the EU to approach liability with caution, warned that the impact of the bill remains unclear.
“This is especially true when it comes to the interpretation of the concept of defect and the application of the reduced burden of proof,” Marco Leto Barone, policy director at the Information Technology Industry Council, said in a statement. ITI represents many of the world’s largest technology companies, including AI giants such as Alphabet, Microsoft, and Meta.
He said the EU should consider withdrawing the proposed AI Liability Directive, arguing that EU countries’ existing liability laws are already comprehensive. He also warned that increasing liability risks for developers would “make investments in AI innovation more complex and expensive, and generally undermine the EU’s competitiveness.”
Impact on companies
One of the most important changes in the latest Directive is that manufacturers will now be responsible for their products on an ongoing basis, not just the moment they put them on the market.
Companies will need to closely monitor the software and AI they design. Under the new rules, they could be held liable not only for a product being defective when it was placed on the market, but “you could actually be held liable for failing to update or monitor risks,” said Desislava Savova, who heads the global consumer goods and retail sector group and the technology sector group at Clifford Chance in Europe.
For businesses based outside the EU trying to work out whether they are affected, the answer is more of a yes-and-no, says David Kidman, a partner at Simmons & Simmons in London who focuses on product liability disputes.
On the one hand, EU product law aims to give claimants someone to sue within Europe. But companies should also look at the bigger picture, Kidman said, adding: “The EU is seriously working on consumer-friendly legislation around AI.”
“It must be remembered that a main driver of this review was the concern about claimants’ ability to bring claims against AI producers and other companies in the supply chain,” an approach that is also reflected in other EU legislation, including the AI Act. “When you start piecing together all of these different laws and the sentiment behind them, I think it’s important to not get too complacent, whether you’re an AI producer or someone else in the supply chain,” Kidman said.
As companies await further AI liability legislation and analyze what the updated product liability rules mean for them, they should closely review their contracts to ensure they are watertight, advisers said.
“What we’re talking about here is customer representations, warranties, liability terms, limitations of liability, exclusions of liability, and indemnifications. These are all very important, and in most organizations they are not updated frequently,” Scannell said.
Companies should now rethink how they approach AI contracts, he added. “The end user license agreement, the terms of service, the contractual protections, all of that has changed. Have you responded to that?”