Some top law firms are turning to artificial intelligence experts to build compliance practices that more traditional legal fields cannot match.
They are hiring data scientists and technologists to test client systems for bias, ensure compliance with new regulations, and reimagine how their legal services are delivered.
AI’s often lifelike behavior has captured the public imagination, but the emerging field has also raised potential legal hurdles.
“Legal and technical issues are closely intertwined, and when we started this practice more than five years ago, we knew that to truly be an AI practice, we needed a legal and computational understanding,” said Danny Tobey, partner and global co-head of DLA Piper’s AI and Data Analytics practice.
Unlike other areas of law, such as environmental regulation or automotive safety, where legal professionals deal with complex details on a daily basis, AI poses unique challenges that require the expertise of engineers, Tobey said.
“AI is unique because we are not only talking about incredibly complex and novel technologies that are developing every day, but technologies that are simultaneously rewiring the infrastructure of how we practice law,” Tobey said in an interview. “True AI practices combine both legal and computational skill sets.”
DLA Piper is one of many multinational law firms adopting this strategy. Faegre Drinker has a subsidiary called Tritura that employs data scientists and advises clients on the use of AI, machine learning and other algorithmic technologies, according to the firm’s website. DLA Piper, which employs 23 data scientists, confirmed it hired 10 of them from Faegre Drinker last year.
Faegre Drinker did not respond to an email requesting comment.
Some firms are hiring engineers to incorporate AI into their own operations.
Last year, A&O Shearman announced an AI tool called Harvey, built using OpenAI’s ChatGPT platform, that can “automate and enhance various aspects of legal work, including contract analysis, due diligence, litigation, and regulatory compliance.”
Clifford Chance announced in February that it had introduced an internal AI tool called Clifford Chance Assist, developed on Microsoft’s Azure OpenAI platform. The firm said the tool will be used to automate daily tasks and improve productivity.
“Our team of legal technologists in the U.S. and around the world is evaluating how automation and AI solutions can help us as legal professionals,” Inna Jackson, Americas technology and innovation attorney at Clifford Chance, said in an interview.
Red teams and governance
To help clients determine whether their AI models operate within regulations and laws, DLA Piper regularly employs so-called red teams, a technique in which personnel simulate attacks on physical or digital systems to see how they hold up.
“We are working with major retailers to test a variety of facial recognition solutions to ensure they not only deliver on their technical promise, but also are legally compliant and follow the latest guidance from federal agencies and AI-related legislation,” Tobey said.
He noted that companies are also rapidly deploying AI in human resources, “from hiring to promotion to firing.”
“This is an incredibly regulated and dangerous field that increases the risk of algorithmic bias and discrimination,” he said.
Jackson said clients, large and small, are looking for sound governance.
Large clients are “interested in understanding what the right governance model is for deploying AI, building AI, partnering on AI,” Jackson said in an interview. Smaller clients, on the other hand, are more likely to be building governance practices from scratch, Jackson said.
“And governance means thinking through the processes, the controls, the laws and regulations that may apply, the best practices that may apply,” Jackson said. “So everyone is thinking about the best way to approach AI.”
DLA Piper and Clifford Chance are among some 280 organizations selected to join the Artificial Intelligence Safety Institute Consortium, part of the National Institute of Standards and Technology.
According to the AI Safety Institute, its goal is to “develop science-based, empirically supported guidelines and standards for AI measurement and policy, laying the foundation for AI safety around the world.”
Although Congress has not yet passed broad legislation covering the use of AI, the European Union’s AI Act, which came into force in August, applies to multinational companies that deploy AI systems in decision-making affecting EU citizens, Clifford Chance said in its advice to clients.
The EU provisions prohibiting discrimination and bias will have “significant implications for employers and HR professionals who use or plan to use AI systems in operations, recruitment, performance assessment, talent management and workforce monitoring,” Clifford Chance said.
“Customers, especially those with a global presence, want to know how they should think about applying EU AI laws to their broader operations, not just within the EU, but perhaps outside the EU,” Jackson said. Clients are seeking advice on creating a set of practices that will be accepted across jurisdictions, as “a segmented approach by market is clearly not practical,” Jackson said.
Tony Samp, head of AI policy at DLA Piper, said companies are trying to figure out what kind of AI guardrails will be enacted in the United States.
“There is a simultaneous need for the companies our data analysts, red teams, and lawyers work with to understand the AI regulatory landscape in Washington, D.C., and the direction of congressional interest,” Samp said in an email.
Samp previously served as a senior advisor to Sen. Martin Heinrich (D-N.M.), one of four senators appointed by Senate Majority Leader Charles E. Schumer to write a report on AI innovation and regulation.
Samp said the law firm recently hired former Sen. Richard M. Burr, a North Carolina Republican who served as chairman of the Intelligence Committee, to advise clients on the direction of U.S. AI legislation.