3D rendering of AI robots with a cyber law or internet law concept, featuring legal scales and a judge's gavel
Getty
Law is one of the most fundamental mechanisms in our society, and one of the most complex. It hinges on semantics, on words and rhetoric. It also gives us the definitions and guardrails needed to move forward in both business and personal life.
In business, artificial intelligence presents a new frontier, and we need to think about how AI can be applied to the legal system. There have been early reports of LLM output being used recklessly: briefs and documents containing hallucinations that no human reviewer checked.
So how do we move this forward?
AI and legal ideas
A recent panel at the IIA event in Davos took this question up in detail.
Gabriele Mazzini, who helped author the European AI Act, argued that by making these processes safer, the law was intended to enhance innovation and adoption.
Pablo Arredondo, who founded the legal tech company Casetext, spoke about the need to provide tools that demonstrate oversight, for example, with transparent sourcing.
“We’ve started to realize we have to think about: how do we design our products to minimize the chances of harm?” he said, referring back to the early days of GPT-4. “And that’s… to monitor the system directly. And how do you train and teach people about it? The law needs to regulate this, because if you don’t do it well, it can do a lot of harm.”
Julia Apostle of Orrick noted that she embraced the idea of clarifying the rules around the use of AI, rather than listening to those trying to avoid regulation.
She argued that regulation is not a one-size-fits-all proposition: it does not apply in the same way to every business, and it sits within a larger, challenging framework.
“I think one of the challenges for small businesses is the sheer amount of parallel legislation,” she said. “It’s not just the AI Act, it’s the Digital Services Act, the Cyber Resilience Act, the copyright regime, etc. That creates more challenges than any one specific law.”
Mazzini spoke about “multiple legal frameworks.”
“It all depends on implementation, in my opinion,” he said. “For example, the sandbox mechanism is generally supposed to give small businesses better access.”
Start moving uphill
In response to questions about the challenge, Arredondo spoke about how a grounding in law helps in crafting good responses to AI regulation, and also noted the rapid advances in AI models.
“If you understand the law and have worked as a lawyer, I think it helps,” he said. “It’s not going to do it all. And I think another interesting aspect of this is that when we started in 2023, GPT-4 was essentially the only game in town that we thought was up to the task for the law. And there was debate about whether (anyone was) even going to get close to GPT-4. And now, what we’re seeing is very cutting-edge open source models coming out, including ones released just late last year.”
Apostle weighed in:
“Obviously, compliance tools run a whole range,” she said. “It’s going to be something that everyone needs to think about. How can we actually create tools that will help us with these tasks? And identifying the risks is a big thing, right? Because it’s one thing to say, ‘Here’s your risk category,’ but you have to find out how the risks actually play out in the product.”
Accepting regulators
Mazzini, speaking about the business future of AI, suggested that businesses generally don’t need to fear overly broad regulation.
“I think there are a few misconceptions about European law: that it’s a huge, burdensome law that applies equally and indiscriminately to every company and every product. That’s the first message to correct, and it helps companies identify how they will actually be affected as they develop their tools.”
Open Source Discussion
Eventually, talk turned to open source versus closed models. Arredondo mentioned the input of Yann LeCun of Meta and his enthusiasm for open source systems.
“My sense is, as an entrepreneur, what you need to do is… ask yourself: you may not fall directly under this thing, but if your AI is doing the kind of thing they’re targeting, you should at least get counsel involved, or start thinking about it.”
Mazzini looked at the open source rules, pointing out that the AI Act’s approach focuses different obligations on models with different risk profiles.
“I think that discussion led to a new chapter with rules that apply to all foundation models, with a few exceptions when it comes to open source,” he said. “For example, copyright protection provisions apply to open source models as well. There is then an additional set of rules for models that pose a systemic risk, in which case the applicable rules (are meant to) apply to all models. Certainly, the discussion about open source (models) and their impact on certain risks, including existential risk, was definitely there, and it shaped the outcome.”
Apostle spoke about what the regulations look like in practice.
“European law hasn’t had much software regulation,” she said. “So this is already (a challenge of bringing software and law together). It’s not just the AI Act, but the Cyber Resilience Act, NIS 2: how will they be enforced with respect to open source software? It will be a very interesting development, and I think it’s a real challenge for the regulators.”
She also pointed out that lawyers have the advantage of seeing risks across a variety of industries. “It’s real insight, invaluable knowledge, that can then be used to develop tools to address specific risks,” she said.
Working with stakeholders
In response to questions about collaboration, Apostle suggested that law firms should be open to creating these kinds of partnerships.
“I think every profession needs to think about how we can evolve in the smartest and best way to serve our clients,” she said. “So, if it’s via a partner, then yes, all law firms should be open to partnerships, and I think that’s a big differentiator. And in reality, small firms can provide some of the services themselves and partner with different service providers to offer complementary systems and types of services, which increases their opportunities to evolve in a variety of ways.”
Arredondo agreed and offered this picture of what modern corporate legal work looks like.
“We have seen great internal solutions from companies for specific things, such as legal research and other content needs, but I think that means you have some things you completely outsource, primarily for content and lots of nuance, perhaps to partners and developers.”
He also invoked the electricity analogy, in which AI powers significant growth in products and services.
Mazzini explained how European law affects US companies and others outside the EU.
“In the EU approach, there is essentially an internal market, and we regulate the products that enter that market. So let’s say you are an American company and want to sell a product in the EU: if the product is regulated in the EU because it is high risk, or if it is prohibited or subject to a transparency obligation, then in that sense the obligations in reality also fall on developers outside the EU who want to enter the market.”
More insights
Finally, Mazzini spoke about enabling business.
“I think it’s important to enable innovation in Europe. What you need is the consistent application of the multiple EU laws that apply, so that companies know they have a clear path to innovation in the future,” he said.
“I think it’s all in implementation,” Apostle added. “And we’re at a stage where watching the space of implementation will be very interesting.”
Arredondo closed by suggesting that regulators and businesses could be on the same side of innovation.
“I think the people behind this regulation want (AI) to do something really good,” he said. “So I think society is on your side, overall.”
That gives businesses plenty to work with as they think about where the rules for AI applications stand and how the law operates.