Versa AI hub
AI Legislation

How AI can support justice instead of undermining it

November 19, 2024

Interpol Director-General Jürgen Stock recently warned that artificial intelligence (AI) is facilitating crime on an “industrial scale” through deepfakes, voice simulations, and forged documents.

Police forces around the world are also turning to AI tools such as facial recognition, automatic license plate readers, gunshot detection systems, social media analytics, and even police robots. As judges adopt new guidelines for the use of AI, the use of AI by lawyers is similarly “spiking.”

While AI promises to transform criminal justice by improving operational efficiency and improving public safety, it also comes with risks related to privacy, accountability, fairness, and human rights.

Concerns about AI bias and discrimination are well-documented. Without safeguards, AI risks undermining the very principles of truth, fairness, and accountability on which our justice system relies.

A recent report from the University of British Columbia’s Faculty of Law, “Artificial Intelligence and Criminal Justice: An Introduction,” highlighted the myriad impacts that AI is already having on people in the criminal justice system. Here are some examples that highlight the importance of this evolving phenomenon.

The promises and dangers of AI-powered police

In 2020, a New York Times investigation exposed the widespread influence of Clearview AI, a US company that had built a facial recognition database using more than 3 billion images scraped from the internet, including social media, without users’ consent.

Police agencies around the world that used the program, including some in Canada, faced public backlash. Regulators in multiple countries found that the company had violated privacy laws, and it was asked to cease operations in Canada.

Yet Clearview AI continues to operate, pointing to success stories: it has reportedly helped exonerate a wrongly convicted person by identifying a witness at a crime scene, identified a child abuser in a way that led to the child’s rescue, and even spotted suspected Russian soldiers trying to infiltrate Ukrainian checkpoints.

A Metropolitan Police notice of facial recognition activity, May 2023. (Shutterstock)

But facial recognition is prone to false positives and other errors, especially when identifying Black and other racialized people, and there are long-standing concerns that it exacerbates systemic racism, bias, and discrimination.

Some Canadian law enforcement agencies embroiled in the Clearview AI controversy have since responded with new measures, such as the Toronto Police Service’s policy on AI use and the RCMP’s transparency program.

But other agencies, like the Vancouver Police Department, have promised to develop policies but have not delivered, while at the same time seeking access to the city’s traffic camera footage.

Regulating police use of AI is therefore a pressing concern if we are to navigate its promise and dangers safely.

Deepfake evidence in court

Another area where AI is presenting challenges in the criminal justice system is deepfake evidence such as AI-generated documents, audio, photos, and videos.

This phenomenon has already led to cases in which one side claims the other’s evidence is a deepfake, casting doubt even when it is legitimate. This is called the “liar’s dividend.”

A high-profile example of such allegations is the case of Joshua Doolin, who was charged with, and ultimately convicted of, crimes related to the January 6, 2021, riot at the U.S. Capitol. Doolin’s lawyers argued that prosecutors should be required to authenticate video evidence obtained from YouTube, raising concerns about the potential use of deepfakes.

Given high-profile deepfake incidents involving celebrities, juries may be particularly susceptible to doubts about whether evidence is genuine.

Judges have also warned of the challenge of detecting increasingly sophisticated deepfake evidence admitted in court, and there are concerns that it could lead to wrongful convictions or acquittals.

I have personally heard from many legal practitioners, including judges and lawyers, who are struggling to address this issue. This is a topic often covered in legal seminars and judicial training sessions. Until there is a clear statement from the Court of Appeals on this issue, legal uncertainty will remain.

Risk assessment algorithm

Imagine an AI algorithm you don’t understand determining that you are a flight risk or at high risk of reoffending, and a judge or parole board using that assessment to deny your release from custody. This dystopian scenario is not fiction; it is reality in many parts of the world.

Automated algorithmic decision-making is already used in many countries for decisions about access to government benefits and housing, domestic violence risk assessment, and immigration, as well as in numerous criminal justice applications, from bail decisions and sentencing to prison classification and parole.

Those affected by these algorithms typically have no access to the underlying proprietary software, and even when they do, the models are often “black boxes” that are impossible to scrutinize.

To make matters worse, studies of some algorithms have found serious concerns about racial bias. The main reason is that AI models are trained on data from societies in which systemic racism is already embedded. The adage “garbage in, garbage out” is often used to explain this.
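The “garbage in, garbage out” dynamic can be illustrated with a toy sketch. Everything here is invented for demonstration: if one group’s conduct has historically been recorded as “reoffending” more often because of heavier policing, a model fitted to those records will assign that group a higher risk score even when true behaviour is identical.

```python
# Hypothetical data: (group, recorded_reoffence) pairs. Suppose group "B"
# was policed more heavily, so minor conduct was recorded as reoffending
# more often -- the underlying behaviour of the two groups is the same.
records = (
    [("A", 1)] * 20 + [("A", 0)] * 80 +   # group A: 20% recorded rate
    [("B", 1)] * 40 + [("B", 0)] * 60     # group B: 40% recorded rate
)

def fitted_risk(group, data):
    """Naive model: predicted risk = observed reoffence rate for the group."""
    outcomes = [r for g, r in data if g == group]
    return sum(outcomes) / len(outcomes)

print(fitted_risk("A", records))  # 0.2
print(fitted_risk("B", records))  # 0.4 -- double the score, purely from biased records
```

The model is “accurate” with respect to its training data, yet it reproduces and legitimizes the bias in how that data was collected, which is precisely the concern with real risk assessment tools.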

Studies of some algorithms have uncovered serious concerns about racial bias. (Shutterstock)

Promoting innovation while upholding justice

The need for legal and ethical AI in high-risk situations related to criminal justice is paramount. There is no doubt that we need new laws, regulations, and policies specifically designed to address these challenges.

The European Union’s AI Act prohibits certain uses of AI, including the untargeted scraping of facial images from the internet or CCTV footage, real-time remote biometric identification in public places (with limited exceptions), and assessing a person’s risk of reoffending based solely on profiling or personality traits.

Canadian legislation lags behind, and what has been proposed faces challenges. At the federal level, Bill C-27 (which includes the Artificial Intelligence and Data Act) has been stalled in committee for over a year and is unlikely to be adopted in this Parliament.

Ontario’s proposed AI bill, Bill 194, exempts police from the law’s requirements and contains no provisions to ensure respect for human rights.

Canada must vigorously enforce existing laws and policies that already apply to the use of AI by public authorities. The Canadian Charter of Rights and Freedoms contains a number of fundamental freedoms, legal rights and equal protections that are directly related to these issues. Similarly, privacy law, human rights law, consumer protection law, and tort law all set important standards for the use of AI.

The potential impact of AI on people in the criminal justice system is enormous. Without thoughtful and rigorous oversight, we risk undermining public trust in our justice system and perpetuating existing problems with real human consequences.

Fortunately, Canada is not yet as far along the path of widespread adoption of AI in criminal justice as other countries. We still have time to get ahead of it. Policymakers, courts, and civil society must act quickly to ensure that AI delivers justice, rather than undermining it.
