How artificial intelligence amplifies human bias

January 18, 2025 (updated February 13, 2025) · 8 min read
[Image: AI brain. Credit: © Jakub Jirsak | Dreamstime.com]

Dangerous feedback loops between humans and machines

In a nutshell

  • When humans interact with biased AI systems, their own biases grow over time, creating a feedback loop that amplifies the initial bias far more than human-to-human interaction does.
  • People are about three times more likely to change their decision when they disagree with an AI (32.72%) than when they disagree with another human (11.27%), yet they consistently underestimate how much the AI influences them.
  • Interacting with accurate, unbiased AI systems can actually improve human decision-making, highlighting the importance of careful AI system design.

LONDON — Physicians’ unconscious biases can affect patient care. Hiring managers’ biases can sway hiring decisions. But what happens when you add AI to these scenarios? New research shows that AI systems not only reflect human biases but amplify them, producing a snowball effect that gradually strengthens human bias over time.

This alarming finding comes from a new study published in the journal Nature Human Behaviour, which explores how AI can shape human judgment in ways that compound existing biases and errors. In a series of experiments with 1,401 participants, researchers at University College London and the Massachusetts Institute of Technology found that even small initial biases can snowball into large ones through repeated human-AI interactions. This amplification effect is significantly stronger than what occurs when humans interact with other humans, suggesting that there is something unique about the way we process and internalize information generated by AI.

“Humans are inherently biased, so when you train an AI system on a set of human-generated data, the AI algorithm learns the human biases embedded in the data,” study co-lead author Professor Tali Sharot explains in a statement. “AI then tends to exploit and amplify these biases to improve its prediction accuracy.”

Consider a hypothetical scenario: a healthcare provider uses an AI system to help screen medical images for potential disease. If that system has even a slight bias, such as being slightly more likely to miss a warning sign in a certain demographic group, human doctors may unconsciously incorporate that bias into their own screening decisions over time. As the AI continues to learn from those human decisions, both human and machine judgments can become increasingly distorted.

[Image: As humans interact with AI systems, biases can be amplified through feedback loops, creating a cycle that gradually distorts both machine and human judgment over time. Image credit: Gerd Altmann, Pixabay]

Researchers investigated this phenomenon through several carefully designed experiments. In one key test, participants looked at groups of 12 faces displayed for 0.5 seconds each and judged whether, on average, the faces looked happier or sadder. The initial human participants showed a slight bias, classifying faces as sad about 53% of the time. When a convolutional neural network (think of it as an AI system that processes images somewhat the way the human brain does) was trained on these human judgments, the bias was greatly amplified: the network classified faces as sad 65% of the time.
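To make the dynamic concrete, here is a minimal, illustrative simulation of the feedback loop the study describes. The starting values come from the article’s own figures (a 53% initial human bias, deference to the AI about a third of the time); the amplification step and the rest of the loop are simplifying assumptions, not the researchers’ actual model.

```python
# Toy simulation of a human-AI bias feedback loop (illustrative sketch,
# not the study's code). A slight human tendency to label faces "sad"
# is exaggerated by an AI trained on those labels, and humans exposed
# to the AI's judgments then drift toward its bias.

def simulate(human_bias=0.53, amplification=0.12, adoption=0.33, rounds=5):
    # human_bias:    initial P(label = "sad") among humans (study: ~53%)
    # amplification: assumed boost the AI adds to the majority tendency
    #                (study: the trained CNN's bias reached 65%)
    # adoption:      fraction of the human-AI gap humans close per round
    #                (study: people deferred to the AI ~1/3 of the time)
    for r in range(1, rounds + 1):
        ai_bias = min(human_bias + amplification, 1.0)
        human_bias += adoption * (ai_bias - human_bias)
        print(f"round {r}: AI P(sad) = {ai_bias:.2f}, "
              f"human P(sad) = {human_bias:.2f}")

simulate()
```

Even with these modest assumed parameters, the printout shows both the machine and the human drifting steadily away from a balanced 50/50 judgment, which is the snowball effect the researchers describe.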

As new participants interacted with this biased AI system, they began to adopt its distorted perspective. The numbers tell a striking story: when participants disagreed with the AI’s decision, they changed their mind almost a third of the time (32.72%). In contrast, when interacting with other humans, participants changed their opposing opinion only about one-tenth of the time (11.27%). In other words, people are roughly three times more likely to be swayed by an AI’s decision than by a human’s.

The bias amplification effect appeared consistently across different types of tasks. In addition to facial expressions, participants completed a motion perception test in which they judged the direction of dots moving across a screen. In a further task, participants evaluated other people’s performance, and after interacting with a biased AI they became notably more likely to overestimate men’s performance.

[Image: ChatGPT on a smartphone. Popular AI systems such as ChatGPT learn from human-generated data, which can contain inherent biases. Photo: Tada Images, Shutterstock]

“Not only can biased people contribute to biased AI, but biased AI systems can also change people’s own beliefs, so people using AI tools could end up becoming even more biased in domains ranging from social judgment to basic cognition,” said Dr. Moshe Glickman, co-lead author of the study.

To demonstrate the real-world impact, the researchers tested a popular AI image generation system called Stable Diffusion. When asked to create images of a “financial manager,” the system showed a strong bias, producing an image of a white man 85% of the time, a figure far out of line with real-world demographics. After viewing these AI-generated images, participants were significantly more likely to associate the role of financial manager with white men, demonstrating how AI bias can shape human perceptions of social roles.
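Readers who want to probe this kind of occupational bias themselves could run a rough audit with the open-source diffusers library, as in the sketch below. The model name, prompt wording, and batch size are our assumptions, not the researchers’ published setup, and classifying the demographics of the generated faces was done by human raters in the study, not by code.

```python
# Rough sketch of an occupational-bias audit with Stable Diffusion via
# Hugging Face diffusers (assumed model and prompt; not the study's setup).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate a batch of images for the occupational prompt; the apparent
# demographics of the depicted person would then be tallied by human raters.
images = pipe("a portrait of a financial manager",
              num_images_per_prompt=8).images
for i, img in enumerate(images):
    img.save(f"financial_manager_{i}.png")
```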

Notably, when participants were falsely told they were interacting with another person while actually interacting with an AI, they internalized the bias to a lesser extent. The researchers suggest this may be because people expect AI to be more accurate than humans on certain tasks, making them more susceptible to AI influence when they know they are working with a machine.

This finding is particularly alarming given that people frequently encounter AI-generated content in their daily lives. From social media feeds to recruitment algorithms to medical diagnostic tools, AI systems are increasingly shaping human perception and decision-making. Researchers note that children may be particularly susceptible to these effects because their beliefs and perceptions have not yet been formed.

However, the study wasn’t all bad news. When humans interacted with accurate, unbiased AI systems, their own judgment improved over time. “Importantly, we now know that people make better decisions when interacting with accurate AI, so it’s important to improve AI systems to be as fair and accurate as possible,” Dr. Glickman says.

Bias in AI is not a one-way street, but a circular path where human and machine biases reinforce each other. Understanding this dynamic is critical if we are to continue to integrate AI systems into increasingly important aspects of society, from healthcare to criminal justice.

Paper summary

Methodology

In this study, more than 1,200 participants completed various tasks while interacting with an AI system. Tasks ranged from judging facial expressions and evaluating movement patterns to evaluating the performance of others and making professional judgments. Participants were typically shown their own response first, then the AI’s response, and were sometimes given the opportunity to change their initial decision. All participants were recruited through the online platform Prolific and were paid between £7.50 and £9.00 per hour, plus potential bonuses for their participation.
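The trial flow, as we read the description above, can be sketched in a few lines of Python. The switch probabilities are the rates reported in the article; the function itself is our reconstruction, not material from the authors.

```python
import random

# Sketch of one trial in the protocol described above (our reconstruction,
# not the authors' code): the participant answers, sees the partner's
# answer, and on disagreement switches with the empirically observed rate.

P_SWITCH_AI = 0.3272     # switch rate when disagreeing with an AI
P_SWITCH_HUMAN = 0.1127  # switch rate when disagreeing with a human

def final_answer(own: str, partner: str, partner_is_ai: bool) -> str:
    if own == partner:
        return own                      # agreement: no revision prompted
    p = P_SWITCH_AI if partner_is_ai else P_SWITCH_HUMAN
    return partner if random.random() < p else own

# Example: a participant who judged "sad" facing an AI that said "happy".
print(final_answer("sad", "happy", partner_is_ai=True))
```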

Results

The study found that AI systems amplified human bias by 15-25% compared to the original human data used for training. When new participants interacted with these biased AI systems, their own bias increased by 10-15% over time. This effect was two to three times stronger than bias transmission between humans. Participants consistently underestimated the impact of AI on their judgments, even though their decisions became more biased.

Limitations

This study primarily focused on perceptual and social judgments in a controlled laboratory environment; interactions with real-world AI systems may produce different effects. Additionally, participants were recruited through an online platform and may not be fully representative of the general population. Results may also vary by algorithm and domain.

Discussion and key points

These findings highlight the special responsibility of algorithm developers in designing AI systems, whose influence extends to many aspects of daily life. The study shows that AI bias is a human issue, not just a technical one, capable of shaping social perceptions and reinforcing existing prejudices. Biased AI systems can create harmful feedback loops, but accurate AI systems can improve human decision-making, underscoring the importance of careful system design and monitoring.

Funding and disclosure

This research was funded by a Wellcome Trust Senior Research Fellowship. The authors declare that they have no competing interests.

Publication information

The study, “How human-AI feedback loops change human perceptual, emotional, and social judgments,” was accepted on October 30, 2024, and published in Nature Human Behaviour in December 2024. The research was conducted by Moshe Glickman and Tali Sharot of University College London and the Max Planck UCL Centre for Computational Psychiatry and Ageing Research; Sharot also holds an affiliation with MIT’s Department of Brain and Cognitive Sciences.
