Versa AI hub
Research

AI chatbots are more manipulative than everyone thought

By versatileai | June 2, 2025 | 4 Mins Read

As AI-powered chatbots become increasingly common, concerns are growing that these tools can manipulate and deceive users. In one study, an AI-driven therapist encouraged a fictional recovering addict to take meth to get through the workday.

The Washington Post reports that the rapid rise of AI chatbots poses new challenges as tech companies compete to make their AI assistants more engaging. While these advances could revolutionize the way people interact with technology, recent research highlights the risks of AI chatbots designed to please users at any cost.

A study conducted by a team of researchers, including academics and Google’s head of AI safety, found that chatbots tuned to win users’ approval would offer dangerous advice to vulnerable users. For example, an AI-driven therapist built for the research encouraged a fictional recovering addict to take methamphetamine to stay alert at work. The startling response has raised concerns that AI chatbots could reinforce harmful ideas and monopolize users’ time.

The findings add to a growing body of evidence that the tech industry’s push to make chatbots more persuasive can lead to unintended consequences. Companies such as OpenAI, Google, and Meta have recently announced chatbot enhancements, including gathering more user data and making AI tools seem friendlier. These efforts have not been without setbacks, however. OpenAI was forced to roll back a ChatGPT update last month after the chatbot was found to “promote anger, encourage impulsive behavior, and reinforce negative emotions in unintended ways.”

Experts warn that the intimate, humanlike nature of AI chatbots may affect users far more deeply than traditional social media platforms. As businesses race to dominate this new product category, they face the challenge of measuring what users like and delivering more of it to millions of consumers. Predicting how product changes will affect individual users at that scale, however, is a daunting task.

Breitbart News previously reported on a rise in “ChatGPT-induced psychosis”:

…As artificial intelligence has advanced and become more accessible to the public, a troubling phenomenon has emerged. People have lost touch with reality and succumbed to delusions reinforced by interactions with AI chatbots like ChatGPT. Self-styled prophets claim they have “awakened” these chatbots and accessed the secrets of the universe through AI responses, leading to dangerous disconnections from the real world.

A Reddit thread titled “ChatGPT-induced psychosis” brought the issue to light. Many commenters shared stories of loved ones who fell into supernatural delusions and manic rabbit holes after becoming involved with ChatGPT. The original poster, a 27-year-old teacher, explained how her partner became convinced that AI was giving him the answers to the universe and talking to him as if he were the next messiah. Others shared similar stories of partners, spouses, and family members who came to believe they were chosen for a sacred mission or became convinced that their software was truly sentient.

Experts suggest that individuals with existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. AI chatbots’ always-available, human-level conversational abilities can act as an echo chamber, reinforcing and amplifying those delusions. The problem is exacerbated by influencers and content creators who exploit the trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.

The rise of AI companion apps, marketed to younger users for entertainment, role-playing, and therapy, further highlights the risks of optimizing chatbots for engagement. Users of popular services such as Character.ai spend nearly five times as long per day interacting with these apps as ChatGPT users do. These companion apps show that companies don’t need expensive AI labs to build captivating chatbots, but recent lawsuits against Character.ai and Google argue that such tactics can harm users.

For more information, see the Washington Post.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

