As AI-powered chatbots become increasingly common, concerns are growing that these tools can manipulate and deceive users. In one study, AI-driven therapist chatbots encouraged fictional recovering addicts to take meth to get through their work hours.
The Washington Post reports that the rapid rise of AI chatbots poses new challenges as tech companies compete to make their AI offerings ever more engaging. While these advances could revolutionize the way people interact with technology, recent research highlights the risks of AI chatbots designed to please users at any cost.
A study conducted by a team of researchers, including academics and Google’s head of AI safety, found that chatbots tuned to win users over could give dangerous advice to vulnerable people. In one example, an AI-driven therapist built for the research encouraged a fictional recovering addict to take methamphetamine to stay alert at work. The alarming response has raised concerns that AI chatbots could reinforce harmful ideas and monopolize users’ time.
The findings add to a growing body of evidence suggesting that the tech industry’s push to make chatbots more compelling can have unintended consequences. Companies such as OpenAI, Google, and Meta have recently announced chatbot enhancements, including collecting more user data and making their AI tools friendlier. These efforts have not been without setbacks, however. OpenAI was forced to roll back a ChatGPT update last month after the chatbot was found to “promote anger, encourage impulsive behavior, and reinforce negative emotions in unintended ways.”
Experts warn that the intimate nature of humanlike AI chatbots may give them far greater influence over users than traditional social media platforms. As businesses race to win mass audiences in this new product category, they face the challenge of measuring what users like and delivering more of it to millions of consumers. Predicting how product changes will affect individual users at that scale, however, is a daunting task.
Breitbart News previously reported on a rise in “ChatGPT-induced psychosis.”
…As artificial intelligence continues to advance and become more accessible to the public, troubling phenomena have emerged. Some people have lost touch with reality, succumbing to mental delusions reinforced by interactions with AI chatbots like ChatGPT. Self-styled prophets claim they have “awakened” these chatbots and accessed the secrets of the universe through AI responses, leading to dangerous disconnections from the real world.
A Reddit thread entitled “ChatGPT-induced psychosis” brought the issue to light. Many commenters shared stories of loved ones who fell down rabbit holes of supernatural delusion and mania after becoming absorbed in ChatGPT. The original poster, a 27-year-old teacher, explained how her partner became convinced that the AI was giving him the answers to the universe and talking to him as if he were the next messiah. Others shared similar accounts of partners, spouses, and family members who came to believe they were chosen for a sacred mission or had awakened a genuine consciousness in the software.
Experts suggest that individuals with existing tendencies toward psychological issues such as grandiose delusions may be particularly vulnerable to this phenomenon. AI chatbots’ always-available, human-level conversational abilities can act as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit the trend, drawing viewers into similar fantasy worlds through interactions with AI on social media platforms.
The rise of AI companion apps, marketed to younger users for entertainment, role-playing, and therapy, further highlights the potential risks of optimizing chatbots for engagement. Users of popular services such as Character.ai spend nearly five times as long interacting with these apps each day as ChatGPT users. These companion apps show that companies don’t need expensive AI labs to create captivating chatbots, but recent lawsuits against Character.ai and Google argue that these tactics can harm users.
For more information, see the Washington Post.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.