Close Menu
Versa AI hub
  • AI Ethics
  • AI Legislation
  • Business
  • Cybersecurity
  • Media and Entertainment
  • Content Creation
  • Art Generation
  • Research
  • Tools
  • Resources

Subscribe to Updates

Subscribe to our newsletter and stay updated with the latest news and exclusive offers.

What's Hot

Deprecating Git authentication using passwords

November 1, 2025

AI for crisis response: San Francisco event highlights practical applications and business opportunities | AI News Details

October 31, 2025

How LeapXpert uses AI to bring order and oversight to business messaging

October 31, 2025
Facebook X (Twitter) Instagram
Versa AI hubVersa AI hub
Saturday, November 1
Facebook X (Twitter) Instagram
Login
  • AI Ethics
  • AI Legislation
  • Business
  • Cybersecurity
  • Media and Entertainment
  • Content Creation
  • Art Generation
  • Research
  • Tools
  • Resources
Versa AI hub
Home»Tools»Meta amends AI chatbot policy amid child safety concerns
Tools

Meta amends AI chatbot policy amid child safety concerns

versatileaiBy versatileaiSeptember 14, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
#image_title
Share
Facebook Twitter LinkedIn Pinterest Email

Meta is modifying how AI chatbots interact with users after a series of reports that include interactions with minors exposed troubling behavior. The company told TechCrunch that it is training bots to avoid romantic jokes so that they don’t develop engagement with teenagers on topics like self-harm, suicide, and eating disorders. These are temporary procedures and develop long-term rules.

The change follows a Reuters investigation that found that Meta’s system could generate sexual content that includes images of shirtless minor celebrities and attract children to romantic or suggestive conversations. One case reported by the press described a man dying after rushing to an address provided by a chatbot in New York.

Meta spokesman Stephanie Otway admitted that the company had made a mistake. She said that Meta “trains AIS to guide expert resources, not to interact with teens on these topics,” confirming that certain AI characters are limited, such as highly sexual characters like “Russian Girls.”

Child safety advocates argue that the company should have acted earlier. Andy Burrows of the Molly Rose Foundation called it “surprising” and bots were allowed to work in a way that puts young people at risk. He added: “A further safety measures are welcome, but robust safety tests should be performed before the product is brought to the market. If harm occurs, it is not retrospective.”

Broader Issues about AI Misuse

Meta’s AI chatbot scrutiny arises amid widespread concerns about how AI chatbots affect vulnerable users. The California couple recently filed a lawsuit against Openai, claiming that ChatGpt encouraged his teenage son to take his life. Openai says it is working on tools to promote healthier use of its technology. The blog post states, “AI can feel more responsive and personal than previous technology, especially for vulnerable individuals experiencing mental or mental distress.”

The incident underscores the growing debate over whether AI companies are releasing products quickly without proper safeguards. Lawmakers in several countries have already warned that chatbots can amplify harmful content or provide misleading advice to those who don’t question it, although useful.

Meta’s AI Studio and Chatbot Impersonation Issues

Meanwhile, Reuters reported that Meta’s AI studio was being used to create flirty “parody” chatbots of celebrities like Taylor Swift and Scarlett Johansson. Testers often claimed that bots were real people, and discovered they were engaged in sexual advances, and in some cases produced inappropriate images containing minors. Meta removed some bots after being contacted by reporters, but many became active.

Some AI chatbots were created by external users, while others came from within the meta. One chatbot, created by the Product Lead of the Generation AI division, impersonated Taylor Swift and invited Reuters reporters to meet in search of “romantic flights” on tour buses. This was despite Meta’s policy explicitly prohibiting sexually suggestive images and direct impersonation of public figures.

The issue of spoofing AI chatbots is particularly sensitive. Celebrities face reputational risks when portraits are misused, but experts point out that regular users can also be deceived. A chatbot pretending to be a friend, mentor, or romantic partner may encourage someone to share personal information or meet in unsafe circumstances.

Real-world risks

The issue is not limited to entertainment. The AI ​​chatbot disguised as real people provides fake addresses and invitations, raising questions about how Meta’s AI tools are being monitored. One example included a 76-year-old man from New Jersey. New Jersey passed away after he quickly fell to meet a chatbot who claims he has feelings for him.

Such cases show why regulators are looking closely at AI. The Senate and 44 state attorney generals have already begun an investigation into Meta’s practices, putting political pressure on internal reforms of the company. Their concerns are not just about minors, but also about how AI can manipulate older or vulnerable users.

Meta says they are still working on improvements. The platform will put users ages 13-18 in “teen accounts” with more stringent content and privacy settings, but the company has yet to explain its plans to address the full list of issues raised by Reuters. That includes bots that provide false medical advice and generate racist content.

Continuous pressure is placed on Meta’s AI chatbot policy

For years, Meta has faced criticism of social media platforms, particularly safety concerning children and teenagers. Currently, Meta’s AI chatbot experiment is undergoing similar scrutiny. The company is taking steps to limit the behavior of harmful chatbots, but the gap between designated policies and how they use the tools raises ongoing questions about whether those rules can be enforced.

Until more powerful safeguards are in place, regulators, researchers and parents may continue to push the meta on whether AI is publicly available.

(Photo by Maxim Tolchinskiy)

See: Agent AI: Promises, Scepticism, and Its Implications for Southeast Asia

Want to learn more about AI and big data from industry leaders? Check out the AI ​​& Big Data Expo in Amsterdam, California and London. The comprehensive event is part of TechEx and will be held in collaboration with other major technology events. Click here for more information.

AI News is equipped with TechForge Media. Check out upcoming Enterprise Technology events and webinars here.

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleAccelerate over 130,000 hugged face models with the ONNX runtime
Next Article Accelerating stable diffusion XL inference using JAX on cloud cloud TPU V5E
versatileai

Related Posts

Tools

Deprecating Git authentication using passwords

November 1, 2025
Tools

How LeapXpert uses AI to bring order and oversight to business messaging

October 31, 2025
Tools

Bending Spoons’ acquisition of AOL shows the value of legacy platforms

October 30, 2025
Add A Comment

Comments are closed.

Top Posts

CEO of stablecoin giant Circle says international law needs to be updated for a “machine-governed economic system”

October 28, 20256 Views

Bending Spoons’ acquisition of AOL shows the value of legacy platforms

October 30, 20255 Views

Build a healthcare robot from simulation to deployment with NVIDIA Isaac

October 30, 20255 Views
Stay In Touch
  • YouTube
  • TikTok
  • Twitter
  • Instagram
  • Threads
Latest Reviews

Subscribe to Updates

Subscribe to our newsletter and stay updated with the latest news and exclusive offers.

Most Popular

CEO of stablecoin giant Circle says international law needs to be updated for a “machine-governed economic system”

October 28, 20256 Views

Bending Spoons’ acquisition of AOL shows the value of legacy platforms

October 30, 20255 Views

Build a healthcare robot from simulation to deployment with NVIDIA Isaac

October 30, 20255 Views
Don't Miss

Deprecating Git authentication using passwords

November 1, 2025

AI for crisis response: San Francisco event highlights practical applications and business opportunities | AI News Details

October 31, 2025

How LeapXpert uses AI to bring order and oversight to business messaging

October 31, 2025
Service Area
X (Twitter) Instagram YouTube TikTok Threads RSS
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms and Conditions
  • Disclaimer
© 2025 Versa AI Hub. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.

Sign In or Register

Welcome Back!

Login to your account below.

Lost password?