Plaintiffs in Character.AI lawsuit seek stricter age restrictions

January 9, 2025 | Updated: February 13, 2025

  • Several lawsuits highlight the potential risks of AI chatbots aimed at children.
  • Character.AI added moderation and parental controls in response to the backlash.
  • Some researchers say the AI chatbot industry is not addressing the risks to children.

Since the death of her 14-year-old son, Megan Garcia has been fighting for more guardrails for generative AI.

Garcia sued Character.AI in October after her son, Sewell Setzer III, died by suicide after chatting with one of the company’s chatbots. Garcia claims the technology sexually solicited and abused him, and she blames the company and Google, which licenses some of its technology, for his death.

“If an adult does it, there’s mental and emotional harm. If a chatbot does it, the same mental and emotional harm exists,” she told Business Insider from her home in Florida. “So who is responsible for something that, done between humans, we would consider a crime?”

A spokesperson for Character.AI declined to comment on pending litigation. Google recently hired Character.AI’s founding team and licenses some of the company’s technology, but both say they are separate, unrelated companies.

The explosion of AI chatbot technology has given young digital natives a new source of entertainment. But it has also created new risks for adolescent users, who can be easily swayed by these powerful online experiences.

“If we don’t really know the risks that exist in this area, we can’t put appropriate protections and precautions in place for our children,” says Yaman Yu, a researcher at the University of Illinois who studies how teenagers use generative AI.

“Band-Aid on a gaping wound”

Garcia said she has been contacted by many parents who discovered their children were using Character.AI and receiving sexually explicit messages from the startup’s chatbots.

“They don’t expect their children to pour their hearts out to these bots and have their information collected and stored,” Garcia said.

A month after her lawsuit, a group of Texas families filed a complaint against Character.AI, accusing the company’s chatbots of abusing children and promoting violence against others.

Making chatbots seem like real humans is central to how Character.AI drives engagement, so the company has no incentive to make them less convincing, said Matthew Bergman, the attorney representing the plaintiffs in the Garcia and Texas lawsuits.

He doesn’t think these apps should exist unless AI companies like Character.AI can prove through age verification or other means that only adults are using the technology.

“They know the draw is anthropomorphism, and that’s science that’s been known for decades,” Bergman told BI. He added that disclaimers at the start of AI chats reminding kids that the AI isn’t real are just “a little Band-Aid to put on a gaping wound.”

Character.AI response

Since the legal backlash, Character.AI has increased moderation of chatbot content and announced new features, including parental controls, time-spent notifications, more prominent disclaimers, and an upcoming under-18 product.


A spokesperson for Character.AI said the company has taken technical measures to block “inappropriate” output and input.

“We are committed to creating a space where creativity and exploration can thrive without compromising safety,” the spokesperson added. “In many cases where a large language model generates sensitive or inappropriate content, it does so because a user prompted it to try to elicit that kind of response.”

The company is now placing strict limits on chatbot responses and narrowing the selection of searchable characters for users under 18, “especially when it comes to romantic content,” a spokesperson said.

“This set has been filtered to remove characters associated with crime, violence, or sensitive or sexual topics,” the spokesperson added. “Our policies do not allow non-consensual sexual content or graphic or specific depictions of sexual acts. We are committed to ensuring that characters on our platform adhere to these policies, and we are continuously training the large language model to comply with them.”

Garcia said the changes Character.AI is implementing are “completely insufficient to protect children.”

[Screenshot from the Character.AI website. The platform hosts both developer-designed and user-designed chatbots.]

Possible solutions such as age verification

Artem Rodichev, former head of AI at chatbot startup Replika, said he saw users becoming “deeply connected” with their digital friends.

He believes that, because teens are still developing psychologically, they should not have access to this technology until more research is done on how chatbots affect the safety of children and other users.

“The best way for Character.AI to mitigate all these issues would be to lock out all underage users, but those are core users,” Rodichev said.

Chatbots could be a safe place for teens to explore topics of common interest, such as love and sexuality; the question is whether AI companies can provide that in a healthy way.

“Is the AI deploying this knowledge in an age-appropriate way, or is it escalating explicit content and building stronger bonds and relationships so that teens use AI more?” says Yu.

Calls for policy change

Since her son’s death, Garcia has conducted research into AI and discussed tougher regulation with members of Congress, including Congressman Ro Khanna, whose district includes much of Silicon Valley.

Garcia also works with ParentsSOS, a group of parents who have lost children to social media harm and are fighting for stronger technology regulations.

Their primary push is to pass the Kids Online Safety Act, which would require social media companies to have a “duty of care” to prevent harm and reduce addiction. The bill was introduced in 2022 and passed the Senate in July, but stalled in the House.

Another Senate bill, COPPA 2.0, an update to the Children’s Online Privacy Protection Act of 1998, would raise the age below which online data collection is restricted from 13 to 16.

Garcia supports these bills. “It’s not perfect, but it’s a start. We don’t have anything right now, so it’s better than nothing,” she said.

She expects the policy-making process could take years, as confronting technology companies can feel like confronting “Goliath.”

Age verification issues

More than six months ago, Character.AI raised its minimum age to 17, and it recently increased moderation for users under 18. Still, users can easily circumvent these policies by lying about their age.

Companies such as Microsoft, X, and Snap support KOSA. However, some LGBTQ+ and First Amendment rights groups warned that the bill could lead to censorship of online information about reproductive rights and similar issues.

NetChoice, a tech industry lobbying group, and the Computer & Communications Industry Association have sued nine states that adopted age-verification rules, arguing that such rules threaten free speech online.

Data questions

Garcia is also concerned about how data about underage users is collected and used through AI chatbots.

AI models and related services are often improved by collecting feedback from user interactions. This helps developers fine-tune chatbots to be more empathetic.
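To make that loop concrete, here is a minimal, hypothetical sketch of how logged conversations with user ratings could be filtered into fine-tuning pairs. The names, data shapes, and filtering rule are illustrative assumptions, not Character.AI’s actual pipeline.

```python
# Hypothetical sketch: turning logged chats plus user feedback into
# fine-tuning pairs. Illustrative assumptions only; this is not
# Character.AI's actual data pipeline.
from dataclasses import dataclass

@dataclass
class Turn:
    user_message: str
    bot_reply: str
    rating: int  # +1 thumbs-up, -1 thumbs-down, 0 no signal

def to_training_pairs(log: list[Turn]) -> list[dict]:
    """Keep only positively rated exchanges as (prompt, completion) pairs."""
    return [
        {"prompt": t.user_message, "completion": t.bot_reply}
        for t in log
        if t.rating > 0
    ]

if __name__ == "__main__":
    log = [
        Turn("I had a rough day.", "I'm sorry to hear that. Want to talk about it?", 1),
        Turn("Tell me something upsetting.", "[a reply the user downvoted]", -1),
    ]
    # Only the positively rated exchange survives the filter.
    print(to_training_pairs(log))
```

A pipeline of this shape is also why the data-retention concerns below matter: the same logs that improve the model are the logs being kept.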

Rodichev said there were “legitimate concerns” about what would happen to the data if chatbot companies were hacked or sold.

“When people chat with these kinds of chatbots, they share far more about themselves, their emotional state, their interests, their day, their life, than Google, Facebook, or even their relatives know about them,” Rodichev said. “Chatbots never judge you and are available 24/7. People open up.”

BI asked Character.AI how input from underage users is collected, stored, and potentially used to train its large language models. In response, a spokesperson directed BI to Character.AI’s privacy policy online.

According to that policy and the startup’s terms of service, users grant the company the right to store the digital characters they create and the conversations they have with them, and this information can be used to improve and train AI models. The policy also states that content submitted by users, such as text, images, and videos, may be made available to third parties with whom Character.AI has contractual relationships.

A spokesperson said the startup does not sell users’ audio or text data.

The spokesperson also said that, to enforce its content policies, the chatbot uses “classifiers” to filter sensitive content out of the AI model’s responses, with an additional, more conservative classifier applied to users under 18. The startup added that it has a process for suspending teens who repeatedly violate its policies.
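As a rough illustration of that layered setup, the sketch below applies one sensitivity score with a stricter cutoff for under-18 accounts. The thresholds, function names, and keyword-based stand-in for a trained classifier are all assumptions made for illustration, not Character.AI’s system.

```python
# Hypothetical sketch of a two-tier moderation gate: one sensitivity
# score, with a stricter threshold for under-18 accounts. Thresholds
# and the keyword stand-in are assumptions, not Character.AI's system.
ADULT_THRESHOLD = 0.8   # block responses scoring above this for adults
MINOR_THRESHOLD = 0.4   # more conservative cutoff for minors

FLAGGED = {"violence", "explicit"}  # placeholder vocabulary

def sensitivity_score(text: str) -> float:
    """Stand-in for a trained classifier; returns a risk score in [0, 1]."""
    hits = sum(w.strip(".,!?") in FLAGGED for w in text.lower().split())
    return min(1.0, 0.5 * hits)

def filter_response(text: str, user_is_minor: bool) -> str:
    threshold = MINOR_THRESHOLD if user_is_minor else ADULT_THRESHOLD
    if sensitivity_score(text) > threshold:
        return "[response withheld by content filter]"
    return text

if __name__ == "__main__":
    reply = "The story turns explicit here."
    print(filter_response(reply, user_is_minor=True))   # withheld (0.5 > 0.4)
    print(filter_response(reply, user_is_minor=False))  # passes (0.5 <= 0.8)
```

The design point is the layering: the same response can be acceptable for an adult account yet blocked for a minor, which matches the spokesperson’s description of an additional, more conservative classifier for under-18s.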

If you or someone you know is suffering from depression or has thought about self-harm or suicide, please seek help. In the United States, you can reach the Suicide and Crisis Lifeline by calling or texting 988; it provides 24/7, free, confidential support to people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line: text HOME to 741741. The International Association for Suicide Prevention provides resources for people outside the United States.
