AI Legislation

AI-generated child sexual abuse images targeted by new laws

February 1, 2025 (updated February 13, 2025) | 4 min read

The government has announced four new laws to tackle the threat of child sexual abuse images generated by artificial intelligence (AI).

The Home Office says the UK will be the first country in the world to make it illegal to possess, create or distribute AI tools designed to produce child sexual abuse material (CSAM), with offenders facing up to five years in prison.

Possessing AI “paedophile manuals”, which teach people how to use AI to sexually abuse children, will also be made illegal, with offenders facing up to three years in prison.

“What we are seeing is that AI is now putting online child abuse on steroids,” Home Secretary Yvette Cooper told the BBC’s Sunday with Laura Kuenssberg programme.

Cooper said AI was “industrializing” child sexual abuse, and that the government’s measures may have to “go further”.

Other laws being introduced include making it an offence to run websites where child sexual abuse content can be shared or advice given on how to groom children. This will be punishable by up to 10 years in prison.

Because CSAM is often filmed overseas, border officials will also be given powers to require individuals suspected of posing a sexual risk to children to unlock their digital devices for inspection when they attempt to enter the UK. Depending on the severity of any images found, this will be punishable by up to three years in prison.

Artificially generated CSAM includes images that are partly or entirely computer-generated. Software can “nudify” real images and replace one child’s face with another’s, producing realistic material.

In some cases, children’s real voices are also used, meaning innocent survivors of abuse are re-victimized.

Fake images are also being used to blackmail children and force victims into further abuse.

The National Crime Agency (NCA) said it makes around 800 arrests a month relating to threats posed to children online, and that an estimated 840,000 adults nationwide pose a threat to children, both online and offline, equivalent to 1.6% of the adult population.

Cooper said: “We have perpetrators who are using AI to intimidate teenagers and children, distorting images and using those images to draw young people into further abuse.”

She continued: “This is an area where technology does not stand still, and our response cannot stand still if we are to keep children safe.”

However, some experts believe the government could have gone further.

Professor Clare McGlynn, a specialist in pornography, sexual violence and online abuse, said the changes were “welcome” but that there were “significant gaps”.

She said the government should ban “nudify” apps and tackle the “normalization of sexual activity with young girls” on mainstream porn sites.

These videos involve adult actors, she said, but are staged in children’s bedrooms, with toys, pigtails, braces and other markers of childhood. “This material can be found with the most obvious search terms, and it legitimizes and normalizes child sexual abuse. Unlike in many other countries, this material is legal in the UK.”

The Internet Watch Foundation (IWF) warns that more AI-generated sexual abuse images of children are being produced and are becoming increasingly common on the open web.

The charity’s latest data shows that reports of AI-generated CSAM rose by 380% in 2024, up from 51 reports in 2023. Each report can contain thousands of images.

Research last year found that 3,512 AI-generated child sexual abuse and exploitation images were discovered on a single dark web site over a one-month period. Compared with the same month the previous year, the number of the most severe images (Category A) had risen by 10%.

Experts say AI-generated CSAM often looks highly realistic, making it difficult to tell real images from fake ones.

Derek Ray-Hill, the interim chief executive of the IWF, said such imagery fuels and encourages abuse and undermines the safety of real children. There is certainly more to be done to prevent AI technology from being exploited, he said, but the charity welcomed the announcement and believes the measures are important.

Lynn Perry, chief executive of the children’s charity Barnardo’s, welcomed the government’s action to tackle AI-generated CSAM, which she said “normalizes the abuse of children, putting more of them at risk, both on and offline”.

“It is important that legislation keeps pace with technological progress to prevent these terrible crimes,” she added.

“Tech companies must make sure their platforms are safe for children. They need to take action to introduce stronger safeguards, and Ofcom must ensure the Online Safety Act is implemented effectively and robustly.”

The newly announced measures will be introduced as part of the Crime and Policing Bill when it comes before Parliament in the coming weeks.
