The government has announced that AI tools designed to generate child sexual abuse material (CSAM) will be made illegal under what it describes as "world-leading" laws.
The crackdown also targets those who possess AI "paedophile manuals", which teach people how to use AI to sexually abuse children.
It comes after warnings that AI-generated child abuse imagery is "disturbingly realistic" and is being produced at a "chilling rate".
Possessing AI-generated CSAM is already illegal, but the new laws target its production.
They include:
• Making it illegal to possess, create or distribute AI tools designed to generate CSAM, punishable by up to five years in prison.
• Making it illegal to possess AI "paedophile manuals" that teach people how to use AI to sexually abuse children, punishable by up to three years in prison.
Safeguarding minister Jess Phillips said Britain was the "first country in the world" to legislate against AI abuse images.
She said it was a global problem that would need global solutions, and that the government was leading the way in cracking down on this terrible crime.
Speaking to Trevor Phillips on Sunday morning, Home Secretary Yvette Cooper described AI-generated abuse as a "really disturbing phenomenon" that was putting the problem "on steroids".
"It is escalating and accelerating abuse," she said.
The Home Office said AI tools are being used to generate abuse images in various ways, such as "nudeifying" real images of children or stitching children's faces onto existing child sexual abuse images.
The NSPCC said its Childline service had heard from children distressed at finding AI-generated images of themselves.
In one call, a 15-year-old girl said edited images of her had been created and that she was afraid they would be sent to her parents.
Perpetrators are also blackmailing children with fake images and coercing victims into further abuse, including live-streaming.
The Home Office said perpetrators can use AI tools to disguise their identity and to groom and abuse children more effectively online.
Phillips said: "It is a huge battle. This is the start of it, not the end."
The government has also announced a specific offence for predators who run websites designed to share child abuse content.
Running such sites is already covered by possession and distribution offences, but the new offence would allow longer sentences and would not require proof that moderators knew what was on the site.
UK Border Force will also be given new powers to require individuals suspected of posing a sexual risk to children to unlock their digital devices for inspection.
All four measures will be introduced to parliament as part of the Crime and Policing Bill.
Home Secretary Yvette Cooper said the measures were needed to respond to "the latest threat".
It comes after the Internet Watch Foundation (IWF) warned of a rise in AI-generated sexual abuse imagery.
Over a 30-day period in 2024, IWF analysts identified 3,512 AI-generated CSAM images on a single dark web site.
Compared with its 2023 analysis, the proportion of Category A images (the most severe category) rose by 10%.
The IWF warns that some AI images are so realistic they are difficult to distinguish from images of real abuse.
Derek Ray-Hill, the charity's interim chief executive, said it had "been calling for the law for a long time" and was "pleased that the government has adopted our recommendations", adding that the measures would have a tangible impact online.