AI art protection tools still put creators at risk, researchers say

By versatileai | April 20, 2019 | 4 Mins Read

So say a team of researchers who have revealed significant weaknesses in the two art protection tools that artists most often use to safeguard their work.

According to their creators, Glaze and Nightshade were developed to protect human creatives from the invasive use of generative artificial intelligence.

The tools are popular with digital artists who want to stop AI models (such as the AI art generator Stable Diffusion) from copying their unique styles without consent. Together, Glaze and Nightshade have been downloaded almost nine million times.

However, according to an international group of researchers, these tools have significant weaknesses that mean they cannot reliably stop AI models from training on artists' work.

Both tools work by adding subtle, invisible distortions (known as poisoning perturbations) to digital images. These "poisons" are designed to confuse AI models during training. Glaze takes a passive approach, hindering an AI model's ability to extract key stylistic features. Nightshade goes further, actively corrupting the learning process by causing an AI model to associate an artist's style with unrelated concepts.
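
To make the mechanics concrete, here is a minimal sketch of what perturbation-based protection looks like in code. It is a hypothetical stand-in, not Glaze's or Nightshade's actual method: the real tools optimize the perturbation against a target model's feature space, whereas this sketch simply applies a small bounded random delta (the `protect_image` function and its `budget` parameter are illustrative assumptions).

```python
import numpy as np

def protect_image(image: np.ndarray, budget: float = 8.0, seed: int = 0) -> np.ndarray:
    """Add a small, hard-to-see perturbation to an 8-bit RGB image.

    Hypothetical stand-in: Glaze/Nightshade optimize the perturbation
    against a target model; here we only draw a random delta bounded
    by `budget` (in pixel units) to illustrate the general idea.
    """
    rng = np.random.default_rng(seed)
    delta = rng.uniform(-budget, budget, size=image.shape)
    poisoned = image.astype(np.float64) + delta
    return np.clip(poisoned, 0, 255).astype(np.uint8)

# Example: a flat 64x64 RGB image stays visually similar after "protection".
art = np.full((64, 64, 3), 128, dtype=np.uint8)
protected = protect_image(art)
print(np.abs(protected.astype(int) - art.astype(int)).max())  # at most 8
```

The point of the bound is that the change stays imperceptible to a human viewer while, in the real tools, it is shaped to mislead a model's training process.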

However, the researchers have created a method, called LightShed, that can bypass these protections. LightShed can detect, reverse-engineer and remove the distortions, effectively stripping away the poison and making the images usable again for training generative AI models.

LightShed was developed by researchers at the University of Cambridge together with colleagues at the Technical University of Darmstadt and the University of Texas at San Antonio. By publishing the work, which will be presented at the USENIX Security Symposium, a major security conference, in August, the researchers hope to alert creatives to significant shortcomings in art protection tools.

LightShed works through a three-stage process. First, it identifies whether an image has been altered by a known poisoning technique.

In a second, reverse-engineering step, it learns the characteristics of the perturbations from publicly available poisoned examples. Finally, it removes the poison, restoring the image to its original, unprotected form.
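
As a structural illustration only, the three stages might be organized in code as below. This is a minimal sketch assuming hypothetical `detector` and `estimator` callables standing in for the models LightShed trains on publicly available poisoned examples; it is not the authors' implementation.

```python
import numpy as np
from typing import Callable

# Hypothetical stand-ins: in LightShed these are learned models;
# here they are plain callables so the pipeline shape is visible.
Detector = Callable[[np.ndarray], bool]          # stage 1: is the image poisoned?
Estimator = Callable[[np.ndarray], np.ndarray]   # stage 2: predict the perturbation

def strip_protection(image: np.ndarray,
                     detector: Detector,
                     estimator: Estimator) -> np.ndarray:
    """Three-stage removal sketch: detect, reverse-engineer, subtract."""
    if not detector(image):                       # stage 1: detection
        return image                              # leave clean images untouched
    delta = estimator(image)                      # stage 2: reverse-engineer the poison
    restored = image.astype(np.float64) - delta   # stage 3: remove it
    return np.clip(restored, 0, 255).astype(np.uint8)

# Example with trivial stand-ins: flag every image, assume a uniform +4 delta.
img = np.full((8, 8, 3), 100, dtype=np.uint8)
cleaned = strip_protection(img, lambda x: True, lambda x: np.full(x.shape, 4.0))
print(cleaned[0, 0])  # [96 96 96]
```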

In an experimental evaluation, LightShed detected Nightshade-protected images with 99.98% accuracy and effectively removed the embedded protections from those images.

"This shows that even when artists use tools like Nightshade, their work remains at risk of being used to train AI models without their consent," said first author Hanna Foerster of Cambridge's Department of Computer Science and Technology, who carried out the work during an internship at TU Darmstadt.

Although LightShed reveals serious vulnerabilities in art protection tools, the researchers stress that it was developed not as an attack on them, but as an urgent call to action to produce better, more adaptive defenses.

"We see this as an opportunity to co-evolve defenses," said co-author Professor Ahmad-Reza Sadeghi of the Technical University of Darmstadt. "Our goal is to support the artistic community by working with other scientists in the field to develop tools that can withstand advanced adversaries."

The landscape of AI and digital creativity is evolving rapidly. In March of this year, OpenAI rolled out a ChatGPT image model that can instantly produce artwork in the style of Studio Ghibli, the Japanese animation studio.

This led to a wide range of viral memes, and to an equally widespread discussion about image copyright, in which legal analysts pointed out that Studio Ghibli would be limited in how it could respond, since copyright law protects specific expressions rather than artistic "styles."

Following these discussions, OpenAI announced prompt safeguards to block some user requests to generate images in the style of living artists.

However, as copyright and trademark infringement cases currently being heard in London's High Court highlight, issues around generative AI and copyright are far from settled.

Global photography agency Getty Images alleges that London-based AI company Stability AI trained its image generation model on Getty's huge archive of copyrighted photographs. Stability AI is contesting Getty's claims, arguing that the case represents an "obvious threat" to the generative AI industry.

And earlier this month, Disney and Universal announced that they were suing AI company Midjourney over its image generator, which the two companies called a "bottomless pit of plagiarism."

"What we hope to do with our work is highlight the urgent need for a roadmap toward more resilient, artist-centered protection strategies," said Foerster. "We must let creatives know that they are still at risk, and collaborate with others to develop better art protection tools in the future."

Hanna Foerster is a member of Downing College, Cambridge.

Reference:
Hanna Foerster et al. "LightShed: Defeating Perturbation-Based Image Copyright Protection." Paper to be presented at the 34th USENIX Security Symposium. https://www.usenix.org/conference/usenixsecurity25/presentation/foerster
