So say a team of researchers who have revealed significant weaknesses in the two art protection tools that artists most often use to protect their work.
According to their creators, Glaze and Nightshade were developed to protect human creatives against the invasive use of generative artificial intelligence.
The tools are popular with digital artists who want to stop artificial intelligence models (such as the AI art generator Stable Diffusion) from copying their unique styles without consent. Together, Glaze and Nightshade have been downloaded almost nine million times.
However, according to an international group of researchers, these tools have significant weaknesses that mean they cannot reliably stop AI models from training on artists’ work.
Both tools add subtle, invisible distortions (known as poisoning perturbations) to digital images. These “poisons” are designed to confuse AI models during training. Glaze takes a passive approach, hindering an AI model’s ability to extract key stylistic features. Nightshade goes further, actively corrupting the learning process by causing an AI model to associate an artist’s style with unrelated concepts.
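To make the idea of a poisoning perturbation concrete, here is a minimal, purely illustrative Python sketch of adding a small, bounded distortion to an image. The `add_perturbation` function and the random noise standing in for a carefully optimised poison are assumptions for illustration only; they are not Glaze’s or Nightshade’s actual methods, which rely on sophisticated optimisation.

```python
# Conceptual sketch only: a hypothetical additive perturbation, not the
# actual Glaze or Nightshade algorithm.
import numpy as np

def add_perturbation(image: np.ndarray, perturbation: np.ndarray,
                     epsilon: float = 0.03) -> np.ndarray:
    """Add a small, bounded perturbation to an image with values in [0, 1].

    `epsilon` caps the per-pixel change so the edit stays (near) invisible
    to humans while still shifting the features an AI model learns from.
    """
    # Clamp the perturbation so no pixel moves by more than epsilon.
    bounded = np.clip(perturbation, -epsilon, epsilon)
    # Keep the result a valid image.
    return np.clip(image + bounded, 0.0, 1.0)

# Example: perturb a random stand-in "image" with random noise playing
# the role of an optimised poison.
rng = np.random.default_rng(0)
img = rng.random((256, 256, 3))
poison = rng.normal(scale=0.05, size=img.shape)
protected = add_perturbation(img, poison)
assert np.abs(protected - img).max() <= 0.03 + 1e-9  # visually negligible change
```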
However, the researchers have created a method, called LightShed, that can bypass these protections. LightShed can detect, reverse-engineer and remove the distortions, effectively stripping away the poison and making the images usable again for training generative AI models.
LightShed was developed by researchers at the University of Cambridge along with colleagues at the Technical University of Darmstadt and the University of Texas at San Antonio. By publishing their work, which will be presented at the USENIX Security Symposium, a major security conference, in August, the researchers hope to alert creatives to the major issues with art protection tools.
LightShed works via a three-stage process. First, it identifies whether an image has been altered by known poisoning techniques. Second, in a reverse-engineering step, it learns the characteristics of the perturbations using publicly available poisoned examples. Finally, it removes the poison to restore the image to its original, unprotected form.
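As a rough illustration of that three-stage flow, here is a hypothetical Python skeleton. The `PerturbationRemover` class, its method names, and the simple averaging and correlation heuristics are assumptions made for illustration; they do not reflect LightShed’s published technique.

```python
# Hypothetical skeleton of a detect / reverse-engineer / remove pipeline,
# mirroring the three stages described above. Not LightShed's actual code.
import numpy as np

class PerturbationRemover:
    def __init__(self, poisoned_examples: list, clean_examples: list):
        # Stage 2 relies on publicly available poisoned/clean example pairs
        # to learn what the perturbations look like.
        self.poisoned_examples = poisoned_examples
        self.clean_examples = clean_examples
        self.signature = None

    def learn_signature(self) -> None:
        """Stage 2: reverse-engineer the perturbation from example pairs
        by averaging the differences between poisoned and clean images."""
        diffs = [p - c for p, c in zip(self.poisoned_examples,
                                       self.clean_examples)]
        self.signature = np.mean(diffs, axis=0)

    def detect(self, image: np.ndarray) -> bool:
        """Stage 1: decide whether the image carries a known poison.
        (A trained classifier would sit here; a threshold test stands in.)"""
        return self.signature is not None and self._score(image) > 0.5

    def remove(self, image: np.ndarray) -> np.ndarray:
        """Stage 3: strip the learned perturbation and restore a usable image."""
        return np.clip(image - self.signature, 0.0, 1.0)

    def _score(self, image: np.ndarray) -> float:
        # Crude stand-in: normalised correlation between the image and the
        # learned perturbation signature.
        sig = self.signature / (np.linalg.norm(self.signature) + 1e-9)
        return float(np.abs(np.sum(image * sig)) /
                     (np.linalg.norm(image) + 1e-9))
```

In this sketch, learning an average “signature” from example pairs stands in for the reverse-engineering step, and subtracting it stands in for poison removal; the real system must cope with perturbations that vary from image to image.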
In experimental evaluations, LightShed detected Nightshade-protected images with 99.98% accuracy and effectively removed the embedded protections from those images.
“This shows that even when artists use tools like Nightshade, their work is still at risk of being used to train AI models without their consent,” said first author Hanna Foerster of Cambridge’s Department of Computer Science and Technology, who carried out the work during an internship at TU Darmstadt.
While LightShed reveals serious vulnerabilities in art protection tools, the researchers emphasise that it was developed not as an attack on them, but as an urgent call to action to produce better, more adaptive defences.
“We see this as an opportunity to co-evolve defences,” said co-author Professor Ahmad-Reza Sadeghi of the Technical University of Darmstadt. “Our goal is to collaborate with other scientists in this field and support the artistic community in developing tools that can withstand advanced adversaries.”
The landscape of AI and digital creativity is evolving rapidly. In March of this year, OpenAI rolled out a ChatGPT image model that can instantly create artwork in the style of Studio Ghibli, the Japanese animation studio.
This sparked a wide range of viral memes, and an equally widespread discussion about image copyright. Legal analysts pointed out that Studio Ghibli would be limited in how it could respond, since copyright law protects specific expressions rather than an artistic “style” itself.
Following these discussions, OpenAI announced prompt safeguards to block some user requests to generate images in the style of living artists.
However, as highlighted by copyright and trademark infringement cases currently being heard in London’s High Court, issues relating to generative AI and copyright are ongoing.
Global photography agency Getty Images alleges that London-based AI company Stability AI trained an image generation model on its huge archive of copyrighted photographs. Stability AI is fighting Getty’s claims, arguing that the case represents an “overt threat” to the generative AI industry.
And earlier this month, Disney and Universal announced that they were suing AI company Midjourney over its image generator, which the two companies called “a bottomless pit of plagiarism.”
“What we want to do with our work is highlight the urgent need for a roadmap towards more resilient, artist-centred protection strategies,” said Foerster. “We must let creatives know that they are still at risk, and collaborate with others to develop better art protection tools in the future.”
Hanna Foerster is a member of Downing College, Cambridge.
Reference:
Hanna Foerster et al. “LightShed: Defeating Perturbation-based Image Copyright Protections.” Paper presented at the 34th USENIX Security Symposium. https://www.usenix.org/conference/usenixsecurity25/presentation/foerster