
Artists will soon be able to sabotage AI image generators

In a world increasingly dominated by generative artificial intelligence, concept artists and illustrators face a growing challenge: defending their work from companies that use it, without consent or compensation, to train their text-to-image AI models.

However, a new tool called “Nightshade” promises to restore power to artists, who will be able to safeguard their digital works by cunningly sabotaging AI image generators such as DALL-E, Midjourney and Stable Diffusion.

Nightshade poisons image generator training data

MIT Technology Review got an exclusive preview of research by a team led by University of Chicago professor Ben Zhao, submitted for peer review at the Usenix security conference. The hope is that Nightshade can act as a powerful deterrent against disregard for artists’ copyright and intellectual property rights.

The tool works by introducing minimal alterations to the pixels of a digital artwork, effectively “poisoning” the image so that it becomes unusable for AI training. Although these changes are imperceptible to the human eye, they cause AI algorithms to misidentify the image entirely.

For example, consider a digital artwork depicting a cat, unmistakably recognizable as a feline to both humans and AI. Once Nightshade is applied, humans will perceive the same image, but AI systems will misinterpret it as a dog. When applied at a larger scale, Nightshade’s impact on AI becomes even more striking.
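To give a sense of the idea, the sketch below adds a tiny, bounded offset to every pixel of an image. This is purely illustrative and is not Nightshade’s actual algorithm (which computes perturbations optimized against specific models); it only demonstrates the core concept of a change small enough to be invisible to a human viewer. The function name and the `epsilon` bound are assumptions for this example.

```python
import numpy as np

def add_imperceptible_perturbation(image, epsilon=2, seed=0):
    """Illustrative sketch only, not Nightshade's method: add a small
    random per-pixel offset, bounded by `epsilon` on a 0-255 scale, so
    the change is invisible to humans."""
    rng = np.random.default_rng(seed)
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    # Clip so values stay in the valid 0-255 pixel range.
    poisoned = np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)
    return poisoned

# A small grayscale "image": every pixel stays within epsilon of the original.
img = np.full((4, 4), 128, dtype=np.uint8)
poisoned = add_imperceptible_perturbation(img)
assert np.abs(poisoned.astype(int) - img.astype(int)).max() <= 2
```

A real attack like Nightshade chooses each perturbation adversarially so a model learns the wrong association, rather than adding random noise as above.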

A powerful incentive for ethical use of text-to-image models

By flooding the algorithms with these subtly poisoned images, a request for an image of a cat, for example, could lead to the generation of a dog. Although a single poisoned image cannot significantly influence an AI image generator, the cumulative effect becomes evident when thousands of altered images are introduced into the training data.

AI image generators regularly collect samples from the Internet to refine their algorithms. Therefore, if numerous artists uploaded their work with Nightshade applied, this could render these tools unusable. The more poisoned images that can be inserted into the model, the more damage the technique will cause.

AI companies such as OpenAI, Meta, Google and Stability AI, which are already facing a series of lawsuits filed by artists alleging copyright infringement, would face the challenge of identifying and removing every single poisoned image from their training datasets. That burden could push them to reconsider using artists’ work without explicit consent, creating a powerful incentive for ethical AI practices.

How to defend works of art from artificial intelligence

Nightshade is not the first tool developed by Professor Ben Zhao’s team to damage, or at least disrupt, artificial intelligence. The team previously introduced “Glaze”, a tool designed to mask an artist’s personal style, based on a methodology similar to Nightshade’s.

In practice, it modifies the pixels of the image in a way that is imperceptible to human perception but sufficient to cause machine learning models to misinterpret the represented content. The team plans to integrate Nightshade into Glaze and make it open source, encouraging other users to refine the software and develop even more powerful versions for protecting artists’ intellectual property.

