Since its debut almost a year ago, ChatGPT has sparked a surge in the generative AI industry. But the boom has also drawn opposition: artists and record labels have filed lawsuits against AI companies, including OpenAI, over the unauthorized use of their work as training data. These models are trained on vast amounts of multimedia content, from written material to images, often without the creators’ consent.
One controversial practice is data scraping, in which training datasets for AI models are compiled from material found on the open web. Many artists who once accepted scraping when it powered search indexing are now speaking out against it, because it lets AI systems generate work that competes with their own.
Beyond legal action, artists can also fight back against AI through technology. MIT Technology Review recently showcased an upcoming open-source tool called Nightshade. Applied to an artist’s images before they are uploaded online, Nightshade subtly alters pixels in a way that is imperceptible to the human eye but disrupts AI models that attempt to train on the images.
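To make the mechanics concrete, here is a minimal Python sketch of the constraint side of such poisoning: the pixel change is capped at a tiny per-pixel budget so a human viewer sees no difference. The function name, the 4/255 budget, and the random perturbation are illustrative assumptions, not Nightshade’s actual method; the real perturbation is optimized against a model, not drawn at random.

```python
import numpy as np

def apply_poison(image: np.ndarray, perturbation: np.ndarray,
                 eps: float = 4 / 255) -> np.ndarray:
    """Apply a pixel perturbation under an imperceptibility budget.

    `image` is a float array scaled to [0, 1]. The L-infinity cap `eps`
    is a common imperceptibility constraint in adversarial ML; in a
    scheme like Nightshade's, the perturbation would be optimized to
    shift the image's learned features toward another concept.
    """
    delta = np.clip(perturbation, -eps, eps)   # keep each pixel's change tiny
    return np.clip(image + delta, 0.0, 1.0)    # stay a valid image

rng = np.random.default_rng(0)
img = rng.random((256, 256, 3)).astype(np.float32)       # stand-in image
poisoned = apply_poison(img, rng.normal(0.0, 1.0, img.shape))
assert np.abs(poisoned - img).max() <= 4 / 255 + 1e-6    # imperceptible change
```

The hard part, which this sketch stubs out with random noise, is choosing the perturbation so the model’s internal features for the image drift toward a different concept while the budget holds.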
Nightshade was developed by researchers at the University of Chicago under the guidance of computer science professor Ben Zhao. It extends the team’s existing tool, Glaze, which alters digital artwork to confuse AI models about its style. Nightshade goes a step further, corrupting what models learn about objects and scenery: poisoned images of dogs, for example, teach a model to associate the concept of a dog with cats. After training on just a small number of poisoned samples, the model began generating distorted dogs with unusual anatomy; with more poisoned samples, prompts for a dog yielded near-perfect images of cats.
The researchers tested Nightshade on Stable Diffusion, an open-source text-to-image generation model. Because such models map related concepts to nearby points in a shared embedding space, the poisoning bled into neighboring prompts: the manipulated model generated cats when given words like “husky,” “puppy,” and “wolf.”
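Why would “husky” and “wolf” be dragged along by poisoned “dog” images? Text-to-image models place semantically related prompts close together in embedding space, so corrupting one concept pulls in its neighbors. The hedged sketch below probes that geometry with a CLIP text encoder via Hugging Face transformers; the checkpoint and prompt template are illustrative choices (Stable Diffusion v1.x actually uses a larger CLIP variant, but the neighborhood structure is similar).

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

# A CLIP text encoder; distances in this space hint at which prompts a
# poisoned concept will drag along with it.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

words = ["dog", "husky", "puppy", "wolf", "cat", "airplane"]
inputs = tokenizer([f"a photo of a {w}" for w in words],
                   padding=True, return_tensors="pt")
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)   # unit-normalize for cosine

# Cosine similarity to "dog": husky/puppy/wolf land close, airplane far,
# which is why poisoning "dog" bleeds into those neighboring prompts.
for word, sim in zip(words, emb @ emb[0]):
    print(f"{word:>8}: {sim.item():.3f}")
```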
Defending against Nightshade’s data poisoning poses a real challenge for AI developers, because the altered pixels are intentionally difficult to detect, for humans and automated scraping tools alike. Any poisoned images already in a training dataset would have to be identified and removed, and the affected models potentially retrained.
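To see why cheap filtering falls short, consider a deliberately naive screening heuristic (an illustration of the problem, not something the researchers propose): score each scraped image by how much it changes under JPEG recompression, since subtle perturbations often live in high spatial frequencies that compression discards.

```python
import io

import numpy as np
from PIL import Image

def recompression_residual(img: Image.Image, quality: int = 75) -> float:
    """Mean squared pixel change after a JPEG round trip.

    Poisoned images *may* score higher than clean ones, but a robust
    attack is built to survive exactly this kind of cheap filter --
    the defenders' dilemma described above."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf).convert("RGB")
    a = np.asarray(img.convert("RGB"), dtype=np.float32)
    b = np.asarray(recompressed, dtype=np.float32)
    return float(np.mean((a - b) ** 2))
```

A real pipeline would have to threshold such a score across billions of scraped images, and every poisoned image that slips through can only be undone by retraining.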
While the researchers acknowledge the potential for malicious use, they hope Nightshade will shift the balance of power back toward artists and deter AI companies from disregarding copyright and intellectual property. They have submitted a paper on Nightshade for peer review at the upcoming USENIX computer security conference.
