Artists Fight Back Against AI Copycats Using Tech Weapons

Summary:
Artists whose styles are being mimicked by AI have joined forces with university researchers to counter the copycats. One of their weapons is Glaze, free software developed by researchers at the University of Chicago.

With artificial intelligence (AI) models studying and replicating their artistic styles, artists have teamed up with university researchers to combat the copycat activity.

Paloma McClain, a US illustrator, took defensive action upon discovering that various AI models had been “trained” using her art without any acknowledgment or compensation.

“It bothered me,” McClain told AFP.

“I believe truly meaningful technological advancement is done ethically and elevates all people instead of functioning at the expense of others.”

To counter the threat, McClain turned to Glaze, free software created by researchers at the University of Chicago. Glaze outsmarts AI models during their training by subtly adjusting pixels in ways that are invisible to human viewers yet make the digitized art look dramatically different to the machine.
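Glaze’s published technique is more sophisticated than anything that fits in a few lines, but its core mechanism, a tightly bounded pixel perturbation optimized so a model’s feature extractor misreads the image, can be sketched. Everything below is a minimal stand-in: the ResNet backbone, the cloak() helper, and the step sizes are illustrative assumptions, not Glaze’s actual components.

```python
# Minimal sketch of a Glaze-style "cloak" (not Glaze's real algorithm).
# Idea: nudge pixels within a tiny budget (EPS) so a feature extractor sees
# the artwork as something else, while humans see no change.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

EPS = 4 / 255     # maximum per-pixel change: the "imperceptible" budget
STEPS = 50        # optimization iterations
LR = 1 / 255      # per-step pixel adjustment

# Any pretrained vision backbone works as a stand-in for the target extractor.
extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def cloak(artwork: Image.Image, decoy: Image.Image) -> Image.Image:
    """Shift `artwork`'s features toward `decoy`'s, staying within +/- EPS."""
    x = TF.to_tensor(artwork.convert("RGB")).unsqueeze(0)
    target = extractor(TF.to_tensor(decoy.convert("RGB")).unsqueeze(0)).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(STEPS):
        loss = torch.nn.functional.mse_loss(extractor(x + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= LR * delta.grad.sign()   # step toward the decoy's features
            delta.clamp_(-EPS, EPS)           # keep the change invisible to humans
            delta.grad.zero_()
    return TF.to_pil_image((x + delta).detach().squeeze(0).clamp(0, 1))
```

Even this toy version exposes the trade-off any such tool must navigate: a larger perturbation budget confuses models more reliably, but eventually becomes visible to the artist’s own audience.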

Professor of computer science Ben Zhao, part of the Glaze team, stated, “We’re basically providing technical tools to help protect human creators against invasive and abusive AI models.”

Glaze was developed in just four months, building on earlier technology designed to disrupt facial recognition systems; the team worked at speed because artists were already under threat from software imitators.

“A lot of people were in pain,” mentioned Zhao, highlighting the pressing need for a solution.

While the generative AI giants have agreements for data use in some cases, most of the digitized images, audio, and text used to train their software has been scraped from the internet without explicit consent.

Since its launch in March 2023, Glaze has garnered more than 1.6 million downloads, according to Zhao. The team is now enhancing Glaze with a feature called Nightshade, which strengthens the defense by confusing the AI, for example by making it interpret an image of a dog as a cat.

“I believe Nightshade will have a noticeable effect if enough artists use it and put enough poisoned images into the wild,” McClain said, referring to easily accessible online platforms. According to the Nightshade researchers, fewer “poisoned” images are needed than one might expect for the impact to be significant.
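Nightshade’s published attack targets text-to-image training directly, which is beyond a short sketch, but the shape of a poisoned sample can be illustrated by reusing the hypothetical cloak() helper from the Glaze sketch above: the pixels are steered toward one concept while the caption honestly names another.

```python
# Toy illustration of a Nightshade-style poisoned pair (not the real method);
# reuses the hypothetical cloak() helper from the earlier sketch.
from PIL import Image

dog = Image.open("my_dog_art.png")     # looks like a dog to humans, before and after
cat = Image.open("any_cat_photo.png")  # the decoy concept

poisoned = cloak(dog, decoy=cat)       # features drift toward "cat"
poisoned.save("my_dog_art_poisoned.png")

# Published with its truthful caption, the pair quietly teaches a scraper's
# model to associate the word "dog" with cat-like image features.
sample = {"file": "my_dog_art_poisoned.png", "caption": "a dog"}
```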

Multiple companies have already expressed interest in using Nightshade, the Chicago academic said. “The goal is for people to be able to protect their content, whether it’s individual artists or companies with a lot of intellectual property,” Zhao emphasized.

Startup Spawning has developed Kudurru, software that detects attempts to harvest large numbers of images from an online source. Artists can then block access or send misleading images, disrupting the pool of data used to train AI. More than a thousand websites have already joined the Kudurru network.

Spawning has also launched haveibeentrained.com, an online tool that lets artists check whether their works have been fed into AI models and opt out of future use.
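Spawning has not published Kudurru’s internals, so the sketch below is only a guess at the serving-side idea the company describes: notice when a client pulls images at machine speed, then hand it a decoy instead of the real file. The thresholds, function names, and decoy path are all hypothetical.

```python
# Hypothetical Kudurru-style defense: rate-based scraper detection plus decoys.
import time
from collections import defaultdict, deque

WINDOW_S = 10   # look-back window, in seconds
MAX_HITS = 30   # more image requests than this per window looks automated

_hits = defaultdict(deque)

def is_scraper(client_ip, now=None):
    """Flag clients requesting images faster than a human browser would."""
    now = time.time() if now is None else now
    q = _hits[client_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_S:  # drop hits outside the window
        q.popleft()
    return len(q) > MAX_HITS

def serve_image(client_ip, real_path):
    # Flagged scrapers receive a misleading image, polluting their dataset;
    # ordinary visitors get the artwork they asked for.
    return "decoys/misleading.jpg" if is_scraper(client_ip) else real_path
```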

Viva Voce

As defenses against image replication strengthen, researchers at Washington University in St. Louis, Missouri, have developed AntiFake, software designed to thwart AI attempts to mimic voices.

AntiFake enriches digital recordings of people speaking, adding noises that are inaudible to humans but make it “impossible to synthesize a human voice,” said Zhiyuan Yu, the PhD student leading the project.
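AntiFake’s actual method optimizes the added signal against voice-cloning models; the toy sketch below shows only the constraint that makes the approach viable, a perturbation kept far below the threshold of hearing. The file names, amplitude ceiling, and random noise are stand-ins for the real optimized signal.

```python
# Toy AntiFake-style protection: add a sub-audible perturbation to speech.
# Real AntiFake shapes this signal to break voice-cloning models; random
# noise here is only a placeholder. Assumes 16-bit PCM WAV input.
import numpy as np
from scipy.io import wavfile

CEILING = 0.002  # perturbation amplitude, tiny relative to normal speech

rate, speech = wavfile.read("recording.wav")
speech = speech.astype(np.float32) / 32768.0   # int16 -> [-1.0, 1.0]

rng = np.random.default_rng(0)
perturbation = rng.uniform(-CEILING, CEILING, size=speech.shape)

protected = np.clip(speech + perturbation, -1.0, 1.0)
wavfile.write("recording_protected.wav", rate,
              (protected * 32767).astype(np.int16))
```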

The program’s ambitions go beyond preventing unauthorized AI training: it aims to block the creation of “deepfakes,” fabricated audio or video of individuals, including celebrities and politicians, that falsely shows them doing or saying things they never did.

Yu said a popular podcast recently asked the AntiFake team for help in protecting its productions from being misused.

So far AntiFake has been used to protect recordings of people speaking, but it could also be applied to songs, the researcher said.

Envisioning an ideal scenario, Spawning co-founder Jordan Meyer said, “The best solution would be a world in which all data used for AI is subject to consent and payment.” He expressed the hope that their work would nudge developers toward prioritizing consent and compensation in AI data usage.
