Stability AI plans to let artists opt out of Stable Diffusion 3 image training

[Image: An AI-generated image of a person leaving a building, thus opting out of the vertical blinds convention. (credit: Ars Technica)]

On Wednesday, Stability AI announced that it would allow artists to remove their work from the training dataset for an upcoming Stable Diffusion 3.0 release. The move comes after an artist advocacy group called Spawning tweeted that Stability AI would honor opt-out requests collected on its Have I Been Trained website. The details of how the plan will be implemented remain unclear, however.

As a brief recap, Stable Diffusion, an AI image synthesis model, gained its ability to generate images by "learning" from a large dataset of images scraped from the Internet without consulting any rights holders for permission. This has upset some artists because Stable Diffusion can generate images that potentially rival the work of human artists in unlimited quantity. We've been following the ethical debate since Stable Diffusion's public launch in August 2022.

To understand how the Stable Diffusion 3 opt-out system is supposed to work, we created an account on Have I Been Trained and uploaded an image of the Atari Pong arcade flyer (which we do not own). After the site’s search engine found matches in the Large-scale Artificial Intelligence Open Network (LAION) image database, we right-clicked several thumbnails individually and selected “Opt-Out This Image” in a pop-up menu.
