Tech

An update to the popular AI image generator makes it harder to copy specific artists' styles or create pornographic output, and some users are angry.


Users of the AI image generator Stable Diffusion are angry about an update to the software that "reduces" its ability to generate NSFW output and images in the style of specific artists.

Stability AI, the company that funds and distributes the software, announced Stable Diffusion Version 2 early this morning European time. The update re-engineers key components of the model and improves certain features such as upscaling (the ability to increase the resolution of an image) and in-painting (context-aware editing). But the changes also make it harder for Stable Diffusion to generate certain types of images that have attracted both controversy and criticism. These include nude and pornographic output, photorealistic pictures of celebrities, and images that mimic the artwork of specific artists.
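For a concrete sense of what those features involve, the sketch below shows roughly how the upscaling and in-painting pipelines can be driven from Python using the open-source diffusers library. This is an illustration under assumptions (the diffusers API, the publicly released Stable Diffusion 2 checkpoints, and placeholder file names), not code from Stability AI's announcement.

```python
# Minimal sketch, assuming the Hugging Face diffusers library and the public
# Stable Diffusion 2 checkpoints; the image file names are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline, StableDiffusionInpaintPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Upscaling: increase the resolution of a low-resolution image, guided by a prompt.
upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler"
).to(device)
low_res = Image.open("low_res.png").convert("RGB")
upscaled = upscaler(prompt="a photo of a lighthouse", image=low_res).images[0]
upscaled.save("upscaled.png")

# In-painting: regenerate only the masked region of an image, keeping the rest intact.
inpainter = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting"
).to(device)
image = Image.open("photo.png").convert("RGB")
mask = Image.open("mask.png").convert("RGB")  # white pixels = regions to repaint
result = inpainter(prompt="a wooden bench", image=image, mask_image=mask).images[0]
result.save("inpainted.png")
```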

“They screwed up the model”

“They screwed up the model,” commented one user on Stable Diffusion’s subreddit. “It was a nasty surprise,” said another on the software’s official Discord server.

Users note that prompting Version 2 of Stable Diffusion for images in the style of Greg Rutkowski — a digital artist whose name has become literal shorthand for producing high-quality images — no longer creates artwork that closely resembles his own. (Compare these two images, for example.) “What did you do with greg 😔,” commented one user on Discord.

The changes to Stable Diffusion are notable, as the software is hugely influential and helps set norms in the rapidly evolving AI landscape. Unlike rival models such as OpenAI’s DALL-E, Stable Diffusion is open source. This allows the community to quickly improve the tool and lets developers integrate it into their products free of charge. But it also means Stable Diffusion has fewer constraints on its use and, as a result, has drawn considerable criticism. In particular, many artists, like Rutkowski, are annoyed that Stable Diffusion and other image-generating models were trained on their artwork without their consent and can now reproduce their styles. Whether this kind of AI-powered copying is legal is something of an open question. Experts say training AI models on copyright-protected data is likely legal, but certain use cases could be challenged in court.
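To illustrate how little code that integration can take — and what prompting for a specific artist’s style looks like in practice — here is a minimal sketch assuming the open-source diffusers library and a released Stable Diffusion 2 checkpoint; the prompt text and output file name are made up for the example.

```python
# Minimal sketch, assuming the Hugging Face diffusers library and a publicly
# released Stable Diffusion 2 checkpoint; prompt and file name are illustrative.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2").to(device)

# The same prompt can be run against v1.x and v2 checkpoints to compare how
# strongly an artist's name steers the output.
prompt = "a castle on a cliff, fantasy landscape, in the style of Greg Rutkowski"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("castle.png")
```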

A grid of images showing side-by-side comparisons of AI-generated artwork created with different versions of Stable Diffusion.
A comparison of Stable Diffusion’s ability to produce images that resemble specific artists.
Image: lkewis via Reddit

Stable Diffusion users have speculated that the changes to the model were made by Stability AI to head off such potential legal challenges. However, when The Verge asked Emad Mostaque, the founder of Stability AI, in a private chat whether this was the case, Mostaque did not respond. Mostaque did confirm, though, that Stability AI had not removed artists’ images from the training data (as many users speculated). Instead, the model’s reduced ability to replicate artists is a result of changes made to the way the software encodes and retrieves data.

“There is no specific artist filtering here,” Mostaque told The Verge. (He also expanded on the technical underpinnings of these changes in a Discord post.)

What was removed from Stable Diffusion’s training data, however, was nude and pornographic imagery. AI image generators are already being used to create NSFW output, including both photorealistic and anime-style images. But these models can also be used to generate NSFW imagery that resembles specific individuals (so-called non-consensual pornography) as well as images of child abuse.

Discussing the Stable Diffusion Version 2 changes on the software’s official Discord, Mostaque notes that this latter use case is the reason for filtering out NSFW content. “There can be no children & nsfw in an open model,” says Mostaque (since the two types of images could be combined to create child sexual abuse material), “so remove children or remove nsfw.”

One user on Stable Diffusion’s subreddit said the removal of NSFW content was “censorship” and “goes against the spirit of the Open Source community’s philosophy.” The user wrote: “The choice of whether or not to implement NSFW content is up to the end user to decide, not [sic] in a restriction/censorship model.” Others, however, note that the open-source nature of Stable Diffusion means nude training data can easily be added back into third-party releases, and that the new software doesn’t affect earlier versions: “Don’t worry about the lack of artists/NSFW in V2.0, you’ll soon be able to generate your beloved nude celebrities to your liking, in any way you’ve been able to before.”

While the changes to Stable Diffusion Version 2 have annoyed some users, many others have praised its potential for deeper functionality, such as the software’s new ability to produce content that matches the depth of an existing image. Others say the changes make it harder to quickly generate high-quality images, but that the community will likely add this functionality back in future versions. As one Discord user summarized the changes: “2.0 is better at interpreting prompts and producing coherent snapshots in my experience so far. It won’t produce any rutkowski breasts, though.”

Mostaque himself has compared the new model to a pizza base that lets anyone add the ingredients (i.e. training data) of their choice. “A good model is accessible to everyone, and if you want more content, add content,” he said on Discord.

Mostaque also says future versions of Stable Diffusion will use training datasets that allow artists to opt in or opt out — a feature many artists have requested and one that could help mitigate some of the criticism. “We are trying to be super transparent as we improve the base models and incorporate community feedback,” Mostaque told The Verge.

A public demo of Stable Diffusion Version 2 can be accessed here (although due to high user demand, the model may be inaccessible or slow).
