Stability AI crashed and burned so fast it’s not even funny. Their talent is abandoning ship, and they’ve even been caught scraping images from Midjourney, which means they probably don’t have a proper dataset.
The model should be capable of much better than this, but they spent a long time censoring the model before release and this is what we got. It straight up forgot most human anatomy.
There’s a reason that artists in training often practice by drawing nudes, even if they don’t intend for that to be the main subject of their art. If you don’t know what’s going on under the clothing you’re going to have a hard time drawing humans in general.
they have plenty of porn created using the AI lol
This article is about the newest model, SD3 medium (2B). Previous models such as SD2 and SDXL were also mostly unable to generate nudity, though they managed beach or summer images. The earliest, SD1.5, is the most capable of nudity, especially with the copious fine tunes focused on that. SD3 though completely freaks out as soon as it starts generating skin. It’s straight up weird. Only winter images with full head-to-toe clothing produce humans at all. It’s currently a landscape generator. Even realistic animals are hard for it. Whatever it successfully generates looks quite nice though. Pretty background wallpapers.
wtf, they’re selling something worse than the last one?
This sucks. I was really holding out hope that they might chart a better path forward than most of the alternatives.
Honestly I think that it’s models like these that output things that could be called art.
Whenever a model is actually good, it just creates pretty pictures that would have otherwise been painted by a human, whereas this actually creates something unique and novel. Just like real art almost always elicits some kind of emotion, so too do the products of models like these, and I think that’s much more interesting than having another generic AI postcard.
Not that I’m happy to see how much SD has fallen though.
It would be great if the model could produce this beautifully disfigured stuff when the user asked it to. But if it can’t follow the user’s prompts reasonably, then it’s pretty useless as a tool.
I can see an argument for artists choosing to use chaotic processes they can’t really control.
Setting up a canvas and paints and brushes in a particular arrangement in the woods, letting migratory animals and weather put their mark on the work, and then seeing what results. That could be art.
And if that can be art, then I guess chaotic, unpredictable AI models can output something that can be art, too.
I agree, bring on the weird. I don’t need accurate, I want hallucinated novelty. This is like people who treat LLMs like a dictionary or search engine and complain about inaccuracy. They don’t understand this is to be expected of a synthesized answer.
Hallucination is an essential part of the value these things bring.
whereas this actually creates something unique and novel.
🤦
Say the phrase, go on, stochastic parrots!
AI would do it better than me.
true, true, chatgpt
Almost like the issues with repressing sex and nudity are harming the development of intelligence. Just like real life.
I was going to say this: their new architecture seems to be better than previous ones, they have more compute, and, I’m guessing, more data. The only explanation for this downgrade is that they tried to ban porn. I hadn’t read anything about this online before now anyway; I’m just learning about it.
I see this growing sentiment. Are we on the cusp of a re-examination of this social wound?
Wow, the pile of limbs in the living room pic genuinely creeped me out.
“Biblically accurate models”
Ah, yes. Man made horrors beyond my comprehension.
? They are all bad at first for the average person that uses surface level tools, but SD3 won’t have the community to tune it because it is proprietary junk and irrelevant now.
Would you mind sharing some good alternatives that aren’t proprietary junk?
I believe pixart sigma is more open. The community hasn’t rallied around it though.
Edit: Fuck yes, pixart is AGPL!
Now that everyone’s no longer waiting in anticipation of SD3 perhaps we’ll start seeing diversification of attention to other models.
In my experience these open models are where the real work is being done. The large supervised models like DALL-E etc. are more flashy, but there’s a lot more going on behind the scenes than the model itself, so it’s hard to gauge the real progress being made.
There are a lot of fine-tunes of earlier Stable Diffusion models (SD1.5 and SDXL) that are better than this, and will continue to see refinement for some time yet to come. Those were released with more permissive licenses so they’ve seen a lot of community work built on them.
CommonCanvas, the model trained on a CC-only dataset
SD3 is still planned for an open release later, though?
No. I don’t think so. The lead researcher left because of it.
I’m not seeing anything about the lead researcher leaving because of that, just that they’re leaving, with expenses far exceeding revenue right now being a suspected reason.
SD3 won’t have the community to tune it because it is proprietary junk and irrelevant now.
What changed between SDXL and SD3? I’m out of the loop on this one.
They realized that no matter how much they charged as a one-time fee, the people who got the one-time-fee enterprise license would eventually cost them more in compute than the fee. So they switched it to 6,000 image generations, which wasn’t enough for most of the community that made fixes and trained LoRAs, so none of the “cool” community stuff will work with SD3.
Have they considered a community-sponsored “group buy” of compute, to just train the model as far as the community will bear? SDXL was so great, surely 100k people could put $5 a month toward making monthly open source checkpoint improvements happen? I don’t see any other financing model working out if the output is open source. It simply can’t be financed after publication. And it won’t get the community support if it’s behind a paywall.
Maybe I’m out of the loop, but I was under the impression people paying for the enterprise tier were largely using the model on their own hardware, and that the removal of this tier was largely just rent seeking by SD against people improving on their model and selling access to a better version.
Did SD really sell unlimited access to their compute/image generator for a fixed price? If so that’s just so dumb it’s hard to believe. I only started paying attention to the company recently though, so maybe I’m missing something.
this is gonna lead to some weird fetishes
Such results may not be very useful for most people, but that’s dope in an accidentally artistic way.
Basically, any time a user prompt homes in on a concept that isn’t represented well in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for.
I’m so happy that the correct terminology is finally starting to take off in replacing ‘hallucinate.’
Also from reddit, with zero irony:
Kudos to Stability AI for releasing ANOTHER excellent model for FREE.
💀
The model does have a lot of advantages over SDXL with the right prompting, but it seems to fall apart in prompts with more complex anatomy. Hopefully the community can fix it up once we have working trainers.
“Laying on grass” is complex?
Gotta think most SFW pictures of people are portraits. Poses are more advanced for sure.
holy yikes! call Cronenberg!
AI has already peaked. It’s all downhill from here.
Does your crystal ball tell you that? How would anyone know how new technology develops?