• webghost0101 · 10 months ago

While I agree there is a big issue with badly biased and sexist training data, this entire article is about the Lensa app, which (I assume) uses the default Stable Diffusion model trained on LAION-5B.

Intentionally creating sexualized pictures is banned in their guidelines. And yet no one thought of writing a good negative prompt that suppresses any kind of nudity or eroticism? That still doesn’t properly fix the training data, but at least people wouldn’t unwittingly be shown porn made from their own images. A rough sketch of what I mean is below.
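
With the diffusers library, a negative prompt is just one extra argument on the pipeline call. This is only a minimal sketch; the model ID, prompt text, and negative prompt wording are my own illustrative choices, not whatever Lensa actually runs:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint (illustrative model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The negative_prompt steers generation away from the listed concepts.
image = pipe(
    prompt="portrait of a person, digital painting",
    negative_prompt="nude, naked, nsfw, erotic, suggestive",
    num_inference_steps=30,
).images[0]
image.save("portrait.png")
```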

Also, anyone can curate a dataset and fine-tune a Stable Diffusion model, so why is Lensa relying on the default model, which is more like a quick and dirty tech demo? They had all the tools to do this right but decided not to use even the easy, lazy ones.
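
And even without training anything themselves, swapping the default weights for a curated fine-tune would be a one-line change in the same kind of sketch (the local path here is hypothetical):

```python
from diffusers import StableDiffusionPipeline

# Point the pipeline at a curated fine-tuned checkpoint instead of the
# default LAION-trained weights (path is purely illustrative).
pipe = StableDiffusionPipeline.from_pretrained("./my-curated-finetune")
```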