AI is increasingly a feature of everyday life. But with models trained on often outdated data and a field still dominated by male researchers, its growing influence on society is also perpetuating sexist stereotypes.

  • webghost0101 · 13 hours ago

    The image generation portion of this is not the biggest long-term problem, because image generators are genuinely very dumb. Good training data can mitigate this a lot, but more importantly:

    Image generation does not reason the way LLMs can.

    Once the tech has matured to the point where fine-tuning of details is possible, I expect a true LLM reasoning component to be built in that always specifies to the image generation module exactly how the intended image is supposed to look, including gender and age when those weren't user-specified.
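
    A rough sketch of what I mean, assuming a hypothetical two-step pipeline; every name here (ImageSpec, reason_about_prompt, generate_image) is made up for illustration. The reasoning step rewrites the user's prompt into an explicit specification before it ever reaches the image model, so unspecified attributes like gender and age get set deliberately instead of inherited from biased defaults:

    ```python
    import random
    from dataclasses import dataclass

    @dataclass
    class ImageSpec:
        subject: str
        gender: str   # made explicit here, never left to the image model's defaults
        age: str

    def reason_about_prompt(user_prompt: str) -> ImageSpec:
        """Stand-in for the LLM reasoning component: fill in attributes the
        user left unspecified, rather than letting training-data bias decide."""
        text = user_prompt.lower()
        if "woman" in text:
            gender = "female"
        elif "man" in text:
            gender = "male"
        else:
            # deliberately vary the attribute instead of defaulting to a stereotype
            gender = random.choice(["female", "male"])
        age = random.choice(["young adult", "middle-aged", "elderly"])
        return ImageSpec(subject=user_prompt, gender=gender, age=age)

    def generate_image(spec: ImageSpec) -> str:
        """Placeholder for the image generation module; it just echoes the
        fully specified prompt it would be handed."""
        return f"{spec.subject}, {spec.gender}, {spec.age}"

    print(generate_image(reason_about_prompt("a doctor at work")))
    ```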

    This does not solve the problem of bias in LLMs, but I want to highlight that the LLM reasoning module is the single most important part of the AI that needs to be bias-aware; image generation will smooth itself out.

    This is somewhat of a reactionary rant about “researchers” and people who judge image gen and text gen under the same rules, and, as the worst offense, judge GPT models based on DALL-E outputs. However faulty and hallucinatory they all are, they are not the same thing.

    Thanks for reading.