• The Snark Urge@lemmy.world

    Although nonstandard English and pidgins often show the same nuance and complexity as standard English, they commonly attract negative stereotypes. One has to wonder whether LLMs, generated from written output stolen en masse, say as much about us as they do about their creators.

    • RobotToaster@mander.xyz

      Pretty much. It was trained on human writing, and then people are surprised when it has human biases.

      • Hamartiogonic

        An LLM needs to evaluate and revise its preliminary output before actually sending it. In the context of a human mind, that's called thinking before opening your mouth.
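        The evaluate-then-revise idea can be sketched as a simple draft/critique/revise loop. This is a toy illustration, not any real model API: the `generate` function here is a hypothetical stub standing in for whatever LLM backend a system would actually call.

        ```python
        # Minimal sketch of a draft -> critique -> revise loop.
        # `generate` is a hypothetical stub standing in for a real LLM call.

        def generate(prompt: str) -> str:
            # Stub model: a real system would call an LLM here.
            if prompt.startswith("Revise"):
                return "Dialects can attract negative stereotypes from some listeners."
            if prompt.startswith("Critique"):
                return "The draft overgeneralizes; hedge the claim."
            return "Dialects are always judged negatively."

        def answer(question: str, rounds: int = 1) -> str:
            # First pass: produce a preliminary draft.
            draft = generate(question)
            # Then evaluate and modify it before "opening the mouth".
            for _ in range(rounds):
                critique = generate(f"Critique this draft: {draft}")
                draft = generate(f"Revise the draft.\nDraft: {draft}\nCritique: {critique}")
            return draft

        print(answer("Are nonstandard dialects judged negatively?"))
        # The revised draft, not the blunt first draft, is what gets sent.
        ```

        Real systems that do this run the critique step with a second model call (or the same model in a different role); the control flow, though, is just this loop.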