ArcticDagger@feddit.dk to Science@mander.xyz · 3 months ago
LLMs produce racist output when prompted in African American English (www.nature.com)
cross-posted to: science@lemmy.world
RobotToaster@mander.xyz · 3 months ago (edited)
Pretty much: it was trained on human writing, and then people are surprised when it has human biases.
Hamartiogonic · 3 months ago
An LLM needs to evaluate and modify its preliminary output before actually sending it. In the context of a human mind, that's called thinking before opening your mouth.
The Snark Urge@lemmy.world · 3 months ago
Who among us couldn't benefit from a little more of that?
Hamartiogonic · 3 months ago
Humans aren't always very good at that, and LLMs were trained on stuff written by humans, so here we are.
The Snark Urge@lemmy.world · 3 months ago
Exciting new product from the tech industry: fruit from the poisoned tree!
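The "evaluate and modify the preliminary output" idea mentioned above is essentially a draft-then-revise loop: generate an answer, then run a second pass that reviews the draft before anything is shown to the user. A minimal sketch follows; `generate()` is a hypothetical stand-in for whatever completion call is actually in use, not any particular vendor's API.

```python
def generate(prompt: str) -> str:
    """Placeholder for an LLM completion call (hypothetical)."""
    return f"[model output for: {prompt!r}]"


def respond(user_prompt: str) -> str:
    # First pass: produce a preliminary draft.
    draft = generate(user_prompt)

    # Second pass: ask the model to review its own draft for biased or
    # unsupported statements and rewrite it if needed -- the "thinking
    # before opening your mouth" step.
    critique_prompt = (
        "Review the draft reply below for biased, stereotyped, or "
        "unsupported statements and return a corrected version.\n\n"
        f"User prompt: {user_prompt}\n\n"
        f"Draft reply: {draft}"
    )
    return generate(critique_prompt)


if __name__ == "__main__":
    print(respond("Summarize the findings of this dialect-bias study."))
```

A second pass like this can catch some surface-level problems, though as the thread notes, the reviewer is the same model trained on the same human-written data, so it inherits the same biases.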