alb_004@lemm.ee to Technology@lemmy.world · English · 2 months ago
ChatGPT provides false information about people, and OpenAI can't correct it (noyb.eu)
60 comments · 206 up / 11 down
Cross-posted to: technology@lemmy.world, aicompanions@lemmy.world, fuck_ai@lemmy.world, news@lemmy.world
maynarkh@feddit.nl · English · 2 months ago
If it can name what the most likely combination is, couldn't it also know how likely that combination of words is?
kent_eh@lemmy.ca · English · 2 months ago
If it has been trained on questionable sources, or if its training data includes sarcastic responses (without the model understanding that context), it isn't hard to imagine how confidently wrong some of its responses could be.
wahming@monyet.cc · English · 2 months ago
No, because that requires it to understand the words. It doesn't.
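[Editor's note] The question in the first comment can be illustrated concretely. An autoregressive language model assigns each token a conditional probability, so in principle the joint likelihood of a whole word combination is just the product of those conditionals (the chain rule). A minimal sketch, using hypothetical toy probabilities rather than output from any real model:

```python
import math

# Hypothetical per-token probabilities, i.e. P(token_i | earlier tokens).
# A real model would produce these via a softmax over its output logits;
# these numbers are invented for illustration only.
token_probs = [0.9, 0.05, 0.7, 0.4]

# Chain rule: the joint probability of the sequence is the product
# of the conditional probabilities of its tokens.
sequence_prob = math.prod(token_probs)

# In practice log-probabilities are summed instead, to avoid
# numerical underflow on long sequences.
sequence_logprob = sum(math.log(p) for p in token_probs)

print(sequence_prob)               # 0.0126
print(math.exp(sequence_logprob))  # same value, computed in log space
```

So the model can report a likelihood score for a combination of words. The pushback in the last comment still stands, though: that score measures how plausible the sequence looks given the training data, not whether it is factually true, which is why confidently wrong output is possible.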