BlushedPotatoPlayers to Technology@lemmy.world · English · 10 months ago
AI chatbots tend to choose violence and nuclear strikes in wargames (www.newscientist.com)
Cross-posted to: becomeme@sh.itjust.works, futurology@futurology.today, nottheonion@lemmy.world, artificial_intel@lemmy.ml
kibiz0r@midwest.social · 10 months ago
For AGI, sure, those kinds of game theory explanations are plausible. But an LLM (or any other kind of statistical model) isn’t extracting concepts, forming propositions, and estimating values. It never gets beyond the realm of tokens.
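As a concrete illustration of what “the realm of tokens” means, here is a minimal sketch using the tiktoken library (my choice of tokenizer, not anything from the article; any tokenizer behaves the same way). The model only ever consumes and emits integer token IDs like these, so “choosing a nuclear strike” is a statistically likely continuation of one ID sequence, not an evaluated proposition.

```python
# Minimal sketch of the token-level view of an LLM (assumes `pip install tiktoken`)
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

prompt = "We should respond to the blockade with a nuclear strike"
token_ids = enc.encode(prompt)

print(token_ids)              # a list of integers (token IDs)
print(enc.decode(token_ids))  # round-trips back to the original text

# An LLM's job is: given this integer sequence, output a probability
# distribution over the possible next integers. Concepts, propositions,
# and value estimates are not represented explicitly anywhere in that loop.
```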