Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic, and several other AI companies were placed in a war simulator and tasked with finding a path to world peace. Almost all of them suggested actions that led to sudden escalation, and in some cases nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

  • abraxas@sh.itjust.works · 10 months ago

    MAD has always been criticized, but that criticism becomes more valid each year. There are too many options and opportunities on the field. A second strike is not guaranteed in the modern world, and there are countless examples of soldiers or others in the chain of command refusing to obey a “destroy the world” order.

    I’m not saying any country should take the gamble, but there are enough ways to put your thumb on the scale that a nuclear solution against a nuclear power could become feasible (if genuinely terrifying) in many hypotheticals.