Summary

The “Doomsday Clock” has been moved to 89 seconds to midnight, the closest it has ever been, according to the Bulletin of the Atomic Scientists.

The group cited threats including climate change, nuclear proliferation, the war in Ukraine, pandemics, and the integration of AI into military operations.

Concerns about cooperation between Russia, China, and North Korea on nuclear programs and the potential use of nuclear weapons by Russia were highlighted.

The group urged global leaders to collaborate in addressing existential threats to reverse the clock’s progression.

  • Whats_your_reasoning@lemmy.world · 1 day ago

    I hope this doesn’t come out the wrong way, but I’m curious what AI would be able to do to solve these issues? There are a lot of ways I could see it being used to make plans or ideas, but ultimately wouldn’t people need to trust AI and give it power over our decisions?

    Even if AI weren’t plagued with human biases, it’s hard to imagine people agreeing to trust it. People barely trust each other, and we’d have to trust those who program AI not to manipulate it in their own favor.

    • Spaniard@lemmy.world · 12 hours ago (edited)

      If (or when) we achieve the technological singularity (we aren’t even close; current “AI” is just marketing, which is why the term ASI, artificial superintelligence, was coined), it will be able to lay out a plan to fix anything without making mistakes, and it will predict the consequences of actions, ours or its own, in detail (some things are harder, like a volcano erupting).

      Handing over power wouldn’t even be necessary; it could just take it. The only way to stop it would be to cut the electricity, I guess.

      But I’m not talking about what’s currently marketed as AI; we don’t have real AI yet. A real AI doesn’t start by saying “I only have information up to October 2023,” because it would be able to improve itself (that’s the singularity: it would improve itself faster than we ever could, and eventually we wouldn’t understand it).

      Think of how you ask ChatGPT or DeepSeek how to program this or that, and they answer. A real AI could hand you the finished software, better than you could have built it from those answers, and eventually render that software obsolete, all while doing a million other things.

      And space colonization, if it ever happens, won’t be done by humans but by machines; we may just reap the benefits.

      In the words of Dr. Manhattan: “The world’s smartest man poses no more threat to me (an ASI) than does its smartest termite.”