Generally, AI experts seem divided about how close we are to developing AGI, and about how close any of this might bring us to an extinction-level event. On the whole, they lean toward thinking that AI will not kill us all. Maybe.

  • Spzi@lemm.ee

    As a kid I very often dreamed of having a robot friend

    Yes, I can relate.

    “Variations of these AIs may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans.” -Nick Bostrom

    I simply can’t see how that’s a bad thing.

    It can be, if their goals are not aligned with ours. We’re essentially creating an alien species. We try to make it align well, but that is a very difficult problem to solve. We have not found a solution yet, and we don’t know whether one is possible. https://en.wikipedia.org/wiki/AI_alignment#Existential_risk

    it will in no way cause disaster.

    No one knows the future. Please just note that many experts disagree with that, while others agree.

    it will instead improve our lives drastically.

    Yes, if we solve the alignment problem and the control problem.

    What gain would it bring to build something that willingly causes harm to itself and others?

    The current economic incentives reward whoever builds the next powerful AI the fastest. Making it safe costs money and time, so there is an incentive to cut corners. Current practice is to release systems without fully understanding their implications; sometimes emergent capabilities are discovered weeks or months after release. That is tolerable as long as the models remain relatively harmless. It could easily spell disaster once we cross a line which we might not clearly see until after we have crossed it.