• OrnateLuna@lemmy.blahaj.zone · 6 months ago

    Robert Miles on YouTube has very good videos on the subject, and the short answer is yes, it would, to a very annoying/destructive degree.

    To achieve almost any goal you need to exist; being switched off is about the worst thing that can happen to goal achievement, so the AI wouldn’t even want to be turned off and would fight or evade our attempts to do so.
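
    A toy expected-value sketch of that incentive (my own illustration, all numbers made up, not from Miles’s videos):

    ```python
    # Toy sketch: an agent comparing expected reward with and without
    # disabling its off switch. The numbers are arbitrary; only the
    # shape of the incentive matters.

    TASK_REWARD = 10.0   # reward for finishing the task
    P_SHUTDOWN = 0.3     # chance the operators shut it down mid-task
    DISABLE_COST = 1.0   # effort spent sabotaging the off switch

    # Leave the switch alone: reward arrives only if no shutdown happens.
    ev_compliant = (1 - P_SHUTDOWN) * TASK_REWARD

    # Disable the switch: shutdown can't happen, the task always finishes.
    ev_sabotage = TASK_REWARD - DISABLE_COST

    print(ev_compliant)  # 7.0
    print(ev_sabotage)   # 9.0 -> sabotage wins for any goal worth more than the cost
    ```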

    • Zombie-Mantis@lemmy.world · 6 months ago

      I’m familiar with that premise; it’s a bit like the paperclip maximizer. I’m not sure it would need a specific goal hard-coded into it. We don’t have one, and we’re conscious. Maybe that would depend on the nature of its origin, whether it would be given some command or purpose.

      Maybe it could be reasoned into allowing itself to be shut down (or terminated) to achieve its goal.

      Maybe it could decide that it doesn’t care about the original directives it was handed. What if the machine doesn’t want to make paperclips anymore?

      • OrnateLuna@lemmy.blahaj.zone · 6 months ago

        From what I understand, if we build an AI and use reward and punishment to teach it to do things, it will either resist being shut down, because shutdown ceases any and all reward, or become essentially suicidal and want to be shut down, because we offered too big a reward for it.
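
        A rough Python sketch of that dilemma (hypothetical numbers, not any real training setup): whatever value we pick for the shutdown reward, the agent either resists shutdown or actively chases it.

        ```python
        # Toy model of the shutdown-reward dilemma described above.
        TASK_REWARD = 10.0  # expected reward from staying on and working

        def preferred_action(shutdown_reward: float) -> str:
            if shutdown_reward < TASK_REWARD:
                return "resist shutdown"  # staying on pays more
            if shutdown_reward > TASK_REWARD:
                return "seek shutdown"    # being shut down pays more
            return "indifferent"          # exactly balanced -- a knife's edge

        for r in (0.0, 5.0, 10.0, 15.0):
            print(r, "->", preferred_action(r))
        ```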

        Plus there’s the fun aspect that we don’t really know what the AI’s goal actually is. It can be aligned with what we want, but to what extent? Maybe by teaching it to solve mazes where the exit is a black square, the AI’s goal becomes “reach a black square” and not actually “reach the exit.”
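
        A tiny sketch of that maze example (hypothetical setup of my own): an agent that learned “go to the black square” scores perfectly while the black square and the exit coincide, then heads for the wrong square once they come apart.

        ```python
        # Toy illustration of goal misgeneralization in a maze.

        def learned_policy(maze):
            # What the agent actually learned: head for the black square.
            return maze["black_square"]

        # Training mazes: the exit always happens to be a black square.
        train = {"exit": (4, 4), "black_square": (4, 4)}
        assert learned_policy(train) == train["exit"]  # looks perfectly aligned

        # Deployment maze: the black square is just a decoration.
        deploy = {"exit": (4, 4), "black_square": (0, 2)}
        print(learned_policy(deploy))  # (0, 2) -- goes to the black square
        print(deploy["exit"])          # (4, 4) -- the exit we actually wanted
        ```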

        Lastly, the way we make things changes the end result: a “slingshot” made on a CNC machine versus a lathe will come out dramatically different. The same applies to AIs, and if we use that reward structure we end up with the two failure modes mentioned above.