• bionicjoey@lemmy.ca · 36 upvotes · 6 months ago

    The Turing Test is about whether any person could have any conversation with a machine and have no chance of telling it’s a machine. It is not about one person having one conversation with a machine and failing to tell.

    Current text generation models out themselves all the damn time. They can’t actually understand the underlying concepts behind words; they just predict whatever bit of text would be most convincing to a human based on the text that came before.
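    Roughly what that means mechanically, as a toy sketch (a crude frequency-based next-word predictor with made-up names, nothing like a real transformer):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word tends to follow which in the training
# text, then always emit the most frequent continuation. Real models are vastly
# larger, but the objective is the same: predict the next token, nothing more.
def train(corpus):
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def continue_text(counts, prompt, length=10):
    words = prompt.split()
    for _ in range(length):
        followers = counts.get(words[-1])
        if not followers:
            break          # never saw this word in training -> no idea what's next
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

model = train("the cat sat on the mat . the cat ate the fish .")
print(continue_text(model, "the cat"))   # fluent-looking output, zero understanding
```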

    Playing Go was never the mark of AI; it was the mark of improving game-playing machines. It doesn’t represent “intelligence”, only an ability to predict what should happen next based on a set of training data.

    It’s worth noting that after Lee Sedol lost to AlphaGo, researchers found a fairly trivial Go strategy that could reliably beat the machine. The strategy was so easy to counter that none of the games in the training data included anyone attempting it, so the algorithm never learned how to respond to it. The computer doesn’t know Go theory; it only knows how to predict what to do next based on its training data.

    • iopq@lemmy.world · 4 upvotes, 12 downvotes · 6 months ago

      Detecting the machine correctly once is not enough. You need to guess correctly far more often than chance to show it isn’t luck. It’s possible some people can do this, but I’ve seen a lot of comments on websites accusing HUMAN answers of being written by AIs.
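      As a rough sketch of the statistics (a plain one-sided binomial test, numbers purely illustrative):

```python
from math import comb

def p_value(correct, trials):
    # Chance of getting at least `correct` right out of `trials`
    # by pure 50/50 guessing (one-sided binomial test).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(p_value(1, 1))    # 0.5    -> one correct call is literally a coin flip
print(p_value(9, 10))   # ~0.011 -> 9 out of 10 is hard to write off as luck
```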

      If the current chat bots improve to the point where they reliably can’t be detected, would that be intelligence, then?

      KataGo has since fixed that bug by putting those positions into the training data. They weren’t there before because the original training data consisted only of self-play games. Once games where the AI loses to humans are included, the bug is fixed.
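      Conceptually the fix amounts to something like this (a simplified sketch with made-up names, not KataGo’s actual pipeline):

```python
# Simplified sketch of the fix, with made-up names: the training set used to be
# pure self-play, and the patch is essentially "also train on the games where
# humans exploited the blind spot".
def build_training_set(self_play_games, human_exploit_games):
    positions = []
    for game in self_play_games + human_exploit_games:
        positions.extend(game)   # every position from every game goes in
    return positions
```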

      • petrol_sniff_king@lemmy.blahaj.zone · 8 upvotes · 6 months ago

        Once games where the AI loses to humans are included, the bug is fixed.

        You’re not grasping the fundamental problem here.

        This is like saying a calculator understands math because when you plug in the right functions, you get the right answers.

        • iopq@lemmy.world · 2 upvotes, 4 downvotes · 6 months ago

          The AI grasps the strategic aspects of the game really well, to the point that if you don’t let it “read” deeply into the game tree but only let it “guess” moves (that is, use only the policy network), it still plays at a high level (below professional, but strong amateur).
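          Concretely, the difference between the two modes is roughly this (a hypothetical sketch with stand-in toy functions, not the real engine):

```python
import random

# Stand-in toy pieces so the sketch runs; the real policy/value networks are
# deep neural nets and the real move generator is a full Go rules engine.
def legal_moves(position):
    return ["A1", "B2", "C3"]                     # placeholder move list

def play(position, move):
    return (position, move)                       # placeholder "board after the move"

def policy_net(position):
    moves = legal_moves(position)
    return {m: 1.0 / len(moves) for m in moves}   # move -> probability

def value_net(position):
    return random.uniform(-1, 1)                  # how winnable the position looks

def pick_move_policy_only(position):
    # "Guess" mode: one forward pass of the policy network, no reading at all.
    # This is the mode that still plays at a strong amateur level.
    probs = policy_net(position)
    return max(probs, key=probs.get)

def pick_move_with_reading(position):
    # "Reading" mode, grossly simplified to a one-ply lookahead: try the
    # policy's candidates, evaluate the results, keep the best. Real engines
    # run MCTS with thousands of playouts guided by these same two networks.
    candidates = policy_net(position)
    return max(candidates, key=lambda m: value_net(play(position, m)))
```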

          • petrol_sniff_king@lemmy.blahaj.zone · 7 upvotes, 1 downvote · 6 months ago

            How does it “grasp the strategic aspects of the game really well” if it can’t solve problems it hasn’t seen the answers to?

            • iopq@lemmy.world · 2 upvotes, 4 downvotes · 6 months ago

              It doesn’t get fed answers in the training data, only positions. If it sees a position, it will eventually learn to solve it by itself.
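              Conceptually, something like this (made-up names, purely to show that the target comes from the game result rather than a human-supplied answer):

```python
import random

class ToyEngine:
    # Stand-in for the real engine; names are made up, purely illustrative.
    def self_play_result(self, position):
        # Pretend the engine plays the position out against itself to the end.
        return random.choice([+1, -1])

def make_training_example(position, engine):
    # Nobody hands the engine an "answer" for the position: it plays the
    # position out by itself, and the final result becomes the training target.
    outcome = engine.self_play_result(position)
    return position, outcome

examples = [make_training_example(p, ToyEngine()) for p in ("position_1", "position_2")]
```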