• Death_Equity@lemmy.world · 1 year ago

    Wait, has anybody bothered to ask AI how to fix itself? How much Avocado testing does it do? Can AI pull itself up by its own boot partition, or does it expect the administrator to just give it everything?

      • FaceDeer@kbin.social · 1 year ago

        OP may have intended it as a joke, but self-improvement is a very real subject of AI research, so if that’s the case, he accidentally said something about a serious topic.

        • Buddahriffic@lemmy.world · 1 year ago

          It’s an essential part of the idea of the technological singularity: an AI iterates on itself and the systems it runs on, becoming more efficient, powerful, and effective at a rate that makes all of human progress up to that point look like nothing.

          • MajorHavoc@lemmy.world · 1 year ago

            While I’m inclined to believe the singularity is achievable, it’s important to remember that there’s no evidence today that it will ever be reached.

            Our hope for it, and the good that can come with it, can’t pull it into the realm of things we will see in our lifetimes. It could emerge soon, but it’s at least as likely to stay science fiction for another millennium.

    • vrighter@discuss.tchncs.de · 1 year ago

      If we don’t know, it doesn’t know.

      If we know, but there’s no public text about it, it doesn’t know either.

      It is trained on text that has already been written, and trained to emulate the statistical properties of those words. It cannot and will not tell us anything new.

      • FaceDeer@kbin.social · 1 year ago

        That’s not true. These models aren’t just regurgitating text that they were trained on. They learn the patterns and concepts in that text, and they’re able to use those to infer things that weren’t explicitly present in the training data.

        I read recently about some researchers who were experimenting with ChatGPT’s ability to do basic arithmetic. It’s not great at it, but it has definitely figured out some techniques that let it answer math problems that were not in its training set. It gets them wrong sometimes, but it’s like a human doing math in their head rather than a calculator running rigorous algorithms, so that’s to be expected.

        • vrighter@discuss.tchncs.de · 1 year ago

          They learn statistical correlations between words: given the last 5,000 (or however large the context is) words, and absolutely no other information, what is the most likely word to appear next? It’s a glorified order-5000 Markov chain.
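
          To make the analogy concrete, here’s a toy Markov-chain next-word predictor in Python. It’s a deliberate oversimplification (a real LLM uses a neural network, not a lookup table), but it illustrates the point: everything it can ever emit comes straight out of the statistics of its training text.

          ```python
          from collections import defaultdict, Counter

          def train_markov(text, order=2):
              """Map each `order`-word context to counts of the words that follow it."""
              words = text.split()
              table = defaultdict(Counter)
              for i in range(len(words) - order):
                  table[tuple(words[i:i + order])][words[i + order]] += 1
              return table

          def predict_next(table, context):
              """Return the likeliest next word for a context, or None if never seen."""
              counts = table.get(tuple(context))
              return counts.most_common(1)[0][0] if counts else None

          corpus = "the cat sat on the mat and the dog slept on the mat"
          table = train_markov(corpus)
          print(predict_next(table, ["on", "the"]))      # "mat": seen in training
          print(predict_next(table, ["novel", "idea"]))  # None: nothing new comes out
          ```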

          The reason it can “do” some math is that the training set contains tons of examples using small numbers, the kind usually used as examples. It can do basic arithmetic because it has seen “2 + 2 = 4” and other examples with simple numbers like that. The studies test exactly that basic arithmetic, the same things it has millions of pre-worked examples of, and it still gets those wrong with astonishing frequency. Those studies aren’t talking about asking it “what is the square root of pi?” or stuff like that, but stuff such as “is 7 greater than 4?”, “what is 10 + 3?”, or “is 97 prime?”, stuff it has most definitely seen the answers to. Ask it about some large prime and it’ll say no, and probably be right, because most numbers are composite.

          • FaceDeer@kbin.social · edited · 1 year ago

            > Those studies aren’t talking about asking it “what is the square root of pi?” or stuff like that, but stuff such as “is 7 greater than 4?”, “what is 10 + 3?”, or “is 97 prime?”, stuff it has most definitely seen the answers to.

            No, they very explicitly checked whether the training set contained the literal math problems they asked it to answer. ChatGPT is able to answer math questions that it has never seen before. I believe this is the article (though I had to go searching; it’s been a while).

            When people dismiss LLMs as “just prediction engines”, they’re really missing the point. Of course they’re prediction engines; that’s not in dispute. The question is how they go about making those predictions. When I show you the string “18 + 10 =”, you can predict what comes next, yes? Well, how did you predict it? Did you memorize that particular string, or have you developed heuristics for solving simple addition problems when you see them?
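
            As a toy illustration of that difference (an analogy only; a transformer doesn’t literally run either of these), compare a pure lookup against a learned rule:

            ```python
            import re

            # Hypothetical memorized "training examples".
            memorized = {"2 + 2 =": "4", "1 + 1 =": "2"}

            def predict_by_lookup(prompt):
                """Pure memorization: fails on any string not seen verbatim."""
                return memorized.get(prompt)

            def predict_by_heuristic(prompt):
                """A learned rule for addition: generalizes to sums never seen before."""
                m = re.fullmatch(r"(\d+) \+ (\d+) =", prompt)
                return str(int(m.group(1)) + int(m.group(2))) if m else None

            print(predict_by_lookup("18 + 10 ="))     # None: not in the training set
            print(predict_by_heuristic("18 + 10 ="))  # "28": inferred from the pattern
            ```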

            • MajorHavoc@lemmy.world · edited · 1 year ago

              These things are currently infamously bad at math, though.

              I won’t argue that it’ll never get there. I’m confident it will, though with a lot more Perl hacks than elegant emergence.

              But today, these things have an astonishingly high ‘appearance of intelligence’ to ‘incredible stupidity’ ratio.

              • FaceDeer@kbin.social · 1 year ago

                Humans are also not particularly well known for their math skills. Ask a random stranger to do simple arithmetic in their head, with only a few seconds to think and no outside help, and I wouldn’t expect particularly reliable results.

                • vrighter@discuss.tchncs.de · 1 year ago

                  However, people are not notoriously bad at the types of basic arithmetic these tests cover. Every time I pay for something with cash, I work out how much change I’m owed mentally, and so does the seller. I can count on one hand the number of times I’ve actually been given incorrect change in my entire life. And when I did get the wrong change, it was usually “oh, I thought you gave me €10 instead of €20”, meaning they still did the math correctly.

                  No sane person will ever tell you 4 is bigger than 7. Yet LLMs sometimes get even this type of question wrong. They learn patterns, but not concepts. This is even simpler than basic arithmetic.

          • MajorHavoc@lemmy.world · edited · 1 year ago

            It gives me the giggles that the folks speculating are getting more upvotes than your post, which demonstrates actual understanding of the implementation details.

            If I were the type to sell sizzle hype and snake oil, now would be the time to do it. The venture capitalists may have learned their lesson, but the general public haven’t.

    • discodoubloon@kbin.social · 1 year ago

      It doesn’t “know” anything. It can’t solve that problem. It’s trained on human writing, so it’s limited to what we have already written down.

      I love ChatGPT, but if it’s creative, it’s because you asked it the right questions and found an oblique answer yourself.

    • Buffaloaf@lemmy.world · 1 year ago

      I asked Bard how ChatGPT can fix itself, here’s what it said:

      ChatGPT can fix itself in a number of ways, including:

      Retraining on a larger dataset. ChatGPT is trained on a massive dataset of text and code, but it can always be retrained on a larger dataset. This would help ChatGPT to learn new things and improve its performance.

      Using a different algorithm. ChatGPT is currently trained using the GPT-3 algorithm, but there are other algorithms that could be used. For example, ChatGPT could be trained using the Megatron-Turing NLG algorithm, which is one of the most powerful language models in the world.

      Using a combination of algorithms. ChatGPT could also be trained using a combination of algorithms. This would allow ChatGPT to benefit from the strengths of different algorithms.

      Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.

      Using human feedback. ChatGPT could also be used to generate text, and then this text could be reviewed by humans. The human feedback could then be used to improve ChatGPT’s performance.

      • lagomorphlecture@lemm.ee · 1 year ago

        “Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.”

        So basically create its own Fox News and see how that goes.

        • FaceDeer@kbin.social · 1 year ago

          The full suggestion includes “this would allow ChatGPT to learn from its own mistakes”, which implies that the text it generates would be evaluated and curated before being fed back in for training. That, along with including non-AI-generated text alongside the AI-generated material, should stave off model collapse.

          Model collapse is basically inbreeding, with similar causes and similar solutions. A little inbreeding is not inherently bad; indeed, it’s used frequently when you’re trying to breed an organism for specific desirable characteristics.
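
          A minimal sketch of that curated loop, with `generate`, `passes_review`, and `retrain` as hypothetical stand-ins rather than any real training API:

          ```python
          import random

          def generate(model):
              """Stand-in for sampling text from the model."""
              return random.choice(model["data"])

          def passes_review(sample):
              """Stand-in for the human/automated curation step."""
              return "error" not in sample

          def retrain(model, data):
              """Stand-in: a real step would update model weights on `data`."""
              return {"data": data}

          def self_improvement_round(model, human_text, n_samples=100):
              """One round of the feedback loop: generate, curate, mix, retrain."""
              samples = [generate(model) for _ in range(n_samples)]
              curated = [s for s in samples if passes_review(s)]
              # Mixing curated outputs with human-written text is the anti-inbreeding
              # measure: training on synthetic text alone is what drives model collapse.
              return retrain(model, curated + human_text)

          model = {"data": ["the quick brown fox", "an error occurred"]}
          model = self_improvement_round(model, human_text=["fresh human writing"])
          ```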

      • FaceDeer@kbin.social · 1 year ago

        If having an AI tell researchers that they should base its next iteration off of Megatron isn’t the plot of a Michael Bay Transformers movie already, it should have been.