• Petter1@lemm.ee
    edited · 5 months ago

    I have already learned a lot from the human knowledge the LLM was trained on (and yes, I know about hallucinations, and of course I fact-check everything), but learning coding with an LLM teacher fucking rocks.

    Thanks to Copilot, I “understand” Linux kernel modules and what is needed to backport them, for example.

    • MentalEdge
      5 months ago

      Of course, the training data contains all that information, and the LLM is able to explain it in a thousand different ways until anyone can understand it.

      But flip that around.

      You could never explain a brand new concept to an LLM which isn’t already contained somewhere in its training data. You can’t just give it a book about a new thing, or have a conversation about it, and then have it understand it.

      A single book isn’t enough. It needs terabytes of redundant examples and centuries of CPU time to model the relevant concepts.

      Where a human can read a single physics book and then write part 2 that re-explains and perhaps explores newly extrapolated phenomena, an LLM cannot.

      Write a completely new OS that works in a completely new way, and there is no way you could ever get an LLM to understand it by just talking to it. To train it, you’d need to produce those several terabytes of training data about it, first.

      And once you do, how do you know it isn’t just pseudo-plagiarizing the contents of that training data?

      • Petter1@lemm.ee
        4 months ago

        Well, the issue is that LLMs do not support real-time learning at all. If they could learn in real time, building on their base training data, I suppose they could understand a physics book even better than a normal human reading it once.

        A human without pre-training is not able to understand a physics book without help. He wouldn’t even be able to read.

        If someone finds a way to train an LLM in real time, and have it decide what weight each new piece of training data should be given, I see all of the above becoming possible.
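
        A toy sketch of what I mean, as importance-weighted online gradient updates: a tiny logistic-regression model stands in for the LLM (this is purely illustrative, NumPy only, and every name and number in it is made up), and each incoming example carries a weight that decides how strongly it updates the model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def online_update(w, x, y, importance, lr=0.1):
    # One importance-weighted gradient step on a single new example,
    # using the log-loss gradient for logistic regression.
    pred = sigmoid(x @ w)
    return w - lr * importance * (pred - y) * x

rng = np.random.default_rng(0)
w = np.zeros(3)  # model parameters, updated as data streams in

# Stream of (features, label, importance) triples; the importance value
# plays the role of "deciding what weight each new example gets".
stream = [(rng.normal(size=3), 1.0, 2.0),
          (rng.normal(size=3), 0.0, 0.5)]

for x, y, imp in stream:
    w = online_update(w, x, y, importance=imp)
```

        Whether anything like this scales to an actual LLM is exactly the open question, but it shows the shape of the idea: no retraining from scratch, just weighted incremental updates.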

        And of course, if humanity ever creates something that behaves like AGI, we would not be able to tell whether it is emulated AGI or real AGI; there is no known method to differentiate the two.

        • MentalEdge
          edited · 4 months ago

          You have no fucking idea what you’re talking about. This isn’t even a discussion; you’re presenting your personal made-up fantasies as if they’re real possibilities and ignoring anyone who points that out.

          Shut the fuck up and go learn how LLMs work. I’m too fucking tired of explaining how completely delusional you are.