I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, because the sentences are so convincing.

Any good examples of how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier in particular is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • Hamartiogonic · 7 months ago

    All of this also touches on an interesting topic: what does it really mean to understand something? Just because you know things and may even be able to apply them in flexible ways, does that count as understanding? I’m not a philosopher, so I don’t even know how to approach a question like this.

    Anyway, I think the main difference is the lack of personal experience of the real world. With LLMs, it’s all second-hand knowledge. A human could memorize facts like how water circulates between rivers, lakes and clouds, and all of that information would be linked to personal experiences, which would shape the answer in many ways. An LLM doesn’t have such experiences.

    Another thing would be reflecting on your experiences and knowledge. LLMs do none of that. They just say whatever “pops into their mind”, whereas humans usually think before speaking… well, at least we’re capable of doing that, even if we don’t always take advantage of this superpower. That said, the output of an LLM can be monitored and abruptly cut off as soon as it crosses some line, which is sort of like mimicking the thought process you run inside your head before opening your mouth.
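
    That “cut it off when it crosses a line” step can be sketched in a few lines of code. This is a toy illustration only: the blocklisted word and the token stream are made up, and real moderation systems use trained classifiers rather than a simple word list.

    ```python
    # Minimal sketch: monitor a stream of generated tokens and stop the moment
    # a blocklisted word appears. Everything here is hypothetical/illustrative.
    BLOCKLIST = {"badword"}

    def moderated(token_stream):
        for token in token_stream:
            if token.lower().strip(".,!?") in BLOCKLIST:
                yield "[response withheld]"
                return  # abruptly cut off the rest of the output
            yield token

    fake_stream = iter(["This", "part", "is", "fine.", "badword", "never", "shown"])
    print(" ".join(moderated(fake_stream)))  # -> "This part is fine. [response withheld]"
    ```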

    Example: explain what it feels like to have an MRI taken of your head. If you haven’t actually experienced that yourself, you’ll have to rely on second-hand information, and your explanation will probably be a bit flimsy. Now imagine you’ve also read all the books, blog posts and Reddit comments about it, and you’re able to reconstruct a fancy explanation regardless.

    That lack of experience may hurt the explanation a bit, but an LLM doesn’t have any experience of anything in the real world. It has only second-hand descriptions of all those experiences, and that severely limits all of its explanations and reasoning.

    • trashgirlfriend@lemmy.world · 7 months ago

      I feel like you’re already not getting it and therefore giving too much credit to the LLM.

      With LLMs it’s not even about second-hand knowledge; the concept of knowledge doesn’t apply to LLMs at all. It’s literally just statistics, e.g. what is the most likely next token given the ones that came before.
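
      To make the “just statistics” point concrete, here’s a minimal sketch of “most likely next token” using a toy bigram count table over a tiny made-up corpus. Real models score continuations with a neural network over a huge vocabulary, but the selection step is the same kind of operation:

      ```python
      # Minimal sketch: pick the "most likely next token" from raw co-occurrence
      # counts. The corpus is invented purely for illustration.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count how often each token follows each other token.
      bigram_counts = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          bigram_counts[prev][nxt] += 1

      def most_likely_next(token: str) -> str:
          """Return the continuation of `token` seen most often in the corpus."""
          return bigram_counts[token].most_common(1)[0][0]

      print(most_likely_next("the"))  # "cat" -- the most frequent follower, nothing more
      print(most_likely_next("cat"))  # "sat" -- ties resolve by first occurrence
      ```

      Nothing in that table means anything; “cat” wins purely because it was counted more often. An LLM’s machinery is vastly more sophisticated, but the output is still a score over possible continuations, not something it “knows”.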

      • kaffiene@lemmy.world · 7 months ago

        You could argue that embeddings constitute some kind of stored knowledge. But I do agree with your larger point: LLMs are getting too much credit because of the language we use to describe them.
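
        As a toy illustration of “embeddings as stored knowledge”: related words end up close together in vector space, which is a kind of stored association even if it isn’t understanding. The 4-dimensional vectors below are invented for the example; real models learn hundreds or thousands of dimensions from text statistics.

        ```python
        # Minimal sketch: relatedness "stored" as geometry in an embedding space.
        # The vectors are made up, not taken from a real model.
        import math

        embeddings = {
            "river": [0.9, 0.1, 0.8, 0.0],
            "lake":  [0.8, 0.2, 0.7, 0.1],
            "cloud": [0.2, 0.9, 0.1, 0.6],
        }

        def cosine_similarity(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norms

        # "river" sits closer to "lake" than to "cloud" -- an echo of how the words
        # co-occur in text, not an experience of water.
        print(cosine_similarity(embeddings["river"], embeddings["lake"]))   # ~0.99
        print(cosine_similarity(embeddings["river"], embeddings["cloud"]))  # ~0.26
        ```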