I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples of how to explain this in simple terms?

Edit: some good answers already! I find especially that the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • trashgirlfriend@lemmy.world · 7 months ago

    I feel like you’re already not getting it and therefore giving too much credit to the LLM.

    With LLMs it’s not even about second-hand knowledge; the concept of knowledge does not apply to LLMs at all. It’s literally just about statistics, e.g. which token is most likely to come next after the ones so far.
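
    A toy sketch of that point, using simple bigram counts instead of a real neural network (the corpus, words, and function name here are made up for illustration): the “model” just looks up which word most often follows the current one. There is no meaning anywhere, only frequencies.

    ```python
    from collections import Counter, defaultdict

    # Tiny made-up corpus; a real LLM is trained on vastly more text.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word (bigram statistics).
    next_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_counts[prev][nxt] += 1

    def most_likely_next(word):
        # Pick the statistically most frequent follower; no meaning involved.
        return next_counts[word].most_common(1)[0][0]

    print(most_likely_next("the"))  # -> "cat" (highest count, nothing more)
    print(most_likely_next("cat"))  # -> "sat" (ties broken by first occurrence)
    ```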

    • kaffiene@lemmy.world · 7 months ago

      You could argue that embeddings constitute some kind of stored knowledge. But I do agree with your larger point: LLMs are getting too much credit because of the language we use to describe them.
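
      A rough sketch of what that stored “knowledge” amounts to, with made-up 3-d vectors standing in for real learned embeddings (which have hundreds or thousands of dimensions): relatedness is just geometric closeness between vectors, not understanding.

      ```python
      import math

      # Made-up 3-d vectors for illustration only.
      embeddings = {
          "cat":        [0.9, 0.1, 0.3],
          "kitten":     [0.85, 0.15, 0.35],
          "carburetor": [0.1, 0.9, 0.7],
      }

      def cosine(a, b):
          # Cosine similarity: how closely two vectors point in the same direction.
          dot = sum(x * y for x, y in zip(a, b))
          norm_a = math.sqrt(sum(x * x for x in a))
          norm_b = math.sqrt(sum(y * y for y in b))
          return dot / (norm_a * norm_b)

      print(cosine(embeddings["cat"], embeddings["kitten"]))      # near 1: "related"
      print(cosine(embeddings["cat"], embeddings["carburetor"]))  # much lower: "unrelated"
      ```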