To be clear, I’m not an expert. But I know a bit.

The way LLMs (like ChatGPT, GPT-4, etc.) work is that they repeatedly decide what the best-sounding next word might be, and print it, over and over and over, until it adds up to sentences and paragraphs. The way that next-word decision works under the hood is with a deep neural net, which started out as a theoretical tool designed to imitate the neural circuits that make up our biological nervous system and brain. The actual code for LLMs is rather small: it’s mostly about storing representations of neurons and adjusting the strength of the connections between them as the model learns, loosely like the brain does.
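
To make the “one word at a time” point concrete, here’s roughly what that loop looks like in code. This is just a minimal sketch using the Hugging Face transformers library with GPT-2 as a stand-in model; the model choice and the greedy “pick the single most likely word” decoding are my assumptions for illustration, and real chatbots sample more cleverly than this.

```python
# Minimal sketch of an LLM's generation loop: predict the most likely
# next token, append it, repeat. GPT-2 is a stand-in here (my assumption),
# not how GPT-4 itself is served.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The way LLMs work is", return_tensors="pt").input_ids

for _ in range(20):  # emit 20 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits  # a score for every possible next token
    next_id = logits[0, -1].argmax()      # greedy: take the single best-scoring token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything the model “says” falls out of that one loop; all the interesting machinery lives inside the neural net computing the scores.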

I was listening to the first part of this “This American Life” episode this morning, which covers it really well: https://podcasts.apple.com/us/podcast/this-american-life/id201671138?i=1000618286089 In it, Microsoft AI experts express excitement and confusion about how GPT-4 seems to actually reason about things, rather than just bullshitting the next word to make it look like it reasons, which is all it’s designed to do.

And so I was thinking: the reason it works might be the other way around. It’s not that LLMs are smart enough to reason instead of bullshit; it’s that human reasoning actually works by constantly bullshitting too, one word at a time. Imitate the human brain closely enough, and I guess we shouldn’t be surprised that we land on a familiar-looking kind of intelligence - or lack thereof. Right?

  • Margot Robbie@lemmy.world · 1 year ago
    Please. If you’ve actually used it, you’d know that ChatGPT’s writing won’t even pass high school English. It does not understand subtext or humor AT ALL.

    It’s a tool, a very powerful tool, but a tool nonetheless.

    • ritswd@lemmy.world (OP) · 1 year ago
      GPT-4, not ChatGPT. The episode I linked above mentions that those researchers were unimpressed with LLMs’ reasoning because they had only studied ChatGPT; then GPT-4 became available for them to study, and it did freakishly better on reasoning tests, and they don’t know why. Don’t take it from me, take it from the experts!

      (I’m not an expert, but I work in a nearby domain. Yes, I have used quite a few LLMs. I agree that ChatGPT is underwhelming.)