Since it’s come up as a controversy here a couple of times, here’s a great example of how you can demonstrate that an LLM can produce something it can’t have seen before.

It doesn’t prove anything beyond doubt, but I think these kinds of experiments show, to something like a civil-law standard of proof, that LLMs aren’t merely parrots.

  • survivalmachine@beehaw.org
    9 months ago

    I think we can all agree that LLMs are just simple machines responding to stimuli and producing what we want to hear, but apparently we’re not ready for the conversation about whether complex machines made out of proteins are any different?

    • CanadaPlus@futurology.todayOP
      9 months ago

      The human exceptionalism runs strong. At best, you can argue that this specific machine isn’t thinking. Once you say no machine could do it, you’re forgetting that you too are stardust.

      I get accused of falling for the ELIZA effect, but I suspect some of the critics are driven by something like hubris.

  • Ben Matthews
    9 months ago

    They may indeed develop linguistic skills at deeper levels, but LLMs are still only playing with words. Imagine a kid who grew up confined in a library with unlimited books but no experience of the real world outside: no experiments with moving about, bouncing balls, eating, smelling, seeing, hearing, or interacting with others, only reading. He might write eloquently but have no ‘common sense’ of reality. Training a real AI with physical senses and capabilities would be like bringing up a kid: messy, not easy to automate, and it takes a long time.

    • Uranium3006@kbin.social
      9 months ago

      We’re still early in development; we just got surprised by how good statistical language modeling alone turned out to be.
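      To make the phrase concrete: at its simplest, ‘statistical language modeling’ just means counting which token tends to follow which, then predicting the most likely continuation. The sketch below is a toy bigram model in Python; the corpus and function names are my own illustration, nothing like what an LLM actually does at scale, but the underlying idea (predict the next token from statistics of the training text) is the same.

```python
from collections import Counter, defaultdict

# Toy corpus. Real models train on trillions of tokens and use neural
# networks rather than raw counts, but the objective is the same:
# predict the next token given what came before.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word after `word`, or None if unseen."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat": it follows "the" twice, vs. once for "mat"/"rat"
```

      Scaling this idea up, with learned representations instead of literal counts, is roughly what got everyone surprised.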

    • CanadaPlus@futurology.todayOP
      9 months ago

      I mean, I’ve never seen a toucan, or to my knowledge even been near one, but I feel like I understand them pretty well, at least at a basic level. Then again, I’ve never known any bird except through smell, sound, touch, sight and so on.

      They do still suck at physical tasks, and probably will until we find a totally different approach than brute gradient descent, but I’m not so sure that makes everything else they (appear to) know useless. I’m not convinced kinetic knowledge is the only kind.