• Cort@lemmy.world
      8 months ago

      Is it explaining its reasoning or fabricating a plausible justification for the outputs? We’ll never know

    • webghost0101
      8 months ago

      Depends a bit on perspective and nuance. GPT-4 pretty much always returns text relevant to the prompt: the neural net sees A and knows B comes next, and that's a form of understanding. Not understanding would mean it couldn't make sense of A at all and would output something irrelevant.

      For reasoning, which I believe is actual "step by step" logic, it needs a good handholding prompt, but then it can consistently produce grade-school-level solutions to logical problems.
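      The "handholding prompt" described above is essentially chain-of-thought prompting: wrapping the question in an instruction to reason step by step before answering. A minimal sketch (the wording of the template is illustrative, not any official API):

      ```python
      def build_cot_prompt(question: str) -> str:
          """Wrap a question in a chain-of-thought style prompt,
          nudging the model to lay out its reasoning step by step."""
          return (
              f"Question: {question}\n"
              "Let's think step by step, showing each step, "
              "then state the final answer on its own line."
          )

      # The resulting string would be sent to the model as the user message.
      prompt = build_cot_prompt(
          "If Tom has 3 apples and buys 2 more, how many does he have?"
      )
      print(prompt)
      ```

      Without the step-by-step instruction the model may jump straight to an answer; with it, the intermediate steps it emits tend to make simple logic problems much more reliable.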

      Neither is what humans would call true understanding or true reasoning, but it's way too early to judge AI by human standards.