ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate | It was bad at recognizing relationships and needs selective training, researchers say.

  • Cheers@sh.itjust.works · 6 months ago

    Because Google’s Med-PaLM 2 is a medically trained chatbot that performs better than most med students and even some medical professionals. Further training and refinement using newer techniques like mixture-of-experts and chain-of-thought prompting are likely to improve results.
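
    For what chain-of-thought prompting means in practice, here's a minimal sketch. Everything in it is illustrative: `ask_model` is a hypothetical stand-in for whatever LLM API you'd actually call, and the case vignette is made up. The point is just the difference between asking for a one-shot answer and asking the model to work through the findings first.

    ```python
    # Minimal sketch of chain-of-thought prompting for a diagnostic question.
    # `ask_model` is a hypothetical placeholder, not a real library call.

    def ask_model(prompt: str) -> str:
        """Placeholder: send `prompt` to an LLM and return its text reply."""
        raise NotImplementedError("wire this up to your LLM API of choice")

    # Illustrative case vignette (invented for this example).
    case_vignette = (
        "6-year-old with 3 days of fever, a limp, and refusal to bear weight "
        "on the right leg."
    )

    # Plain prompt: demands an answer directly.
    plain_prompt = f"Case: {case_vignette}\nWhat is the most likely diagnosis?"

    # Chain-of-thought prompt: asks the model to lay out the findings and the
    # relationships between them, then reason through a differential, before
    # committing to a single diagnosis.
    cot_prompt = (
        f"Case: {case_vignette}\n"
        "First, list the key findings and how they relate to each other.\n"
        "Then, work through the differential diagnosis step by step, "
        "ruling candidates in or out based on those findings.\n"
        "Finally, state the single most likely diagnosis."
    )

    # answer = ask_model(cot_prompt)
    ```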

    • Darorad@lemmy.world · 6 months ago

      Exactly, Med-PaLM 2 was specifically trained to be a medical chatbot, not a general-purpose one like ChatGPT.

      • Hotzilla · 6 months ago

        Train it on the internet and you get results like what’s on the internet. Is the medical content on the internet good? No, it’s shit, so it will give shit results.

        These are great base models, and understanding a larger context is always better for an LLM, but specialization is needed for this kind of domain.