Even as AI capabilities advance in the complex medical scenarios doctors face daily, the technology remains controversial in medical communities.

  • Ranvier · 1 year ago

    A few things. First, that’s an abysmal rate when it comes to people’s health; a doctor with that success rate would be sued into next century. The rate dropped further when it came to differential diagnosis, implying ChatGPT was leaving out important rarer possibilities. Doctors often work by starting with the most common explanation and narrowing down from there over repeated rounds of testing if it turns out to be something uncommon, but one of their primary jobs is also thinking about rarer, dangerous conditions that can mimic more common ones and must be ruled out immediately.

    Most importantly, the information fed into this study was optimized with accurate, descriptive medical terminology. That is a language patients, in general, do not speak. People also describe things very differently: a patient might say something is weak when a doctor would say no, that’s numb, not weak, or vice versa. And “dizzy” could mean just about anything. Someone typing their own story directly into ChatGPT is going to get much worse results than this without someone to interpret their word choices and ask the right questions, ones people may not even realize are important.

    Anyway, the possibilities of AI use in healthcare are interesting, but it’s disappointing that it does worse as conditions get less common and that it’s bad at differential diagnosis, the areas where it would be most helpful as a diagnostic aid. Some other areas to think about, though: maybe a front end for finding clinical trials in the US government database, which can be hard to browse, or streamlining the endless insurance paperwork. I’d be surprised if insurance companies don’t use something similar already.

    • pezhore@lemmy.ml · 1 year ago

      Don’t forget the biases inherent in AI training data! Women especially have a history of having their symptoms dismissed out of hand. If the LLM training data includes those biases, then combined with the already-poor diagnosis rate, women could be really screwed.

      • inspxtr@lemmy.world · 1 year ago

        Similarly for people of different races and countries … it’s not only that their conditions might vary and require more data; it’s also that some communities don’t visit or trust hospitals, so their data never gets collected into the training set in the first place. Or they can’t afford to visit.

        Sometimes, people from more vulnerable communities (e.g., LGBT) might prefer not to have such data collected at all, making the data even sparser.

    • Potatos_are_not_friends@lemmy.world · 1 year ago

      > The rate dropped further when it came to differential diagnosis, implying ChatGPT was leaving out important rarer possibilities.

      Every decade, new diagnoses and discoveries are made, many of them manmade or climate-change related. We never had microplastic poisoning before, or random chemicals added to our foods because they give a company that +1% profit.

      In other words, we are finding new ways to destroy our bodies!

      And since AI is always working from historical data, it’ll be a long time before we rely on an AI doctor alone.

    • errer@lemmy.world · 1 year ago

      Anecdotally, my doctor is probably only right 75% of the time too. But I’ll go back to him later with more information about the ailment (or more tests) and he’ll eventually get it right. Medical diagnoses in general are not terribly accurate.