• Gaywallet (they/it)@beehaw.org (OP)
    1 year ago

    A minor quibble about the original title:

    The title of the article on Ars Technica comes from the following quote, a few paragraphs in:

    Though few patients appeal coverage denials generally, when UnitedHealth members appeal denials based on nH Predict estimates—through internal appeals processes or through the federal Administrative Law Judge proceedings—over 90 percent of the denials are reversed, the lawsuit claims. This makes it obvious that the algorithm is wrongly denying coverage, it argues.

    While they are correct that the error rate refers to misclassified cases (claims denied that should not have been), the 90 percent figure covers only the denials that are actually appealed. As the quote above notes, few patients appeal their coverage denials, so the overall error rate could be much lower, since the denials that are never appealed presumably would not be overturned at the same rate.
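
    A rough back-of-the-envelope sketch of why the appeal data alone can't pin down the overall error rate (every number below except the 90 percent overturn rate is made up for illustration):

    ```python
    # Illustration of the quibble: the 90% overturn rate only describes denials
    # that were appealed. The overall error rate across ALL denials depends on
    # how often unappealed denials were also wrong, which the lawsuit's figure
    # can't tell us. Every number except the 90% is hypothetical.

    total_denials = 100_000          # hypothetical count of nH Predict-based denials
    appeal_rate = 0.01               # hypothetical: 1% of denials get appealed
    overturn_rate_appealed = 0.90    # from the lawsuit: over 90% of appeals succeed

    appealed = total_denials * appeal_rate
    known_wrongful = appealed * overturn_rate_appealed   # denials proven wrong on appeal
    unappealed = total_denials - appealed

    # Bound the overall wrongful-denial rate: assume unappealed denials were
    # wrong at a rate of 0% (best case) or at the same 90% rate (worst case).
    lower_bound = known_wrongful / total_denials
    upper_bound = (known_wrongful + unappealed * overturn_rate_appealed) / total_denials

    print(f"overall wrongful-denial rate: between {lower_bound:.1%} and {upper_bound:.1%}")
    # With these made-up inputs: between 0.9% and 90.0%. The appeal statistics
    # alone can't distinguish the two extremes, which is the point above.
    ```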

    • walter_wiggles@lemmy.nz
      1 year ago

      Your comment is true, but I can't help but think of the Orphan Crushing Machine.

      My analogy: 90% of orphans who ask to not be crushed are released from the machine. Presumably the ones who did not ask would not have been released.

      (The fact that health coverage can be denied at all is the real problem, not that they're heavy-handed about doing it.)

  • 0x815@feddit.de
    1 year ago

    the dubious estimates nH Predict spits out seem to be a feature, not a bug

    This is the major problem with algorithms: one of the issues is that they produce a lot of false positives even with the best of intentions.

    But another major problem is that you can influence the outcome by altering the parameters, as the article also says. We have been observing similar issues in health and social policy in many countries over the last few years, and the results have always been devastating. And research suggests that biases may increase dramatically in the future if we continue to use these algorithms the way we do now.
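
    A minimal sketch of how much a single tunable parameter can shift outcomes (the data and the coverage threshold below are purely hypothetical, not anything from nH Predict):

    ```python
    # Hypothetical illustration: a model predicts "days of post-acute care
    # needed" and coverage is denied once the prediction exceeds a covered
    # target length of stay. Changing that one parameter shifts the denial
    # rate wholesale, without touching the predictions themselves.
    # All numbers are made up.

    predicted_days_needed = [12, 18, 25, 9, 30, 22, 14, 40, 16, 28]  # fake patients

    def denial_rate(target_days: int) -> float:
        """Fraction of patients whose predicted need exceeds the covered target."""
        denied = sum(1 for days in predicted_days_needed if days > target_days)
        return denied / len(predicted_days_needed)

    for target in (30, 20, 14):
        print(f"covered target = {target} days -> {denial_rate(target):.0%} denied")
    # covered target = 30 days -> 10% denied
    # covered target = 20 days -> 50% denied
    # covered target = 14 days -> 70% denied
    ```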

  • AutoTL;DR@lemmings.world (bot)
    1 year ago

    🤖 I’m a bot that provides automatic summaries for articles:


    The investigation’s findings stem from internal documents and communications the outlet obtained, as well as interviews with former employees of NaviHealth, the UnitedHealth subsidiary that developed the AI algorithm called nH Predict.

    The algorithm estimates how much post-acute care a patient on a Medicare Advantage Plan will need after an acute injury, illness, or event, like a fall or a stroke.

    It’s unclear how nH Predict works exactly, but it reportedly estimates post-acute care by pulling information from a database containing medical cases from 6 million patients.

    NaviHealth case managers plug in certain information about a given patient—including age, living situation, and physical functions—and the AI algorithm spits out estimates based on similar patients in the database.

    But Lynch noted to Stat that the algorithm doesn’t account for many relevant factors in a patient’s health and recovery time, including comorbidities and things that occur during stays, like if they develop pneumonia while in the hospital or catch COVID-19 in a nursing home.
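
    For illustration only, a toy sketch of the kind of "estimate from similar patients" lookup described above; nH Predict's actual internals are not public, so the features, similarity measure, and data here are all assumptions:

    ```python
    # Toy "similar patients" estimator: average the outcomes of the k most
    # similar historical cases. Purely illustrative -- nH Predict's real
    # features, similarity measure, and data are not public. Note that the toy
    # features below include none of the factors Lynch mentions (comorbidities,
    # complications during the stay), which is exactly the criticism.

    from math import dist  # Euclidean distance, Python 3.8+

    # (age, mobility_score, lives_alone) -> observed days of post-acute care
    historical_cases = [
        ((82, 3, 1), 28),
        ((79, 5, 0), 14),
        ((85, 2, 1), 35),
        ((77, 6, 0), 10),
        ((81, 4, 1), 21),
    ]

    def estimate_days(patient, k=3):
        """Average the care lengths of the k nearest historical cases."""
        nearest = sorted(historical_cases, key=lambda case: dist(case[0], patient))[:k]
        return sum(days for _, days in nearest) / k

    print(estimate_days((80, 4, 1)))  # 21.0 days for this made-up patient
    ```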

    Since UnitedHealth acquired NaviHealth in 2020, former employees told Stat that the company’s focus shifted from patient advocacy to performance metrics and keeping post-acute care as short and lean as possible.


    Saved 71% of original text.

  • Blapoo@lemmy.ml
    1 year ago

    Saw this coming a mile away

    1 point for the AI Dystopian Future