Scientists used six to ten seconds of people’s voices, along with basic health data including age, sex, height, and weight, to create an AI model.
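
The post doesn’t describe the pipeline itself, but the general shape of such a system is easy to picture: summarize a short voice clip as acoustic features, append the basic health data, and feed both to a standard classifier. The sketch below is a hypothetical illustration only; the MFCC features, the logistic-regression model, and all function names are assumptions, not the study’s actual method.

```python
# Hypothetical sketch: voice clip + basic health data -> diabetes risk classifier.
# The study's real features and model are not described in the linked article.
import numpy as np
import librosa                                  # audio loading / feature extraction
from sklearn.linear_model import LogisticRegression

def extract_features(wav_path, age, sex, height_cm, weight_kg):
    # Load up to ~10 s of speech and summarize it as mean MFCCs
    # (a common, simple choice of voice features; assumed here).
    y, sr = librosa.load(wav_path, sr=16000, duration=10.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    health = np.array([age, sex, height_cm, weight_kg], dtype=float)
    return np.concatenate([mfcc, health])

# Training on labelled recordings (X = feature rows, y = diabetes yes/no):
#   X = np.stack([extract_features(path, *meta) for path, meta in dataset])
#   model = LogisticRegression(max_iter=1000).fit(X, y)
# Screening a new caller:
#   model.predict_proba(extract_features("caller.wav", 52, 1, 170, 85)[None, :])
```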

  • plistig@feddit.de
    9 months ago

    Even if this wasn’t bullshit, which it is… why would we need it? It’s not exactly difficult to diagnose diabetes.

    • Chozo@kbin.social
      9 months ago

      Diagnostic tools like this could help provide a diagnosis to patients in hard-to-reach or remote locations, or to those who are otherwise unable to visit a medical professional in person. Depending on the fidelity needed for the tool to make such a diagnosis, it could potentially be done over a simple phone call.

      Assuming this actually works as claimed, it could be huge for people in remote regions, who often have access to basic technologies like phones but may not have viable transportation, or who have other conditions preventing them from accessing the help they need.

    • WHYAREWEALLCAPS@kbin.social
      9 months ago

      Have you seen how much it costs to screen for diabetes? I think you can get an A1c screen for around $20, but if you’re strapped for cash, that can be a lot of money. A blood glucose screen can easily run over $100, and it requires fasting and going to a lab unless your doctor’s office is equipped to run the test. If this works, it reduces all that to speaking into a microphone, waiting a moment, and getting the results.

      Just because we can do something easily now doesn’t mean there isn’t room for improvement to make it even easier and possibly cheaper, especially when you consider how hard it can be to get some patients to follow the rules (i.e., actually fast) and/or follow through.