Scientists used six to ten seconds of people's voices, along with basic health data including age, sex, height, and weight, to build an AI model.

  • swope@kbin.social · 9 months ago

    I didn’t read it, but my first thought was that they trained it to associate speech patterns with wealth and/or education, which correlate with diabetes for all the usual reasons in the US health non-care system.

    Edit: I’m probably wrong:

    The scientists analysed more than 18,000 recordings and 14 acoustic features for differences between those who had diabetes and those who did not.

    They looked at a number of vocal features, like changes in pitch and intensity that cannot be perceived by the human ear.
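    To get a feel for what “acoustic features” like pitch and intensity even mean, here is a toy sketch in NumPy. It is purely illustrative and is not the study’s pipeline: the frame size, autocorrelation pitch estimate, and RMS intensity are common textbook choices I’m assuming, not anything the article describes.

```python
import numpy as np

def acoustic_features(signal, sr, frame=2048):
    """Toy versions of two features the article mentions.

    Intensity is taken as RMS amplitude; pitch is estimated from the
    first autocorrelation peak of a short frame. Illustrative only.
    """
    # Intensity: root-mean-square amplitude of the whole clip.
    rms = float(np.sqrt(np.mean(signal ** 2)))

    # Pitch: autocorrelate a short frame and find the first peak after lag 0.
    x = signal[:frame]
    corr = np.correlate(x, x, mode="full")[frame - 1:]
    d = np.diff(corr)
    rising = np.where(d > 0)[0]      # skip the initial decline from lag 0
    if len(rising) == 0:
        return rms, 0.0
    peak = rising[0] + np.argmax(corr[rising[0]:])
    return rms, float(sr / peak)

# Synthetic 200 Hz tone, 6 seconds at 16 kHz (the article's shortest clip length).
sr = 16000
t = np.arange(6 * sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 200 * t)
rms, pitch = acoustic_features(tone, sr)
```

    A real system would compute features like these per frame and track how they vary over the utterance; the study reportedly used 14 such features.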

    • dmention7@lemm.ee · 9 months ago

      You’re probably not too far off, considering that “basic health data” is already a pretty decent screen for type 2 diabetes. It sure feels like a regurgitation of the whole predicting-socioeconomic-status thing, which correlates strongly with all kinds of health issues. So until they can show some actual physiology being detected, this just feels like an ad for the billionth “AI solves everything” startup that will be spun off in a few months.
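      The worry above can be made concrete with a small simulation: if a “voice” feature is really just a noisy proxy for a demographic variable, it will still score a respectable AUC on its own. Everything here is synthetic and assumed for illustration (the effect sizes, the age-driven outcome, the rank-based AUC); the point is only that a voice model must beat the demographics-only baseline before any physiological signal is shown.

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: probability a random positive outranks a random negative."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(50, 12, n)

# Outcome driven ONLY by age (a stand-in for "basic health data").
p = 1 / (1 + np.exp(-(age - 55) / 5))
diabetes = (rng.random(n) < p).astype(int)

# A "voice" feature that is just age plus noise: it carries no physiology,
# only a demographic echo.
voice = age + rng.normal(0, 8, n)

auc_demo = auc(age, diabetes)    # demographics-only baseline
auc_voice = auc(voice, diabetes) # looks predictive, but adds nothing
```

      The voice feature scores well above chance despite containing zero physiological information, which is exactly why a demographics-only baseline comparison (or evaluation within demographic strata) is the thing to look for in the paper.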