Summary

A Danish study found Instagram’s moderation of self-harm content to be “extremely inadequate,” with Meta failing to remove any of 85 harmful posts shared in a private network created for the experiment.

Despite Meta’s claim that it proactively removes 99% of such content, the platform’s algorithms were shown to promote self-harm networks by connecting users.

Critics, including psychologists and researchers, accuse Meta of prioritizing engagement over safety, with vulnerable teens at risk of severe harm.

The findings suggest potential non-compliance with the EU’s Digital Services Act.

  • ℍ𝕂-𝟞𝟝 · 2 days ago

    Where did you read that?

    Where AI comes into the picture is that Meta claims they do this via “AI,” whose inner workings supposedly can’t be known, so they can’t be blamed for it being ineffective. Meanwhile, the research group put together a piece of software that did a better job in 3 days, so Meta should either git gud or stop bullshitting about trying their best.

    • jaybone@lemmy.world · 2 days ago

      I thought I read this in the article, but I can’t seem to find it now.

      Anyway, yes, it appears they’re trying to blame the fact that they use shitty AI that doesn’t do a very good job. That’s a pretty shitty excuse.

      • IamSparticles@lemmy.zip · 2 days ago

      Until some regulatory agency fines them to a degree that actually hurts their bottom line, they have no incentive to do better. You can’t appeal to the morality of a corporation. As a business, they only respond to numbers.