Summary

A Danish study found Instagram’s moderation of self-harm content to be “extremely inadequate,” with Meta failing to remove any of 85 harmful posts shared in a private network created for the experiment.

Despite Meta’s claim that it proactively removes 99% of such content, the platform’s algorithms were shown to actively promote self-harm networks by connecting users to one another.

Critics, including psychologists and researchers, accuse Meta of prioritizing engagement over safety, with vulnerable teens at risk of severe harm.

The findings suggest potential non-compliance with the EU’s Digital Services Act.

  • Steve@communick.news · 2 days ago

    Not really. The law gives them quasi-common-carrier protections, so they can’t be held liable by the courts.

    That made sense at the time, when the feed was just a simple reverse-chronological list of whatever you decided to subscribe to. But now they actually decide what you do and don’t see. The laws need to catch up.