I’m talking about this sort of thing. Like clearly I wouldn’t want someone to see that on my phone in the office or when I’m sat on a bus.

However, there seem to be a lot of these that aren’t filtered out by NSFW settings, when a similar picture of a woman would be, so it seems this is a deliberate feature I might not be understanding.

Discuss.

  • peanuts4life@lemmy.blahaj.zone · 4 months ago

    I feel like the Internet needs more tags:

    • Explicit (rude language, nudity, etc)
    • Porn (nsfw legacy tag)
    • Violence
    • Not safe for life

    Something like that.

    • MentalEdge · 4 months ago

      These aren’t even enough.

      The tag for this particular problem would be something like “mildly suggestive” because it’s literally just skin that some people don’t want to see.

      • peanuts4life@lemmy.blahaj.zone · 4 months ago

        Yeah, I agree. I do sort of understand OP’s consternation. I don’t browse Lemmy on my work PC, but sometimes on lunch or in public I pull it up on my phone on All communities, and I’m suddenly conscious that everyone beside me can see the “SFW” furry and anime art that I scroll past.

        However, that’s kinda my fault. I don’t want to ban those communities because I like that stuff. It’s just a little odd that we call it SFW when, to be honest, I have a hard time picturing most workplaces where I live being happy to see that on my desktop.

      • peanuts4life@lemmy.blahaj.zone · 4 months ago

        Yeah, that would be great. Many instance admins already run CSAM classifier models on all incoming images. It’d be great if they could add additional models that automatically put meta tags on images, like “suggestive” and “gore”, with the option for the poster to modify the tags in case of a false negative or positive, like a lasagna getting tagged as gore.
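        The flow described above, where a classifier proposes tags and the poster can correct them, could be sketched like this. Everything here is a hypothetical illustration, not part of any real Lemmy API:

        ```python
        # Hypothetical sketch: an automatic classifier proposes tags, and the
        # poster can add tags (false negatives) or remove tags (false
        # positives) before the post goes live.

        def resolve_tags(classifier_tags, poster_added=(), poster_removed=()):
            """Merge classifier-proposed tags with the poster's corrections."""
            tags = set(classifier_tags)
            tags |= set(poster_added)    # poster fixes false negatives
            tags -= set(poster_removed)  # poster fixes false positives
            return sorted(tags)

        # A lasagna photo wrongly flagged as "gore": the poster removes the tag.
        print(resolve_tags({"gore"}, poster_removed={"gore"}))   # []

        # A suggestive image the classifier missed: the poster adds the tag.
        print(resolve_tags(set(), poster_added={"suggestive"}))  # ['suggestive']
        ```

        A real deployment would presumably also log poster overrides, since those corrections are exactly the labels you’d want for retraining the classifier.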

    • ShittyBeatlesFCPres@lemmy.world · 4 months ago

      I wonder if Lemmy could easily do content warnings like on Mastodon. I don’t know if it’s part of the ActivityPub spec but it’s definitely a thing that’s been implemented elsewhere.

      • Aedis@lemmy.world · 4 months ago

        The answer to “is it part of the ActivityPub spec?” is more often than not a strong no.
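        For what it’s worth, ActivityPub itself has no dedicated content-warning field; Mastodon repurposes the Activity Streams `summary` property as the warning text and pairs it with the `sensitive` flag. A CW’d post serializes roughly like this (simplified, the real objects carry many more fields):

        ```python
        import json

        # Simplified shape of a Mastodon post with a content warning, as it
        # appears when federated over ActivityPub. "summary" carries the CW
        # text; "sensitive" tells clients to hide the content behind it.
        note = {
            "type": "Note",
            "summary": "mildly suggestive artwork",   # shown as the content warning
            "sensitive": True,                        # clients collapse the body/media
            "content": "<p>the actual post body</p>",
        }
        print(json.dumps(note, indent=2))
        ```

        So Lemmy could interoperate with Mastodon-style content warnings by reading and emitting those same fields, even though the spec never names them “content warnings”.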