The policy changes come after an NBC News investigation last month into child safety on the platform.

    • okawari@kbin.social · 25 points · 1 year ago

      Sadly, I’m sure any social platform where one can make their own private community (actually private or perceived to be private) will have more of these than most of us think. It’s just that we don’t see them.

      I’m also not surprised that services like Discord are seemingly relaxed about moderating them, as it’s a problem that is invisible to most users. Moderating is expensive, and unless it hurts public opinion, it’s seemingly not worth it for them.

      • Itty53@kbin.social · 3 points · 1 year ago

        This is the unfortunate and altogether wrong (morally, it’s not incorrect) truth of the corporate structure. If you can’t point to an ROI they don’t want to spend money on it. The tech world has gone to great collective lengths to push the value of prevention and best practices even if they take a bit of time and money, and I really wish they’d take up that same passion for this kind of shit.

        I also really wish the public would treat these companies as the conspirators they are too, but I know they won’t. They never have.

        The gross truth is there’s a not-inconsequential number of pedophiles in the tech world compared to other industries. It’s the kind of world you’re not likely to stumble into online unless you’re well versed in the technology we all collectively call “the internet”. Speaking personally, I’ve encountered it myself multiple times: technologically literate people who think they can evade detection or, even worse, relate to others in that industry. Might be an unpopular opinion, but I’m in that industry too; I’m not talking about just your industry, reader.

    • Thorny_Thicket · 2 points · 1 year ago

      That’s more of a philosophical question, and I’m curious to hear why you think that way.

      It’s disturbing, sure, but so is scat porn, and as long as no one is being harmed or forced to do something against their will, I don’t really see the problem.

      If watching AI generated stuff is enough to take the edge off so that one can resist the urge to harm actual people then by all means.

      • TwilightVulpine@kbin.social · 2 points · 1 year ago

        You are forgetting the little detail that AI’s output is based on what has been put in it. If a model can output something like that, it’s likely because real CSAM has been fed into it. It’s not sprouting from the aether.

        • Thorny_Thicket · 1 point · 1 year ago

          Fair point.

          I doubt such content is in the training data, but if it is, then that indeed makes it a more difficult issue.

    • CybranM@kbin.social · 5 points · 1 year ago

      More likely it’s just not spotted. Think of it like cockroaches: you don’t “allow” cockroaches to live in your house, but they very well might until you notice and exterminate them.