The policy changes come after an NBC News investigation last month into child safety on the platform.

  • TwilightVulpine@kbin.social · 1 year ago

    You are forgetting the little detail that an AI's output is based on what has been put into it. If a model can output something like that, it's likely because real CSAM has been fed into it. It's not sprouting from the aether.

    • Thorny_Thicket · 1 year ago

      Fair point.

      I doubt such content is in the training data, but if it is, that does indeed make this a more difficult issue.