The policy changes come after an NBC News investigation last month into child safety on the platform.

  • TwilightVulpine
    1 year ago

    You are forgetting the little detail that an AI's output is based on what has been put into it. If a model can output something like that, it's likely because real CSAM has been fed into it. It's not sprouting from the aether.

    • @Thorny_Thicket
      1 year ago

      Fair point.

      I doubt such content is in the training data, but if it is, that does make it a more difficult issue.