• eveninghere@beehaw.org · 5 months ago

    They say the images are merely matched against pre-determined images found on the web. You're talking about a different scenario, where AI detects inappropriate content in an image.
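
    For what it's worth, matching against a pre-determined set usually means hash matching. Here's a minimal sketch assuming a plain SHA-256 lookup (real systems such as Microsoft's PhotoDNA use perceptual hashes instead, so resized or re-encoded copies still match; the hash value below is a placeholder, not real data):

    ```python
    import hashlib

    # Hypothetical database of pre-determined image hashes distributed by a
    # clearinghouse (placeholder value for illustration only).
    known_hashes = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def is_known_image(image_bytes: bytes) -> bool:
        """Return True if the image's hash appears in the known-image database."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        return digest in known_hashes
    ```

    The point is that this only ever recognizes exact (or near-exact, with perceptual hashing) copies of images already in the database; it says nothing about content it has never seen.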

    • Grippler@feddit.dk · edited · 5 months ago

      It will detect known images and potential new images… how do you think it will detect the potential new and unknown images?

        • Grippler@feddit.dk · edited · 5 months ago

          Literally the article linked in the OP…

          Article 10a, which contains the upload moderation plan, states that these technologies would be expected “to detect, prior to transmission, the dissemination of known child sexual abuse material or of new child sexual abuse material.”

          • eveninghere@beehaw.org · 5 months ago

            My bad. But that phrasing is super stupid, honestly. What company would want to promise to detect new child sexual abuse material? It's impossible to avoid false negatives.
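
            To illustrate the point: detecting new material can't be a hash lookup, it has to be some classifier that outputs a suspicion score, and wherever the cut-off is set you trade false negatives against false positives. A minimal sketch with a made-up scorer and made-up scores (purely hypothetical, not any vendor's actual system):

            ```python
            def flag_as_new_material(score: float, threshold: float) -> bool:
                """Flag an upload when a (hypothetical) model's suspicion score exceeds the threshold."""
                return score >= threshold

            scores = [0.15, 0.55, 0.92]  # invented model outputs for three uploads

            # Strict threshold: fewer false positives, but more missed detections (false negatives).
            print([flag_as_new_material(s, 0.9) for s in scores])

            # Loose threshold: fewer misses, but more innocent uploads get flagged.
            print([flag_as_new_material(s, 0.5) for s in scores])
            ```

            No threshold makes both error rates zero, which is why promising to catch all new material is not something a provider can honestly sign up for.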