A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission.
[…]
Zhao’s team also developed Glaze, a tool that allows artists to “mask” their own personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models to interpret the image as something different from what it actually shows.
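Conceptually, both tools rely on adversarial perturbations: small, bounded pixel changes chosen so that a model’s feature extractor reads the image as something other than what a human sees. A minimal sketch of that general idea (not the actual Glaze or Nightshade code; `feature_extractor`, `target_image` and the budget values below are placeholders) could look like this:

```python
import torch
import torch.nn.functional as F

def cloak(image, target_image, feature_extractor, eps=4/255, steps=50, step_size=1/255):
    """Nudge `image` (pixel values in [0, 1]) so its features move toward those of
    `target_image`, while keeping every pixel within +/- eps of the original."""
    target_feat = feature_extractor(target_image).detach()
    delta = torch.zeros_like(image, requires_grad=True)         # the "invisible" perturbation
    for _ in range(steps):
        loss = F.mse_loss(feature_extractor(image + delta), target_feat)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()               # step toward the target's features
            delta.clamp_(-eps, eps)                              # keep the change imperceptible
            delta.copy_((image + delta).clamp(0, 1) - image)     # stay a valid image
        delta.grad.zero_()
    return (image + delta).detach()
```

The small budget `eps` is what keeps the change invisible to people while still shifting what a model “sees” in the image.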

  • V H@lemmy.stad.social · 1 year ago

    Trying to detect poisoned images is the wrong approach. Include them in the training set and the training process itself will eventually correct for it.

    “I think if you build more robust features”

    Diffusion approaches etc. do not involve any conscious “building” of features in the first place. The features emerge from training the net to match images with text features correctly, and then “just” repeatedly predicting how to denoise an image to get closer to a match with the text features. If the input includes poisoned images, so what? They’re no different from, e.g., compression artifacts or noise.
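    To make that concrete, a single training step for a text-conditioned diffusion model looks roughly like the sketch below (simplified; `denoiser`, `text_encoder` and the fixed noise schedule are stand-ins, not any particular model’s real API):

    ```python
    import torch
    import torch.nn.functional as F

    def diffusion_training_step(denoiser, text_encoder, images, captions, optimizer):
        """One simplified denoising step: add noise to an image, train the net to predict it back."""
        text_emb = text_encoder(captions)                         # pair the images with their text features
        noise = torch.randn_like(images)                          # Gaussian noise to mix in
        t = torch.rand(images.shape[0], device=images.device)     # random noise level per image
        a = (1 - t).sqrt()[:, None, None, None]
        b = t.sqrt()[:, None, None, None]
        noisy = a * images + b * noise                            # blend image and noise
        pred = denoiser(noisy, t, text_emb)                       # predict the noise that was added
        loss = F.mse_loss(pred, noise)                            # poisoned pixels only nudge this target slightly
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```

    From the loop’s point of view, a poisoned image is just another (image, caption) pair whose pixels carry some extra structure, much like compression artifacts do.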

    These tools all target models that were trained without any tool-processed images in the training set (or with at most some fine-tuning on them), so all they really show is that models which haven’t seen many images run through that particular tool will struggle with them.

    But in reality, the massive problem with this is that we’d expect any such tool that becomes widespread to be self-defeating: it becomes a source of images that work their way into the models at sufficient volume that the models will learn them. In doing so it will make the models more robust against noise and artifacts, and so make the job harder for the next generation of these tools.

    In other words, these tools basically act like a manual adversarial-training source, and in the long run their main benefit will be that they prod and probe at the models’ failure modes and help remove them.
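    In code terms, that just means a slice of every future training batch is already “poisoned” while keeping its true captions, so the ordinary loss does the adversarial training for free. A hand-wavy sketch (the `cloak_fn` callable and the `mix` ratio are invented placeholders):

    ```python
    def adversarial_epoch(train_step, loader, cloak_fn, mix=0.1):
        """`loader` yields (images, captions) batches; `cloak_fn` stands in for whatever
        perturbation tool is in circulation, and `mix` for how much of the data it has
        reached. Both halves keep their true captions, so minimizing the usual loss
        pushes the net to disregard the perturbation and learn the style anyway."""
        for images, captions in loader:
            n = max(1, int(mix * len(images)))
            images = images.clone()
            images[:n] = cloak_fn(images[:n])   # part of the batch arrives already perturbed
            train_step(images, captions)        # ordinary update, no special handling needed
    ```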

    • RubberElectrons@lemmy.world · 1 year ago

      Just to start with: I’m not very experienced with neural networks at all, beyond messing with OpenCV for my graduation project.

      Anyway, the fact that these countermeasures expose “failure modes” in the training isn’t a great reason to stop doing this: scammers come up with a new technique, and we collectively respond with our own countermeasures.

      If the network feeds back on itself, then cool! It has developed its own style, which is fine. The goal is to stop people from outright copying existing artists’ styles.

      • V H@lemmy.stad.social · 1 year ago

        It doesn’t need to “develop its own style”. That’s the point. The more examples of these adversarial images there are in the training set, the better it will learn to disregard the adversarial modifications and still learn the same style. As much as you might want to stop it from learning a given style, as long as the style can be seen, it can be copied, both by humans and by AIs.