• KISSmyOSFeddit@lemmy.world · ↑70 ↓3 · 6 months ago

    I feel like this changes nothing. All they did was apply another algorithm to the copyrighted work before feeding it to the AI.
    That algorithm masks the original work so it looks different to the human eye, but the AI still gets what it needs from it, and it still needs the real picture, made by a human who didn’t get credit.

    • kakes@sh.itjust.works · ↑21 ↓1 · 6 months ago

      Right? Like, by this definition, the training algorithm is already “corrupting” the images by vectorizing them. This is just an overly roundabout way of saying “See? The image is cropped, so we good now!”

  • grue@lemmy.world · ↑58 ↓3 · 6 months ago

    That’s not how any of this works. Copyright is a legal concept, not a technological one. You can’t strip the copyright off something by deleting part of it; the result is still a derivative work.

    • Hawk@lemmy.dbzer0.com · ↑22 ↓1 · 6 months ago

      That’s not what the paper is about at all; it seems this is just shit journalism again.

      All the paper says about copyright is that this method is more secure because AI can sometimes spit out training examples verbatim.

      • bitfucker@programming.dev · ↑5 ↓1 · 6 months ago

        Why… why is it more secure? Does it mean AI training is actively abusing copyright law? And this is more secure because they can hide it better?

        • Hawk@lemmy.dbzer0.com · ↑2 · 6 months ago

          No, you have it the other way around. It means copyright owners can share “corrupted” versions of their works and the AI can still use it. Possible AI leaks won’t return the original work, since it was never used.

          Of course, I think this is only one aspect of why artists wouldn’t share their works, but that’s not the point the paper is trying to make. They’re just giving one way it could be useful.

  • Zaktor · ↑17 ↓1 · 6 months ago

    Did the image get copied onto their servers in a manner they were not provided a legal right to? Then they violated copyright. Whatever they do after that isn’t the copyright violation.

    And this is obvious because they could easily assemble a dataset with no copyright issues. They could also attempt to get permission from the copyright holders of many other images, but that would be hard and/or costly, and some would refuse. They want to use the extra images but don’t want to get permission, so they just take them, like anyone else who wants an image but doesn’t want to pay for it.

  • Em Adespoton@lemmy.ca · ↑16 ↓4 · 6 months ago

    All this really does is show how flawed the current concept of copyright is. At some point, a huge corpus of images owned by other people was still assembled to create a derivative work (the training corpus).

  • SSUPII · ↑11 ↓1 · 6 months ago

    Shows how useless those “image poisoning” services some artists boast about really are.

  • WalnutLum@lemmy.ml · ↑7 ↓1 · 6 months ago

    Doesn’t this just do what already gets done through convolution anyway?

    What’s the point of this?

  • PolandIsAStateOfMind@lemmy.ml · ↑9 ↓5 · 6 months ago

    I love how copyrights are such a constant hurdle for any new development and are turning everything they touch into shit.