Visual artists fight back against AI companies for repurposing their work

Three visual artists are suing artificial intelligence image-generators to protect their copyrights and careers.

  • FireTower@lemmy.world

    It seems pretty obvious to me that the artists should win this, assuming their images weren’t poorly licensed. Training AI is absolutely a commercial use.

    These companies adopted a ‘run fast and don’t look back’ legal strategy and now they’re going to enter the ‘find out’ phase.

    • kava@lemmy.world

      I don’t think it’s obvious at all. Both legally speaking (there is no consensus around this issue) and ethically speaking, because AIs fundamentally function the same way humans do.

      We take in input, some of which is bound to be copyrighted work, and we mesh them all together to create new things. This is essentially how art works. Someone’s “style” cannot be copyrighted, only specific works.

      The government recently announced an inquiry into the copyright questions surrounding AI. They are going to make recommendations to Congress about potential legislation, if any, that they think would be a good idea. I believe there’s a period of public comment until mid-October, if anyone wants to write a comment.

      • MentalEdge

        I really hope you’re wrong.

        And I think there’s a difference. Humans can draw stuff, build structures, and make tools in a way that improves upon the previous iteration. Each artist adds something, or combines things in a way that makes for something greater.

        AI art literally cannot do anything without human training data. It can’t take a previous result, be inspired by it, and make it better. There has to be actual human input; it can’t train itself on its own data the way humans do. It absolutely does not “work the same way”.

        AI art has NEVER made me feel like it’s greater than the sum of its parts. Unlike art made by humans, which makes me feel that way all the time.

        If a human does art without input, you still get “something”.

        With an AI, you don’t have that. Without the training data, you have nothing.

        • kava@lemmy.world

          If a human does art without input, you still get “something”.

          Ok, take a human being that has never had any other interactions with any other human and has never consumed any content created by humans. Give him finger paint and have him paint something on a blank canvas. I think it wouldn’t look any different than a chimpanzee doing finger paint.

          it can’t train itself on its own data

          In theory, it could. You would just need a way to quantify the “fitness” of a drawing. Current models do this by comparing to actual content, but you don’t need actual content in every circumstance. For example, look at AlphaZero, DeepMind’s chess AI from a few years back. All the AI knew was the rules of the game. It did not have access to any database of games. No data. The way it learned is it played millions of games against itself.

          It trained itself on its own data. And that AI, at the time, beat the leading chess engine, which had access to databases and other pre-built algorithms.
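
          The self-play loop described above can be sketched in a few lines. This is a hypothetical toy (tabular learning on one-pile Nim), nothing like AlphaZero’s actual neural-network setup, but it shows the same principle: the learner starts from the rules alone and improves using only games it generates itself.

```python
import random

def train_self_play(pile_size=10, episodes=40000, eps=0.3):
    """Tabular self-play on one-pile Nim (take 1-3 stones; whoever takes
    the last stone wins). The learner knows only the rules; its sole
    'training data' is the games it plays against itself."""
    Q = {(p, a): 0.0 for p in range(1, pile_size + 1) for a in (1, 2, 3) if a <= p}
    N = {k: 0 for k in Q}  # visit counts, for incremental averaging
    for _ in range(episodes):
        pile, history = pile_size, []
        while pile > 0:
            acts = [a for a in (1, 2, 3) if a <= pile]
            if random.random() < eps:
                a = random.choice(acts)  # explore a random move
            else:
                a = max(acts, key=lambda x: Q[(pile, x)])  # play the best known move
            history.append((pile, a))
            pile -= a
        # The player who moved last won: +1 for their moves, -1 for the loser's.
        reward = 1.0
        for state_action in reversed(history):
            N[state_action] += 1
            Q[state_action] += (reward - Q[state_action]) / N[state_action]
            reward = -reward
    return Q

def best_move(Q, pile):
    """The learner's preferred move from a given pile size."""
    return max((a for a in (1, 2, 3) if a <= pile), key=lambda a: Q[(pile, a)])
```

          After training, the learner reliably finds winning moves (for instance, taking the whole pile when 3 or fewer stones remain) despite never having seen a single outside game.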

          With art this gets trickier because art is subjective. You can quantify clearly whether you won or lost a chess game. How do you quantify if something is a good piece of art? If we can somehow quantify this, you could in theory create AI that generates art with no input.

          We’re in the infancy stages of this technology.

          Humans can draw stuff, build structures, and make tools, in a way that improves upon the previous iteration. Each artists adds something, or combines things in a way that makes for something greater.

          AI can do all of the same. I know it’s scary but it’s here and it isn’t going away. AI-designed systems are becoming more and more commonplace: solar panels, medical devices, computer hardware, aircraft wings, potential drug compounds, etc. Certain things AI can be really good at, and designing something and testing it in a million different simulations is something that AI can do a lot better than humans.

          AI art has NEVER made me feel like it’s greater than the sum of its parts

          What is art? If I make something that means nothing and you find a meaning in it, is it meaningful? AI is a cold calculated mathematical model that produces meaningless output. But humans love finding patterns in noise.

          Trust me, you will eventually see some sort of AI art that makes an impact on you. Math doesn’t lie. If statistics can turn art into data and find the hidden patterns that make something impactful, then it can recreate it in a way that is impactful.

          • MentalEdge

            The randomness used by current machine learning to train neural networks will never be able to do what a human does when they are being creative.

            I have no doubt AI art will be able to “say” things. But it won’t be saying things that haven’t already been said.

            And yes, AI can brute force its way to solutions in ways humans cannot beat. But that only works when there is a solution. So AI works with science, engineering, chess.

            Art does not have a “solution”. Every answer is valid. Humans are able to create good art, because they understand the question. “What is it to be human?” “Why are we here?” “What is adulthood?” “Why do I feel this?” “What is innocence?”

            AI does not understand anything. All it is doing is mimicking art already created by humans, and coincidentally sometimes getting it right.

            • kava@lemmy.world

              AI can brute force its way to solutions in ways humans cannot beat

              It’s not brute force. It seems like brute force because trying something millions of times seems impossible to us. But they identify patterns and then use those patterns to create output. It’s learning. It’s why we call it “machine learning”. The mechanics are different than how humans do it, but fundamentally it’s the same.

              The only reason you know what a tree looks like is because you’ve seen a million different trees. Trees in person, trees in movies, trees in cartoons, trees in drawings, etc. Your brain has taken all of these different trees and merged them together to create an “ideal” of the tree. Sort of like Plato’s “world of forms”.

              AI can recognize a tree through the same process. It views millions of trees and creates an “ideal” tree. It can then compare any image it sees against this ideal and determine the probability that it is or isn’t a tree. Combine this with something that randomly pumps out images and you can now compare these generated images with the internal model of a tree and all of a sudden you have an AI that can create novel images of trees.

              It’s fundamentally the same thing we do. It’s creating pictures of trees that didn’t exist before. The only difference is it happens in a statistical model and it happens at a larger and faster scale than humans are capable of.
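
              That generate-and-compare loop can be sketched as a toy. This is hypothetical illustration code, not how real image generators work (they use diffusion models or GANs, not rejection sampling), but it shows the idea of an averaged “ideal” filtering random candidates:

```python
import math
import random

def learn_ideal(examples):
    """Merge many example 'trees' (here: tiny feature vectors) into one
    averaged prototype -- the internal 'ideal' tree."""
    n = len(examples)
    return [sum(e[i] for e in examples) / n for i in range(len(examples[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_novel(ideal, threshold, tries=100000):
    """Propose random candidates and keep the first one the ideal accepts:
    crude rejection sampling against the learned prototype."""
    for _ in range(tries):
        candidate = [random.uniform(0, 1) for _ in ideal]
        if distance(candidate, ideal) < threshold:
            return candidate
    return None
```

              The accepted candidate resembles the training examples without being any one of them, which is the sense in which the output is novel.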

              This is why the question of AI models having to pay copyright for content it parses is not obvious at all.

              Art does not have a “solution”. Every answer is valid.

              If every answer is valid then you would be sitting here saying that AI art is just as valid as anything else.

              • MentalEdge

                A human can understand what a tree is, after seeing one, maybe two. If you show a human a new species, they can effortlessly fit that into their understanding of reality.

                An AI machine learning network needs to be shown thousands. Often hundreds of thousands. And the way it “learns” is nothing like what humans do. We do not shuffle our neurons around until we get it right. Given good data, we just get it right.

                And even then, you can still make images that aren’t trees which will fool an ML model into saying they are. They work nothing like humans. The similarities are superficial, at best. The resulting model can be compared to a brain, but it is orders of magnitude more static.

                And no, AI art is not a valid answer. To create a valid answer, you must understand the question.

                4 is the correct answer to 2+2. But there is a difference between knowing that and understanding the math for WHY it’s correct. AI can create correct answers to the question that is art, but not valid ones. For that, you need the human artist.

                • kava@lemmy.world

                  For that, you need the human artist

                  Art isn’t defined by the creator, but the observer. I can run a line through a piece of paper and call it art as a joke, but perhaps someone sees some form of message in the line and it impacts them. The meaningless becomes meaningful only because it is viewed through a being that can assign meaning to nonsense.

                  And even then, you can still make images that aren’t trees which will fool an ML model into saying they are

                  You can make an image that isn’t a tree that will fool humans into saying it is. So what?

                  They work nothing like humans. The similarities are superficial, at best.

                  We do not shuffle our neurons around until we get it right.

                  Please explain to me how these two things are different.

                  a) A human goes through and studies the more than 20,000 works of Andy Warhol. He is inspired and creates various different artworks in a similar style.

                  b) An AI goes through and parses the same 20,000 works of Andy Warhol. It uses a statistical algorithm to pump out various different artworks in a similar style.

                  What is the difference? Because a) isn’t copyright infringement. You are allowed to take a style and copy it. Only specific works can be copyrighted.

                  You are trying to claim the AI and human learning is different - and it IS different, because we are biological and machines are statistical models. You can find a million similarities and a million differences. But specifically, in the context of using copyrighted works to make novel content - what is the difference? To me, it looks identical.

                  1. Take in data
                  2. Use data to create new things

                  Why should a) be allowed and b) not be allowed?

        • Grimy@lemmy.world

          I think it’s a mistake to see the software as an independent entity. It’s a tool just like the paintbrush or Photoshop. So yes, there isn’t any AI art without the human, but that’s true for every single art form.

          The best art is a mix of different techniques and skills. Many digital artists are implementing AI into their workflow and there is definitely depth to what they are making.

        • FooBarrington@lemmy.world

          It can’t take a previous result, be inspired by it, and make it better.

          Why do you think so? AI art can take an image and change it in creative ways, just as humans can.

          There has to be actual human input, it can’t train itself on its own data, the way humans do.

          Only an incredibly small number of humans ever “trained themselves” without relying on previous human data. Anyone who has ever seen any piece of artwork wouldn’t qualify.

          AI art has NEVER made me feel like it’s greater than the sum of its parts.

          Art is subjective. I’ve seen great and interesting AI art, and I’ve seen boring and uninspired human art.

          If a human does art without input, you still get “something”.

          Really? Do you have an example of someone who is deaf, blind, mute, and can’t feel touch, who became an artist? Because all of those are inputs all humans have since birth.

          • MentalEdge

            I’m talking from a perspective of understanding how machine learning networks work.

            They cannot make something new. By nature, they can only mimic.

            The randomness they use to combine different pieces of work, is not creativeness. It’s brute force. It’s doing the math a million times until it looks right.

            Humans fundamentally do not work that way. When an engineer sees a design, and thinks “I can improve that” they are doing so because they understand the mechanism.

            Modern AIs do not understand anything. They brute force their way to valid output, and in some cases, like with code, science, or an engineering problem, there might be one single best solution, which an AI can find faster than a human.

            But art, DOES NOT HAVE a single correct “solution”.

            • lunarul@lemmy.world

              AI is supposed to work with human input. AI is a tool for the artist, not a replacement of the artist. The human artist is the one calling the shots, deciding when the final result is good or when it needs improvement.

              • MentalEdge

                Absolutely.

                Yet a lot of people are sharpening their knives in preparation to cut the artist out of the process.

                • lunarul@lemmy.world

                  And the difference in results is clearly visible. There are people who replaced artists with Photoshop, there are people who replaced artists with AI, and each new tool will further empower people to try things on their own. If those results are good enough for them, then they probably wouldn’t have paid for a good artist anyway.

            • FooBarrington@lemmy.world

              They cannot make something new. By nature, they can only mimic.

              Explain it to me from a mathematical point of view. How can I know based on the structure of GANs or Transformers that they, by nature, can only mimic? Please explain it mathematically, since you’re referring to their nature.

              The randomness they use to combine different pieces of work, is not creativeness. It’s brute force. It’s doing the math a million times until it looks right.

              This betrays a lack of understanding on your part. What is the difference between creativeness and brute force? The rate of acceptable navigations in the latent space. Transformers and GANs do not brute force in any capacity. Where do you get the idea that they generate millions of variations until they get it right?

              Humans fundamentally do not work that way. When an engineer sees a design, and thinks “I can improve that” they are doing so because they understand the mechanism.

              Define understanding for me. AI can, for example, automatically optimise algorithms (it’s a fascinating field, finding a more efficient implementation without changing results). This should be impossible if you’re correct. Why does it work? Why can they optimise without understanding, and why can’t this be used in other areas?

              Modern AIs do not understand anything. They brute force their way to valid output, and in some cases, like with code, science, or an engineering problem, there might be one single best solution, which an AI can find faster than a human.

              Again, define understanding. They provably build internal models depending on the task you’re training. How is that not a form of understanding?

              But art, DOES NOT HAVE a single correct “solution”.

              Then it seems great that an AI doesn’t always give the same result for the same input, no?

              • MentalEdge

                The brute forcing doesn’t happen when you generate the art. It happens when you train the model.

                You fiddle with the numbers until it produces only results that “look right”. That doesn’t make it not brute forcing.

                Human inspiration and creativity meanwhile is an intuitive process. And we understand why 2+2 is four.

                Writing a piece of code that takes two values and sums them, does not mean the code comprehends math.

                In the same way, training a model to generate sound or visuals, does not mean it understands the human experience.

                As for current models generating different result for the same prompt… no. They don’t. They generate variations, but the same prompt won’t get you Dalí in one iteration, then Monet in the next.

                • FooBarrington@lemmy.world

                  The brute forcing doesn’t happen when you generate the art. It happens when you train the model.

                  So it’s the same as a human - they also generate art until they get something that “looks right” during training. How is it different when an AI does it?

                  But you’ll have to explain where this brute forcing happens. What are the inputs and outputs of the process? Because the NN doesn’t generate all possible outputs until the correct one is found, which is what brute forcing is. Maybe you could argue that GANs are kinda doing this, but it’s still a very much directed process, which is entirely different from real brute forcing.
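
                  The directed-vs-brute-force distinction is easy to show with a toy. This is hypothetical illustration code, not any production training loop: gradient descent never enumerates candidate values; each step follows the slope of the error toward a better one.

```python
def directed_fit(xs, ys, steps=2000, lr=0.01):
    """Fit y = w*x by gradient descent. Nothing is enumerated: every
    update nudges w in the direction that reduces the error, which is
    what makes training directed search rather than brute force."""
    w = 0.0
    for _ in range(steps):
        # Slope of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w
```

                  A brute-force version would instead have to test every possible w to some precision; the gradient version reaches the answer in a few thousand cheap, directed steps.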

                  Human inspiration and creativity meanwhile is an intuitive process. And we understand why 2+2 is four.

                  You’re using more words without defining them.

                  Writing a piece of code that takes two values and sums them, does not mean the code comprehends math.

                  But we’re not writing code to generate art. We’re writing code to train a model to generate art. As I’ve already mentioned, NNs provably can build an accurate model of whatever you’re training - how is this not a form of comprehension?

                  In the same way, training a model to generate sound or visuals, does not mean it understands the human experience.

                  Please prove you need to understand the human experience to be able to generate meaningful art.

                  As for current models generating different result for the same prompt… no. They don’t. They generate variations, but the same prompt won’t get you Dalí in one iteration, then Monet in the next.

                  Of course they can, depending on your prompt and temperature.

            • BURN@lemmy.world

              I’ve tried to explain this to a lot of people on here and they just don’t seem to get it. Art fundamentally relies on human experience for meaning. AI does not replicate that.

              Seems like people on this platform are very engineering focused, and many aren’t artists themselves and see it as a pure commodity instead of a reflection of the artist.

              • Peanut

                artist here. nobody is thinking about AI as a tool being used… by artists.

                the pareidolia aspect of diffusion specifically does a great job of mimicking the way artists conceptualize an image. it’s not 1 to 1, but to say the models are stealing from the data they were trained on is definitely as silly as claiming an artist was stealing every time they admired or incorporated aspects of other people’s art into their own.

                i’m also all for opensource and publicly available models. if independent artists lose that tool, they will be competing with large corps who can buy all the data they need, and hold exclusive proprietary models while independent artists get nothing.

                ultimately this tech is leading to a holo-deck style of creation, where you can define your vision through direction and language rather than through hands that you’ve already destroyed practicing linework for decades. or through hunting down the right place for a photograph. or having a beach not wash your sandcastle away with the tide.

                there are many aspects to art and creation. A.I. is one more avenue, and it’s a good one. as long as we don’t make it impossible to use without subscribing to the landlords of art tools.

                • MentalEdge

                  I absolutely approve of AI tools as a way for artists to empower themselves. Because there is human input.

                  The “best” AI art I’ve seen is the type posted by people who were already drawing before, and who are using it as a tool to realize their vision. But that’s the crux of the issue: in these pieces, a human conceived them; the tools used to realize them don’t matter.

                  But a lot of people are presenting AI as something that replaces the whole person of the artist, not a new brush for them to wield in creatively intelligent ways.

              • MentalEdge

                It’s so weird to see artistic expression reduced to an engineering problem.

                Yeah, you can generate images and sounds, but claiming that’s art is like claiming a thousand monkeys could write the works of Shakespeare. Yes, it’s possible, but what enables it is randomness, not creativity.

                And in that process, you created a lot more of something else, aside from the works of Shakespeare.

                • FooBarrington@lemmy.world

                  I don’t need to know the background of a piece of art to know it’s art. I’ve seen AI generated pieces that touch me, and I’ve seen “real art” that I do not consider art. How can this be if you’re right?

                  The obvious answer is that art isn’t defined by who created it or how it was created, but instead it’s defined by the interpretation of whoever views it. An artist using generative AI to make something great is no less art than if they used a brush and canvas, and a non-artist doing the same doesn’t suddenly make it “not art”.

    • GFGJewbacca@lemm.ee

      I would like to agree with you, but I have doubts this lawsuit will stick because of how prominent corporations are in US law.

      • joe@lemmy.world

        There’s nothing in copyright law that covers this scenario, so anyone that says it’s “absolutely” one way or the other is telling you an opinion, not a fact.

        • Lmaydev@programming.dev

          It’s like suing an artist because they learnt to paint based on your paintings. But also not, because the company has acquired your art and fed it into an application.

          It’s a very tricky area.

        • dhork@lemmy.world

          I’m not so sure that’s true; there have been several recent rulings that all reinforce that copyright can only be asserted on the output of actual humans. This even goes back to before the AI stuff, when PETA sued over those monkey selfies. It is quite clear that the output of an AI does not, itself, qualify for copyright protection, because it is not human.

          Maybe if a human edits or works with the AI output, the end result might qualify. But then you also have to ask about what went into the AI composition. Here is where it gets less certain. The case of the Monkey Selfie is much clearer: the monkey stole the camera and took its own picture, and that creation was not derived from any other copyrighted work. But these AIs are trained on a wide range of copyrighted works, and very few of those works were licensed for that purpose. I doubt that sucking everything into AI will be seen as a fair use of those works. This is different than a search engine, which ultimately steers the user toward the original work. This uses the original work to create something new (and inherently uncopyrightable, since a bot did it), and because of the way AI works it is impossible to credit the original sources.

          Congress may have to step in and clarify this, but is probably not interested unless they can use it to harass Hunter Biden.

          • joe@lemmy.world

            I was under the impression we were talking about using copyright to prevent a work from being used to train a generative model. There’s nothing in copyright that says anything about training anything. I’m not even convinced there should be.

            • dhork@lemmy.world

              Well, of course there’s nothing that can be used to prevent training an AI, just like there’s nothing preventing monkeys from stealing cameras and taking pictures. It’s what happens next that matters.

              The Internet Archive didn’t get sued over copyright, even though it had electronic copies of lots and lots of copyrighted works (and even let people “check out” copies), until they changed their distribution model to allow unlimited lending. Nothing about how they gathered their works changed, it was the change in distribution that got them sued.

              • joe@lemmy.world

                The article is literally about someone suing to prevent their art from being used for training. That’s the topic at hand.

                Are you confused, or are you trying to shoehorn a different but related discussion into this one?

                • dhork@lemmy.world

                  The suit alleges that the AI image-generators violate the rights of millions of artists by ingesting huge troves of digital images and then producing derivative works that compete against the originals.

                  The artists say they are not inherently opposed to AI, but they don’t want to be exploited by it. They are seeking class-action damages and a court order to stop companies from exploiting artistic works without consent.

                  It says right in the article that they’re suing over the training and the commercial use of the output. Their lawyer obviously felt that it was essential to include both parts of that, and I think it’s because simply using a copyrighted work to train AI may not be infringing, but using it and then selling the output is.

                  I just don’t think you can separate how the AI is trained from what the company intends to do with the trained AI. If they intend to sell their output, then I don’t think that will be allowed in current copyright law.

            • dhork@lemmy.world

              I think the article is very good and well-written, and the author is probably more knowledgeable on this topic than I am, but I think it’s a glaring omission that they never mention the idea that copyright can only be asserted on the output of humans. Even the latest guidance from the Copyright Office suggests that the raw output of an AI doesn’t qualify under current law, and in order for AI to be copyrightable it needs to have a human apply some creative endeavor to it.

              https://arstechnica.com/tech-policy/2023/03/us-issues-guidance-on-copyrighting-ai-assisted-artwork/

The author suggests that a ruling that an AI can’t synthesize images from multiple sources might affect human artists who use multiple sources as inspiration. But those humans can look at 5 different paintings, create a 6th which is inspired by (but not identical to) the other 5, and get copyright protection for that, to protect their creative efforts. AI cannot, under current law. So when an AI combines five different paintings, who owns the copyright on the result? The Monkey Selfie was ruled to be in the public domain, but AI output can’t be treated the same way; it seems absurd that you could put art through an AI “copyright wash” and end up with something free of copyright.

(And it looks like in the latest guidance from the Copyright Office linked above, they say that future applications will require the author to disclose whether they used AI to generate the content, and that any failure to accurately reflect the role of AI in copyrighted works could result in “losing the benefits of the registration.”)

              Even after reading that well-written article, I stand by my assertion that current copyright law simply doesn’t protect the output of non-humans, and Congress will ultimately have to step in and define parameters for it. Until that happens, artists who can prove their work was used to train an AI have a legitimate case that they are being infringed upon every time an AI makes output that is similar.

              • Even_Adder@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                3
                ·
                1 year ago

It’s important to remember that the Copyright Office guidance isn’t law. It reflects only the office’s interpretation based on its experience, and it isn’t binding on the courts or other parties. Guidance from the office is not a substitute for legal advice, and it does not create any rights or obligations for anyone. They are the lowest rung on the ladder for deciding what the law means.

> The author suggests that a ruling that an AI can’t synthesize images from multiple sources might affect human artists who use multiple sources as inspiration. But those humans can look at 5 different paintings, create a 6th which is inspired by (but not identical to) the other 5, and get copyright protection for that, to protect their creative efforts. AI cannot, under current law. So when an AI combines five different paintings, who owns the copyright on it? The Monkey Selfie was ruled to be in the Public Domain. But AI can’t be treated similarly; It seems absurd that you can put art through an AI “copyright wash” and end up with something free of copyright.

                You said it yourself in the first paragraph, humans using machines have always been the copyright holders of any qualifying work they create. AI works are human works. AI can’t be authors or hold copyright.

                • dhork@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  1 year ago

No, the Copyright Office’s statements are not law, but the office is the one that executes the law and processes copyright registrations, so its statements are hardly meaningless. They won’t change unless litigation forces a change, Congress changes the law, or perhaps new leadership is appointed with a different interpretation. Their guidance is all that ordinary copyright registrants can act on without incurring the expense of a lawsuit (or buying a Senator).

    • Random_Character_A@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      5
      ·
      1 year ago

      This is a tough one, because they are not directly making money from the copyrighted material.

Isn’t this a bit the same as using short samples of somebody’s song in your own song, or getting inspired by somebody’s artwork and creating something similar?

      • FireTower@lemmy.world
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        2
        ·
        1 year ago

If you’re sampling music, you ought to be compensating the licence holder, unless the sample is public domain or your work falls under a fair use exception.

        • lunarul@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          2
          ·
          edit-2
          1 year ago

Sampling music literally places parts of that music in the final product. Gen AI does not place pieces of other people’s art in the final image; in fact, it doesn’t store any image data at all. Using an image in the training data is akin to an artist including that image on their moodboard, except the AI’s moodboard has far more images, and the odds of the output being too similar to any single image are lower than when a human does it.

        • joe@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          3
          ·
          1 year ago

Are you speaking legally or morally when you say someone “ought” to do something?

  • AutoTL;DR@lemmings.worldB
    link
    fedilink
    English
    arrow-up
    5
    arrow-down
    2
    ·
    1 year ago

    This is the best summary I could come up with:


    NEW YORK (AP) — Kelly McKernan’s acrylic and watercolor paintings are bold and vibrant, often featuring feminine figures rendered in bright greens, blues, pinks and purples.

    The Nashville-based McKernan, 37, who creates both fine art and digital illustrations, soon learned that companies were feeding artwork into AI systems used to “train” image-generators — something that once sounded like a weird sci-fi movie but now threatens the livelihood of artists worldwide.

    The lawsuit may serve as an early bellwether of how hard it will be for all kinds of creators — Hollywood actors, novelists, musicians and computer programmers — to stop AI developers from profiting off what humans have made.

    The case was filed in January by McKernan and fellow artists Karla Ortiz and Sarah Andersen, on behalf of others like them, against Stability AI, the London-based maker of text-to-image generator Stable Diffusion.

    The teacher, Christoph Schuhmann, said he has no regrets about the nonprofit project, which is not a defendant in the lawsuit and has largely escaped copyright challenges by creating an index of links to publicly accessible images without storing them.

    The idea that such a development is inevitable — that it is, essentially, the future — was at the heart of a U.S. Senate hearing in July in which Ben Brooks, head of public policy for Stability AI, acknowledged that artists are not paid for their images.


    The original article contains 1,215 words, the summary contains 229 words. Saved 81%. I’m a bot and I’m open source!