In addition to the possible business threat, forcing OpenAI to identify its use of copyrighted data would expose the company to potential lawsuits. Generative AI systems like ChatGPT and DALL-E are trained using large amounts of data scraped from the web, much of it copyright-protected. When companies disclose these data sources, it leaves them open to legal challenges. OpenAI rival Stability AI, for example, is currently being sued by stock image maker Getty Images for using its copyrighted data to train its AI image generator.

Aaaaaand there it is. They don’t want to admit how much copyrighted material they’ve been using.

    • Big P@feddit.uk

      You wouldn’t be saying that if it was your content that was being ripped off

        • Kichae@kbin.social

          That’s, uh, exactly how they work? They need large amounts of training data, and that data isn’t being generated in house.

          It’s being stolen, scraped from the internet.

          • Chozo@kbin.social

            If it was publicly available on the internet, then it wasn’t stolen. OpenAI hasn’t been hacking into restricted content that isn’t meant for public consumption. You’re allowed to download anything you see online (technically, if you’re seeing it, you’ve already downloaded it). And you’re allowed to study anything you see online. Even for personal use. Even for profit. Taking inspiration from something isn’t a crime. That’s allowed. If it wasn’t, the internet wouldn’t function at a fundamental level.

            • HeartyBeast@kbin.social

              I don’t think you understand how copyright works. Something appearing on the internet doesn’t give you automatic full commercial rights to it.

              • Chozo@kbin.social

                An AI has just as much right to web scrape as you do. It’s not a violation of copyright to do so.

                  • Chozo@kbin.social

                    It’s the same thing. Just because you have personal opinions on the matter, however valid they may be, doesn’t make it any less the exact same thing.

                    That’s like saying that McDonald’s Super Sized fries aren’t fries because they’re commercially large. No, it’s still fries, there’s just a lot of fries being processed in one serving. And yet, despite the arguments and outcries of many, still legal.

                    Exact same thing with LLMs.

              • nogooduser@lemmy.world

                But Google and Bing do that too. They scrape as much of the internet as they can reach so that they can sell ads (with a few steps in between).

        • Niello@kbin.social

          If you read copyrighted material without paying and then forget most of it a month later, keeping only a vague recollection of what you read, the fact is you still accessed and used the copyrighted material without paying.

          Now let’s go a step further: you write something inspired by that copyrighted material, and what you wrote becomes successful to some degree, with eyes on it, but you refuse to admit that’s where you got the idea because you only have a vague recollection. The fact is you got the idea from the copyrighted material.

            • nicetriangle@kbin.social

              Except that nobody has a superhuman ability to create endless amounts of content almost instantly based on said work.

              People throw this “artists/writers use inspiration to create X” argument all the time and it just totally ignores the fact that we’re not talking about some person spending 10s/100s/1000s of hours of their time to copy someone’s working style.

              It’s a piece of software churning it out in seconds.

              • exscape@kbin.social

                Do generative AI models typically focus on ONE person’s style? Don’t they mix together influences from thousands of artists?

                FWIW this is not an area I read up on, and so I don’t have a strong opinion one way or the other.

                • volkrom@kbin.social

                  For the image-generating ones like Midjourney, you can ask for an artist’s style by putting their name in the prompt.
                  It probably works the same with OpenAI’s DALL-E.

              • Tarte@kbin.social

                If I created a very slow AI that took 10 or 100 hours for each response, would that make it any better in your opinion? I don’t think the calculation speed of a piece of software is a good basis for legislation.

                If analyzing a piece of art and replicating parts of it without permission is illegal, then it should be illegal regardless of the tools used. However, that would make every single piece of art illegal, so it’s not an option. If we make only the digital tools illegal then the question still remains where to draw the line. How much inefficiency is required for a tool to still be considered legal?

                Is Adobe Photoshop generative auto-fill legal?
                Is translating with deepl.com or the modern Google Translate equivalent legal?
                Are voice activated commands on your mobile phone legal (Cortana, Siri, Google)?

                All of these tools were trained in similar ways. All of these take away jobs (read: make work/life more efficient).

                It’s hard to draw a line and I don’t have any solution to offer.

            • Niello@kbin.social

              Except for the “illegally obtaining the copyrighted material” part, which is the main point. And definitely not on this scale.

            • BraveSirZaphod@kbin.social

              I think there can be said to be a meaningful difference due to the sheer scale and speed at which AIs can do this though.

              Ultimately, I think it’s less of a direct legal question and more a societal question of whether or not we think this is fair or not. I’d expect it to ultimately be resolved by legislative bodies, not the courts.

          • Chozo@kbin.social

            That’s still not how LLMs work. I can’t believe how many of the people upset with them don’t understand this.

            The LLM has no idea what it’s reading. None. It’s just playing a word-association game, but at a scale we can’t comprehend. It knows which arrangements of words go together, but it’s not reproducing anything with any actual intent. To get it to output anything that actually resembles a single piece of material it was trained on would require incredibly specific prompts, and at that point it’s not really the LLM’s creation anymore.

            There’s plenty of reasons to be against AI, such as the massive amounts of data scraping that happen to train models, the possible privacy invasions that come with that, academic cheating, etc. But to be mad at AI for copyright infringement only shows a lack of understanding of what these systems actually do.

            • magic_lobster_party@kbin.social

              The training process of LLMs is to copy the source material word for word. It’s instructed to plagiarize during the training process. The copyrighted material is possibly, in one way or another, embedded into the model itself.

              In machine learning, there’s always this concern whether the model is actually learning patterns, or if it’s just memorizing the training data. Same applies to LLMs.

              Can LLMs recite entire pieces of work? Who knows?

              Does it count as copyright infringement if it does so? Possibly.

              • ReCursing@kbin.social

                The training process of LLMs is to copy the source material word for word. It’s instructed to plagiarize during the training process. The copyrighted material is possibly, in one way or another, embedded into the model itself.

                No it isn’t. That’s not how neural networks work, like, at all.

                In machine learning, there’s always this concern whether the model is actually learning patterns, or if it’s just memorizing the training data. Same applies to LLMs.

                It’s learning patterns. It’s not memorising training data. Again, that’s not how the system works at all.

                Can LLMs recite entire pieces of work? Who knows?

                No. No they can’t.

                Does it count as copyright infringement if it does so? Possibly.

                That’d be one for the lawyers were it ever to come up, but it won’t.

                • magic_lobster_party@kbin.social

                  Here’s a basic description of how (a part of) LLMs work: https://huggingface.co/learn/nlp-course/chapter1/6

                  LLMs generate text word by word (or token by token, if you’re pedantic). This is why ChatGPT slowly builds its response word by word instead of giving you the entire answer at once.

                  The same applies during the training phase. The model gets a piece of text and the word it’s supposed to predict next. Then it’s tuned to improve its chances of predicting the right word based on the text it’s given.
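                  A rough sketch of that training step in plain Python (toy vocabulary, fixed context, and learning rate are all invented for illustration; a real LLM is a neural network, not a score table):

```python
import math

# Toy sketch of next-word-prediction training. The model assigns a score
# to every candidate next word for one fixed context; training nudges the
# scores so the word that actually followed in the source text becomes
# more likely. (Invented vocabulary and learning rate -- not a real LLM.)

vocab = ["the", "cat", "sat", "mat"]
scores = {w: 0.0 for w in vocab}  # the model's "weights" for this context

def probs():
    """Softmax: turn raw scores into a probability for each next word."""
    z = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / z for w, s in scores.items()}

target = "sat"  # the word that actually came next in the training text
for _ in range(100):  # gradient-descent steps on the cross-entropy loss
    p = probs()
    for w in vocab:
        grad = p[w] - (1.0 if w == target else 0.0)
        scores[w] -= 0.5 * grad  # learning rate 0.5

p = probs()
print(max(p, key=p.get))  # the model now predicts "sat" for this context
```

                  Nothing in this update step distinguishes “learning the pattern of the language” from “memorizing this exact text”; that distinction only emerges (or fails to) over billions of such updates.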

                  Ideally it’s supposed to make predictions by learning the patterns of the language. This is not always the case. Sometimes it can just memorize the answer instead of learning why (just like a child can memorize the multiplication table without understanding multiplication). This is formally known as overfitting, a machine learning 101 concept.

                  There are ways to mitigate overfitting, but there’s no silver-bullet solution. Sometimes the model can’t help but memorize the training data.
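                  The memorization failure mode is easy to reproduce with a toy stand-in (a context-counting model, not a neural network; the corpus and context length are invented for the example):

```python
from collections import defaultdict

# Toy next-token model: record which character follows each context of
# length k. With a long context and a tiny corpus, every context has a
# single recorded continuation, so "prediction" degenerates into replaying
# the training text verbatim -- pure memorization. (Stand-in illustration;
# real LLMs are neural networks, not count tables.)

def train(text, k):
    model = defaultdict(list)
    for i in range(len(text) - k):
        model[text[i:i + k]].append(text[i + k])
    return model

def generate(model, prompt, k, n):
    out = prompt
    for _ in range(n):
        followers = model.get(out[-k:])
        if not followers:
            break  # context never seen in training
        out += followers[0]  # greedy pick; here each context has one follower
    return out

corpus = "float Q_rsqrt(float number) // what the fuck?"
m = train(corpus, k=8)
print(generate(m, corpus[:8], k=8, n=100))  # regurgitates the corpus verbatim
```

                  Real models generalize far better than this, but when a passage is frequent or distinctive enough in the training data, the same verbatim replay can surface.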

                  When GitHub Copilot was new, people quickly figured out it could generate the fast inverse square root implementation from Quake. Word for word. Including the “what the fuck” comment. It had memorized it completely.

                  I’m not sure how much OpenAI has done to mitigate this issue. But it’s a thing that can happen. It’s not imaginary.

        • 00@kbin.social

          Exactly this. I hate copyright as much as the next person and find it funny when corporate meddling leads to them fighting each other, but both sides of this lead to shitty precedent. While copyright enforcement already is a shitty precedent, it’s something we can fight. AI companies laundering massive amounts of data without having to hold up copyright could possibly lead to them also not having to abide by privacy laws in the future with similar arguments. Correct me if I’m wrong.

            • 00@kbin.social

              Not entirely. I do think that if copyright holders have an argument against AI data scraping, privacy watchdogs will have one as well. But if copyright holders don’t have one, the position of privacy watchdogs will be weaker as well. Mind you, I’m not arguing about legality; I’m fully aware that those are two very different things from a legal perspective. I’m arguing from the perspective of policy narratives.

        • nicetriangle@kbin.social

          yeah I’ll just wait here patiently until they share their source code and all the contents of their black box of data

    • Ferk@kbin.social

      Note that what the EU is requesting is for OpenAI to disclose information. Nobody is saying (yet?) that they can’t use copyrighted material; what they are asking is for OpenAI to be transparent about its training methods and what material is being used.

      The problem seems to be that OpenAI doesn’t want to be “Open” anymore.

      In March, Open AI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

      Of course, openly disclosing what materials are being used for training might leave them open to lawsuits, but whether it’s legal to use copyrighted material for training is still up in the air, so it’s a risk either way, whether they disclose it or not.

      • 00@kbin.social

        and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

        Can’t have others copying stuff that you have painstakingly copied yourself.

      • nicetriangle@kbin.social

        They seem really intent on having their cake and eating it too.

        a) we’re not violating the letter or spirit of copyright laws

        b) disclosing our data could open us up to a ton of IP lawsuits

        hmm

    • PabloDiscobar@kbin.social

      Your first comment and it is to support OpenAI.

      edit:

      Haaaa, OpenAI, that famous hippie-led, non-profit firm.

      2015–2018: Non-profit beginnings

      2019: Transition from non-profit

      Funded by Musk and Amazon. The friends of humanity.

      Also:

      In March, Open AI co-founder Ilya Sutskever told The Verge that the company had been wrong to disclose so much in the past, and that keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

      Yeah, he closed the source code because he was afraid he would get copied by other people.

      • Chozo@kbin.social

        With replies like this, it’s no wonder he was hesitant to post in the first place.

        There’s no need for the hostility and finger pointing.

      • nicetriangle@kbin.social

        keeping information like training methods and data sources secret was necessary to stop its work being copied by rivals.

        I feel like the AI model is going to become self-aware before people like Sutskever do.

        • Oswald_Buzzbald@kbin.social

          Someone should just create an actual open-source LLM that can learn from and replicate the innovations of all the others, and then use these companies’ arguments about copyright against them.

    • teolan@kbin.social

      But OpenAI’s models are proprietary. I’m somewhat with Stable Diffusion since its models are open, but fuck OpenAI. OpenAI is not in favor of reduced copyright. They are in favor of not being negatively affected by copyright while still benefiting from it.