I’ve recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model itself, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works in the legal sense, because they do not copy and modify the original works; they generate “new” content based on probabilities.
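To make “generate content based on probabilities” concrete, here’s a minimal sketch in plain Python. The probability table is invented for illustration (a real LLM computes these numbers on the fly with a neural network over the whole preceding context; nothing here comes from any actual model):

```python
import random

# Toy next-token table: hypothetical probabilities, made up for this
# sketch. A real LLM computes such a distribution with a neural
# network rather than looking up stored training documents.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.4, "dog": 0.3, "end": 0.3},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.5, "ran": 0.5},
}

def generate(start, max_steps, rng):
    """Build a sequence token by token, sampling each next token
    from a probability distribution instead of copying stored text."""
    out = [start]
    for _ in range(max_steps):
        probs = NEXT_TOKEN_PROBS.get(out[-1])
        if probs is None:  # no known continuation for this token
            break
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        out.append(rng.choices(tokens, weights=weights)[0])
    return out

print(" ".join(generate("the", 5, random.Random(42))))
```

Every run rolls the dice again, which is why the same prompt can yield different outputs: the model stores a distribution over continuations, not the works themselves.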

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely over-fitting a model on data to reproduce people’s likeness. I understand the hate for AI-generated shit (because it is shit). I really don’t understand where all this hate for using public data to build a statistical model that “learns” general patterns is coming from.

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don’t think AI will be able to directly replace people any time soon. It will probably improve productivity (with things like background removers, better autocomplete, etc.), which might eliminate some jobs, but that’s really a problem with capitalism, and productivity increases are generally considered good.

  • Hamartiogonic (+7/−3) · 4 months ago

    Here’s an analogy that can be used to test this idea.

    Let’s say I want to write a book, but I totally suck as an author and have no idea how to write a good one. To get some guidelines and inspiration, I go to the library and read a bunch of books. Then I take those ideas and smash them together to produce a mediocre book that any publisher would refuse. I could also buy those books instead of borrowing them, but the end result would be the same, except that it would cost me a lot more. Either way, this sort of learning-and-writing procedure is entirely legal, and people have been doing it for ages. Even if my book looks and feels a lot like LOTR, it won’t be easy to sue me unless I copy large parts of it word for word. Blatant plagiarism might result in a lawsuit, but that isn’t what the AI training data debate is about, now is it?

    However, if I pirated those books, that could get me into trouble. To prove it, someone would need to read my miserable book, find a suspicious passage, check my personal bookshelf and everything I have ever borrowed, etc. That way, it might be possible to show that I could not have come up with a specific line of text except by pirating some book. If an AI is trained on pirated data, that’s obviously something worth debating.

    • wewbull@feddit.uk (+8/−1) · 4 months ago

      You are equating training an LLM with a person learning, but an LLM is not a person. It is not given the same rights and privileges under the law. At best it is a computer program, and you can certainly infringe copyright by writing a program.

      • Specal@lemmy.world (+5) · 4 months ago

        It’s not “at best” a computer program. It is a computer program: one that computes the probability that its response should be X. The training data could be stolen, but its output isn’t.

      • Hamartiogonic (+2) · 4 months ago

        An LLM is not a legal entity, nor should it be. However, similar things happen in a human brain and in the network of an LLM, so the same laws could apply to some extent. Where do we draw the line? That’s a legal/political question we haven’t figured out yet, but following these developments is going to be interesting.

          • wewbull@feddit.uk (+3) · 3 months ago

            Agreed, it hasn’t been settled legally yet.

            I also agree that an LLM isn’t, and shouldn’t be, a legal entity. It is therefore something that can be owned, sold, and profited from.

            It is my opinion that the original author of a work should receive compensation when that work is used to make a profit, i.e. to build the LLM. I’d also say that the original intent of copyright law was to protect authors from others making money from their work without permission.

            Maybe current copyright law isn’t up to the job here, but profiting off the back of others’ creative works is not socially acceptable, in my opinion.

            • Hamartiogonic (+1/−1) · 3 months ago

              I think of an LLM as a tool, just like a drill or a hammer. If you buy or rent these tools, you pay the tool company. If you use the tools to build something, your client pays you for that work.

              Similarly, OpenAI can charge me for extensive use of ChatGPT. I can use that tool to write a book, but it’s not 100% AI work. I need to spend several hours crafting prompts, structuring, reading and editing the book in order to make something acceptable. I don’t really act as a writer in this workflow, but more like an editor or a publisher. When I publish and sell my book, I’m entitled to some compensation for the time and effort I put into it. Does that sound fair to you?

              • wewbull@feddit.uk (+3) · 3 months ago

                Yes, of course you are.

                …but do you agree that if you use an AI in that way, you are benefiting from another author’s work? You may even, unknowingly, violate the copyright of the original author. You can’t be held liable for that infringement because you did it unwittingly; OpenAI, or whoever, must bear responsibility for that possible outcome through the use of their tool.

                • Hamartiogonic (+1) · 3 months ago

                Yes, it’s true that countless authors contributed to the development of this LLM, but they were not compensated for it in any way. Doesn’t sound fair.

                Can we compare this to some other situation where the legal status has already been determined?

                  • wewbull@feddit.uk (+3) · 3 months ago

                    I was thinking about money laundering when I wrote my response, but I’m not sure it’s a good analogy. It still feels to me like constructing a generative model is a form of “copyright washing.”

                  Fact is, the law has yet to be written.

    • wildncrazyguy138@fedia.io (+4/−1) · 4 months ago

      To expand on what you wrote, I’d liken an LLM drawing on its training data to me recalling a book I’ve read. From here on out, until I become senile, the book is part of my memory. I may reference it, I may parrot some of the details I remember to a friend. My own conversational style and future works may even be influenced by it, perhaps subconsciously.

      In other words, it’s not as if a book enters my brain and then is completely gone once I’m finished reading it.

      So I suppose the question is more one of volume. How many consumed works are too many? At what point do we shift from the realm of research to that of profiteering?

      There is a certain subset of people in the AI field who believe that our brains are biological LLMs, and that if we feed an electronic LLM enough data, it’ll essentially become sentient. That may be for better or worse for civilization, but I’m not one to get in the way of wonder building.

      • Hamartiogonic (+3) · 4 months ago

        A neural network (the machine-learning technology) aims to imitate the function of the neurons in a human brain. If you have lots of these neurons, all sorts of interesting phenomena begin to emerge, and consciousness might be one of them. If/when we get to that point, we’ll also have to address several legal and philosophical questions. It’s going to be a wild ride.
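        For illustration, the “function of a neuron” being imitated is usually modeled as a weighted sum of inputs pushed through a nonlinearity. A minimal sketch in plain Python, with made-up toy weights (no ML library, nothing from any real network):

        ```python
        import math

        def neuron(inputs, weights, bias):
            """One artificial neuron: a weighted sum of its inputs plus
            a bias, squashed by a sigmoid activation into (0, 1)."""
            z = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-z))

        # Toy call with hypothetical weights; real networks chain
        # millions of these units, with weights learned from data.
        print(neuron([0.5, -1.0], [0.8, 0.2], bias=0.1))
        ```

        The “learning” is just adjusting those weights; the emergent behavior comes from wiring huge numbers of such units together.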