• lily33@lemm.ee · 1 year ago

    No.

    • A pen manufacturer should not be able to decide what people can and can’t write with their pens.
    • A computer manufacturer should not be able to limit how people use their computers (I know they do - especially on phones and consoles - and seem to want to do this to PCs too now - but they shouldn’t).
    • In that exact same vein, writers should not be able to tell people what they can use the books they've purchased for.

    We 100% need to ensure that automation and AI benefit everyone, not just a select few companies. But copyright is totally the wrong mechanism for that.

    • BURN@lemmy.world · 1 year ago

      A pen is not a creative work. A creative work is very different from something that's mass-produced.

      Nobody is limiting how people can use their PC. This would be regulations targeted at commercial use and monetization.

      Writers can already do that. Commercial licensing is a thing.

      • lily33@lemm.ee · 1 year ago

        Nobody is limiting how people can use their PC. This would be regulations targeted at commercial use and monetization.

        … Google’s proposed Web Integrity API seems like a move in that direction to me.

        But that’s besides the point, I was trying to establish the principle that people who make things shouldn’t be able to impose limitations on how these things are used later on.

        A pen is not a creative work. A creative work is very different from something that's mass-produced.

        Why should that difference matter, in particular when it comes to the principle I mentioned?

        • Rottcodd@kbin.social · 1 year ago

          Why should that difference matter, in particular when it comes to the principle I mentioned?

          Because creative works are rather obviously fundamentally different from physical objects, in spite of a number of shared qualities.

          Like physical objects, they can be distinguished one from another - the text of Moby Dick is notably different from the text of Waiting for Godot, for instance.

          More to the point, like physical objects, they’re products of applied labor - the text of Moby Dick exists only because Herman Melville labored to bring it into existence.

          However, they’re notably different from physical objects insofar as they’re quite simply NOT physical objects. The text of Moby Dick - the thing that Melville labored to create - really exists only conceptually. It’s of course presented in a physical form - generally as a printed book - but that physical form is not really the thing under consideration, and more importantly, the thing to which copyright law applies (or in the case of Moby Dick, used to apply). The thing under consideration is more fundamental than that - the original composition.

          And, bluntly, that distinction matters and has to be stipulated because selectively ignoring it in order to equivocate on the concept of rightful property is central to the NoIP position, as illustrated by your inaccurate comparison to a pen.

          Nobody is trying to control the use of pens (or computers, as they were being compared to). The dispute is over the use of original compositions - compositions that are at least arguably, and certainly under the law, somebody else’s property.

        • walrusintraining@lemmy.world · 1 year ago

          It’s not like AI is using works to create something new. Chatgpt is similar to if someone were to buy 10 copies of different books, put them into 1 book as a collection of stories, then mass produce and sell the “new” book. It’s the same thing but much more convoluted.

          Edit: to reply to your main point, people who make things should absolutely be able to impose limitations on how they are used. That's what copyright is. Someone else made a song - can you freely use that song in your movie just because you listened to it once? Not without their permission. You wrote a book - can I buy a copy and then use it to make more copies and sell them? Not without your permission.

          • PupBiru@kbin.social · 1 year ago

            it’s not even close to that black and white… i’d say it’s a much more grey area:

            possibly that you buy a bunch of books by the same author and emulate their style… that’s perfectly acceptable until you start using their characters

            if you wrote a research paper about the linguistic and statistical information that makes an authors style, that also wouldn’t be a problem

            so there’s something beyond just the authors “style” that they think is being infringed. we need to sort out exactly where the line is. what’s the extension to these 2 ideas that makes training an LLM a problem?

            • walrusintraining@lemmy.world · 1 year ago

              No, someone emulating someone else's style is still going to have their own experiences, style, and creativity make their way into the book. They have an entire lifetime of "training data" to draw from. An AI that "emulates" someone else's style can really only refer to that author's books, or someone else's books, so it's stealing. Another example: if someone decided to remix different parts of a musician's catalogue into one song, that would be a copyright infringement. AI adds nothing beyond what it's trained on, so whatever it spits out is just other people's work presented in a different way.

              • PupBiru@kbin.social · 1 year ago

                we output nothing other than what we’re trained on; the only difference is that we’re allowed to roam the world freely and consume whatever information we stumble on

                what you say would be true if the LLM were only trained on content by the author claiming their works had been infringed; however, these LLMs also include a lot of other data from public domain sources

                one could consider these public domain sources and our experience of the world to be analogous (and if you don't, i'd love to hear the distinction), in which case there's some kind of line that you seem to be drawing, and again i'd love to hear where you think that line is

                is it just ratio? there's precedent for that for sure: current law has fair use rules which stipulate things like "amount and substantiality". in that case the question becomes one of defining the ratio, and certainly the ratio of the content this author is referring to vs everything else in the training data is minuscule
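
                as a very rough, back-of-the-envelope illustration of that ratio (every figure below is an assumption made up for the example, not a measured value):

                ```python
                # Rough sketch of the "amount and substantiality" ratio for one author.
                # All numbers are illustrative assumptions, not real measurements.

                author_books = 10                    # assumed size of the author's catalogue
                words_per_book = 100_000             # assumed average novel length
                bytes_per_word = 6                   # rough average, spaces included
                author_bytes = author_books * words_per_book * bytes_per_word   # ~6 MB

                corpus_terabytes = 5                 # assumed size of a large training corpus
                corpus_bytes = corpus_terabytes * 10**12

                print(f"author's share of the training data: {author_bytes / corpus_bytes:.2e}")
                # -> ~1e-06, i.e. roughly a millionth of the corpus
                ```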

                • walrusintraining@lemmy.world · 1 year ago

                  I agree with what you're saying, and a model that was only trained on public domain works would be fine. I think the very obvious line is that it's a computer program. There seems to be a desire for computers to be human, but they aren't. They don't consume media for their own enjoyment; they're made to do it so someone can sell the output as a product. You can't compare the public domain to life.

                  • PupBiru@kbin.social · 1 year ago

                    i think the distinction between our positions is that you think humans are inherently different from a neural network, whereas i think the only difference is complexity: if we had a neural network at the same scale as the human brain, there's nothing stopping those electronic neurons from connecting and responding in a way that's indistinguishable from a human

                    the fact that we're not there yet doesn't seem particularly relevant, because we're talking about concepts rather than specifics… of course an LLM doesn't display the same characteristics as a human: it's not at the same scale, and the training is different, but functionally there's nothing different between chemical neurons firing and neurons made of transistors firing

                    we learn in the same way: by reinforcing connections between our neurons

          • lily33@lemm.ee · 1 year ago

            Except it’s not a collection of stories, it’s an amalgamation - and at a very granular level at that. For instance, take the beginning of a sentence from the middle of first book, then switch to a sentence in the 3-rd, then finish with another part of the original sentence. Change some words here and there, add one for good measure (based on some sentence in the 7-th book). Then fix the grammar. All the while, keeping track that there’s some continuity between the sentences you’re stringing together.

            That counts as “new” for me. And a lot of stuff humans do isn’t more original.

            • legion02@lemmy.world · 1 year ago

              Maybe the bigger argument against free-rein training is that you're attributing personal rights to a language model. Also, even people aren't (legally) completely free to derive things from memory, which is why clean-room design is a thing.

            • walrusintraining@lemmy.world · 1 year ago

              I’ve coded LLMs, I was just simplifying it because at its base level it’s not that different. It’s just much more convoluted as I said. They’re essentially selling someone else’s work as their own, it’s just run through a computer program first.

              • PupBiru@kbin.social · 1 year ago

                it’s nothing like that at all… if someone bought a book and produced a big table of words and the likelihood that the next word would be followed by another word, that’s what we’re talking about: it’s abstract statistics

                actually, that’s not even what we’re talking about… we then take that word table and then combine it with hundreds of thousands of other tables until the original is so far from the original as to be completely untraceable back to the original work

                • walrusintraining@lemmy.world · 1 year ago

                  If it were trained on a single book, the output would be the book. That's the base level, without all the convolution, and that's what we should be looking at. Do you also think someone should be able to train a model on your appearance and use it to sell images and videos, even though it's technically not your likeness?

        • BURN@lemmy.world · 1 year ago

          Google's Web Integrity API is very different from what I'm proposing. "Nobody" was more in relation to regulating this.

          I hold the opposite opinion in that creatives (I’d almost say individuals only, no companies) own all rights to their work and can impose any limitations they’d like on (edit: commercial) use. Current copyright law doesn’t extend quite that far though.

          A creative work is not a reproducible, quantifiable product. No two are exactly alike until they're mass-produced.

          Your analogy works better with a person than a pen: why is it okay when a person reads something and uses it as inspiration, but not a computer? This comes back around to my argument about transformative works. An AI cannot add anything new; it can only guess based on historical knowledge. One of the best traits of the human race is our ability to be creative and bring forth completely new ideas.

          Edit: added in a commercial use specifier after it was pointed out that the rules over individuals would be too restrictive.

          • lily33@lemm.ee · 1 year ago

            I hold the opposite opinion in that creatives (I’d almost say individuals only, no companies) own all rights to their work and can impose any limitations they’d like on use. Current copyright law doesn’t extend quite that far though.

            I think that point’s worth discussing by itself - leaving aside the AI - as you wrote it quite general.

            I came up with some examples:

            • Let’s say an author really hates when quotes are taken out of context, and has stipulated that their book must only appear in whole. Do you think I should be able to decorate the interior of my own room with quotes from it?
            • What about an author that insists readers read no more than one chapter per day, to force them to think on the chapter before moving in. Would that be a valid use restriction?
            • If an author wrote a book to critique capitalism - and insists that is its purpose. But when I read the book, I interpreted it very differently, and saw in its pages a very strong argument for capitalism. Should I be able to use said book to make said argument for capitalism?

            Taking your statement at face value, the answers should be: no (I can't decorate), yes (it's a valid restriction), and no (I can't use it to illustrate my argument). But maybe you didn't mean it quite that strictly? What do you think of each example, and why?

            • BURN@lemmy.world · 1 year ago

              Fair points. I think the restrictions, for the most part, would have to be in place primarily for commercial use.

              So, under your examples:

              • Yes, you should. As there's no commercial usage, you're not profiting off their work; you're simply using your copy of it to decorate a personal space.

              • If we restrict the copyright protections to only apply to commercial use, then this becomes a non-issue. The copyright extends to reproduction (or performance, in the case of music) of the work of any kind, but does not extend to complete control over personal usage.

              • Personal interpretation is fine. If you start using that argument in some kind of publication or "performance", then you end up with fair use being called into question. Quoting with appropriate attribution is fine, but say you print a chapter of the book, then a chapter of critique - where is that line drawn? Right now it's ambiguous at best, downright invisible most of the time.

              I appreciate the well-thought-out response. I hold strong views on copyright of an individual's creative work as a musician and developer, and believe that they should have control over how their products are used to make money. These views are probably a little too restrictive for the general public, and probably won't ever garner a huge amount of support.

              I dropped the ball on making sure to specify that I meant commercial use; I'll put an edit at the bottom of the OP to clarify it too.

        • yokonzo@lemmy.world · 1 year ago

          I can see your argument; it's just that your metaphor wasn't very strong, and I think it made things a bit confusing.

    • DarkWasp@lemmy.world · 1 year ago

      All of the examples you listed have nothing to do with how OpenAI was created and set up. It was trained on copyrighted work - how is that remotely comparable to purchasing a pen?

    • fkn@lemmy.world · 1 year ago

      You made two arguments for why they shouldn’t be able to train on the work for free and then said that they can with the third?

      Did OpenAI pay for the material? If not, then it's illegal.

      Additionally, copyright, trademarks, and patents are about reproduction, not use.

      If you bought a pen that was patented, then made a copy of the pen and sold it as yours, that's illegal. That is the analogy for what OpenAI is doing with books.

      Plagiarism and reproduction of text is the part that is illegal. If you take the "AI" part out, what OpenAI is doing is blatantly illegal.

      • lily33@lemm.ee · 1 year ago

        Just now, I tried to get Llama-2 (I'm not using OpenAI's stuff because they're not open) to reproduce the first few paragraphs of Harry Potter and the Philosopher's Stone, and it didn't work at all. It created something vaguely resembling it, but with lots of made-up stuff that doesn't make much sense. I certainly can't use it to read the book or pirate it.
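
        For anyone who wants to repeat that kind of check locally, here's a minimal sketch using the Hugging Face transformers library (the model id and prompt are just examples; the Llama-2 weights are gated behind Meta's license, so substitute whatever causal language model you actually have access to):

        ```python
        # Minimal sketch of probing a local model for memorized text.
        # Assumes `transformers` (plus a backend like PyTorch) is installed and
        # that you have access to the gated meta-llama/Llama-2-7b-chat-hf weights.
        from transformers import pipeline

        generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

        prompt = "Continue this passage word for word: 'Mr. and Mrs. Dursley, of number four, Privet Drive,'"
        result = generator(prompt, max_new_tokens=100, do_sample=False)
        print(result[0]["generated_text"])
        # Compare the continuation against the actual book to judge whether it's
        # verbatim reproduction or just a vague imitation.
        ```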

        • fkn@lemmy.world · 1 year ago

          OpenAI:

          I’m sorry, but I can’t provide verbatim excerpts from copyrighted texts. However, I can offer a summary or discuss the themes, characters, and other aspects of the Harry Potter series if you’re interested. Just let me know how you’d like to proceed!

          That doesn’t mean the copyrighted material isn’t in there. It also doesn’t mean that the unrestricted model can’t.

          Edit: I didn’t get it to tell me that it does have the verbatim text in its data.

          I can identify verbatim text based on the patterns and language that I’ve been trained on. Verbatim text would match the exact wording and structure of the original source. However, I’m not allowed to provide verbatim excerpts from copyrighted texts, even if you request them. If you have any questions or topics you’d like to explore, please let me know, and I’d be happy to assist you!

          Here we go - I can get ChatGPT to give it to me sentence by sentence:

          “Mr. and Mrs. Dursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much.”

          • BURN@lemmy.world · 1 year ago

            Most publicly available/hosted models (self-hosted models are an exception to this) have an absolute laundry list of extra parameters and checks that are run on every query to limit the model as much as possible and tailor the outputs.
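
            As a purely hypothetical sketch of what that kind of wrapping can look like (all the names and rules below are invented for illustration; no hosted provider publishes its actual filter stack):

            ```python
            # Hypothetical request pipeline for a hosted LLM with extra checks per query.
            # Nothing here corresponds to any real provider's implementation.

            BLOCKED_PHRASES = ("verbatim excerpt", "reproduce the full text")

            def pre_filter(prompt: str) -> str | None:
                """Refuse prompts that explicitly ask for verbatim copyrighted text."""
                if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
                    return "I can't provide verbatim excerpts from copyrighted texts."
                return None

            def post_filter(output: str, known_passages: list[str]) -> str:
                """Withhold output that matches known copyrighted passages too closely."""
                if any(passage in output for passage in known_passages):
                    return "[output withheld: matched a known copyrighted passage]"
                return output

            def answer(prompt: str, model, known_passages: list[str]) -> str:
                refusal = pre_filter(prompt)
                if refusal is not None:
                    return refusal
                raw = model(prompt)           # the underlying, unfiltered model call
                return post_filter(raw, known_passages)
            ```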

          • fkn@lemmy.world · 1 year ago

            This wasn’t even hard… I got it spitting out random verbatim bits of Harry Potter. It won’t do the whole thing, and some of it is garbage, but this is pretty clear copyright violations.

        • ShittyBeatlesFCPres@lemmy.world · 1 year ago

          Maybe it’s trained not to repeat JK Rowling’s horseshit verbatim. I’d probably put that in my algorithm. “No matter how many times a celebrity is quoted in these articles, do not take them seriously. Especially JK Rowling. But especially especially Kanye West.”

          • FaceDeer@kbin.social · 1 year ago

            It’s not repeating its training data verbatim because it can’t do that. It doesn’t have the training data stored away inside itself. If it did the big news wouldn’t be AI, it would be the insanely magical compression algorithm that’s been discovered that allows many terabytes of data to be compressed down into just a few gigabytes.

            • Hello Hotel@lemmy.world · 1 year ago

              Do you remember quotes in English ASCII? /s

              Tokens are even denser than ASCII - similar to word "chunking". My guess is it's like lossy video compression, but for text: [Attacked] with [lasers] by [deatheaters] upon [margret]; [has flowery language]; word [margret] [comes first] (this theoretical example has 7 "tokens").

              It may have actually internalized a really good copy of that book, as it's likely read it lots of times.
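
              A toy illustration of that chunking idea (the mini-vocabulary below is completely made up; real models use learned subword tokenizers such as BPE, not a hand-written word list):

              ```python
              # Toy word-level "tokenizer": each known chunk maps to one small integer id,
              # which is denser than storing the characters themselves.
              # The vocabulary is invented purely for illustration.
              VOCAB = {"attacked": 0, "with": 1, "lasers": 2, "by": 3,
                       "deatheaters": 4, "upon": 5, "margret": 6}

              def encode(text):
                  return [VOCAB[word] for word in text.lower().split()]

              sentence = "Attacked with lasers by deatheaters upon Margret"
              ids = encode(sentence)
              print(ids)                                           # [0, 1, 2, 3, 4, 5, 6]
              print(len(sentence), "characters ->", len(ids), "token ids")   # 48 characters -> 7 token ids
              ```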

      • FaceDeer@kbin.social · 1 year ago

        Did OpenAI pay for the material? If not, then it's illegal.

        You are reading my comment right now. In my comment, I am letting you know that Sidehill Gougers come in both clockwise and counterclockwise breeds.

        Oh no! You just learned that fact for free! I didn't give you permission to learn from my comment, even though I deliberately published it here for you to read. I demand that you either pay me or wipe that ill-gotten knowledge from your mind.

        Don’t you dare tell anyone else about Sidehill Gougers. That’s illegal.

    • QHC@lemmy.world · 1 year ago

      Computer manufacturers aren’t making AI software. If someone uses an HP copier to make illegal copies of a book and then distributes those pages to other people for free, the person that used the copier is breaking the law, not the company that made the copier.

    • Vent@lemm.ee · 1 year ago

      They didn’t pay the writers though, that’s the whole point

      • lily33@lemm.ee · 1 year ago

        True - but I don’t think the goal here is to establish that AI companies must purchase 1 copy of each book they use. Rather, the point seems to be that they should need separate, special permission for AI training.

        • PupBiru@kbin.social · 1 year ago

          100% this! there are separate licenses for personal listening, public performance, use in another work (movies and TV)… there will likely be a license added for AI training, which some authors will opt into and some will opt out of… it'll likely start out very expensive, nobody will pay, someone will offer up old works that aren't selling well for bargain-basement prices and make a killing, then others will see the success and prices will slowly follow, and eventually they'll settle at a happy medium that AI companies can tolerate and where copyright holders aren't feeling screwed… well, i mean, they'll still be getting screwed, but their publishers will be making bank

          that’s my totally out of thin air prediction anyway

        • BURN@lemmy.world · 1 year ago

          I believe this is where it'll inevitably go. However, I'm not sure it'll be just AI - rather, hopefully, more protections around individual creative work and how it can be used by corporations for internal or external data collection.

          This really does depend on privacy laws as well, and probably on data collection, retention, and usage rules too.

        • alias@artemis.camp · 1 year ago

          That probably depends a lot on how the legal terms are defined, but there should be a law explicitly covering this in every civilised country.

    • Unaware7013@kbin.social · 1 year ago

      A pen manufacturer isn’t repurposing other peoples’ work to make their pens.
      A computer manufacturer has to license the intellectual property that they use to make their computers.