• eric@lemmy.world · 9 months ago

    But humans make works that are similar to other works all the time. I just hope we set the same standards for AI violating copyright as we do for humans. There’s a big difference between a derivative work and one that violates copyright.

    • lemmyvore@feddit.nl · 9 months ago

      Doesn’t this argument assume that AIs are human? That’s a pretty huge reach if you ask me. It’s not even clear that LLMs are AI, never mind giving them human rights.

      • eric@lemmy.world · 9 months ago

        No, I’m not assuming that. It’s not about concluding that AIs are human; it’s about having concrete standards on which to design laws. Setting a lower bar for what counts as copyright violation by an LLM would be like setting a lower speed limit for a self-driving car, and I don’t think it makes any logical sense. To me that would be a disappointingly protectionist and Luddite perspective to apply to this new technology.

        • lemmyvore@feddit.nl · 9 months ago

          If LLMs are software, then they can’t commit copyright violations; the onus for breaking the law falls on the people who use them. And until someone proves otherwise in a court of law, they are software.

          • eric@lemmy.world · 9 months ago

            No one is saying we should charge a piece of software with a crime. Corporations aren’t human, but they can absolutely be charged with copyright violations, so being human isn’t a requirement here at all.

            Depending on the situation, you would charge the user of the software (if they directed it to violate copyright) and/or the company that makes it (if they negligently released an LLM proven to produce results that violate copyright).

      • Saganastic@kbin.social · 9 months ago

        Machine learning falls under the category of AI. I agree that works produced by LLMs should count as derivative works, as long as they’re not too similar to the originals.

        • nybble41@programming.dev · 8 months ago

          Not every work produced by an LLM should count as a derivative work, only the ones that embody unique, identifiable creative elements from specific work(s) in the training set. We don’t consider every work produced by a human to be a derivative work of everything they were trained on; work produced by (a human using) an AI should be no different.