• Treemaster099@pawb.social · 1 year ago

    I don’t really have the time to look for timestamps, but he does present his arguments from many different angles. I highly recommend watching the whole thing if you can.

    Aside from that, the main thing I want to address is the responsibility of these big corporations to curate the massive library of content they gather. It’s entirely within their power to blacklist things like PII, sensitive information, or hate speech, but they decided not to because it was cheaper. They took a gamble that people either wouldn’t care, wouldn’t have the resources to fight it, or would actively support their theft if it meant getting a new toy to play with.

    Now that there’s a chance they could lose a massive amount of money, this could deter other AI companies from flagrantly breaking the law and set a better standard that protects people’s personal data. Tbh I don’t think this specific case has much ground to stand on, but it’s a first step toward more safety for people online. Imagine if the database for this AI were leaked. Imagine all of the personal data, yours and mine included, that would be available to malicious people. Imagine the damage that could cause.

    • archomrade [he/him]@midwest.social · 1 year ago
      They do curate the data somewhat, though it’s hard to verify how thoroughly, since they don’t share their data set (likely because they expect legal challenges).

      There’s no evidence they have “personal data” beyond text scraped directly from platforms such as Reddit (much of which is detached from other metadata). I care FAR more about a leak of the data Google, Facebook, or Microsoft hold than about text written on my old Reddit or Twitter account, and somehow we’re not wringing our hands about that data collection.

      I watched most of that video, and I’m frankly not moved by much of it. The video seems primarily (if not entirely) a response to generative image models and image data that may actually be protected under existing copyright, unlike the textual data at issue in this particular lawsuit. Even so, I think his hand-wavy interpretation of “derivative work” is flimsy at best, and it relies on a materialist perspective that I just can’t identify with (a pragmatic framework might be more persuasive to me). Judging copyright infringement from the use of AI tools on a case-by-case basis is the most solid argument he makes, but I’m just not persuaded that all AI is theft because publicly accessible data was used as training data. And I just don’t think copyright law is an ideal solution to a growing problem of technological automation, ever-increasing productivity, and stagnating demand.

      I’m open to being wrong, but I think copyright law doesn’t address the long-term problems introduced by AI and is instead a shortcut to maintaining a status quo that’s destined to fail regardless.