I’ve recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, but safeguards to prevent this are usually built in). The training data is generally much, much larger than the resulting model, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate “new” content based on learned probabilities.
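To make the “generate from probabilities” point concrete, here is a minimal sketch (my own toy example, not how any production LLM actually works): a bigram model that stores only next-word counts from its training text, then samples “new” text from those probabilities rather than copying the source.

```python
import random
from collections import defaultdict, Counter

# Toy training corpus. The "model" below keeps only next-word
# frequencies, not the text itself.
corpus = "the cat sat on the mat the dog sat on the rug".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word):
    # Sample the next word in proportion to observed frequency.
    options = counts[word]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate "new" text from the learned probabilities.
word = "the"
out = [word]
for _ in range(5):
    if not counts[word]:  # dead end: word never had a successor
        break
    word = sample_next(word)
    out.append(word)
print(" ".join(out))
```

The generated sequence follows the statistics of the corpus but isn’t a stored copy of it, which is the (very simplified) sense in which these models “learn” patterns rather than redistribute works.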

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely overfitting a model on data to reproduce people’s “likeness.” I understand the hate for AI-generated shit (because it is shit). I really don’t understand where all this hate for using public data to build a “statistical” model that “learns” general patterns is coming from.

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don’t think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background removers, better autocomplete, etc.), which might eliminate some jobs, but that’s really just a problem with capitalism, and productivity increases are generally considered good.

  • wildncrazyguy138@fedia.io · 5 months ago

    To expand on what you wrote, I’d liken LLM output to my having read a book. From here on out until I become senile, the book is part of my memory. I may reference it, I may parrot some of its details that I can remember to a friend. My own conversational style and future works may even be influenced by it, perhaps even subconsciously.

    In other words, it’s not as if a book enters my brain and then is completely gone once I’m finished reading it.

    So I suppose, then, that the question is more so one of volume. How many works consumed are considered too many? At what point do we shift from the realm of research to the one of profiteering?

    There is a certain subset of people in the AI field who believe that our brains are biological forms of LLMs, and that, if we feed an electronic LLM enough data, it’ll essentially become sentient. That may be for better or worse for civilization, but I’m not one to get in the way of wonder building.

    • Hamartiogonic · 5 months ago

      A neural network (the machine learning technology) aims to imitate the function of the neurons in a human brain. If you have lots of these neurons, all sorts of interesting phenomena begin to emerge, and consciousness might be one of them. If/when we get to that point, we’ll also have to address a number of legal and philosophical questions. It’s going to be a wild ride.
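For anyone curious what “imitating a neuron” means in practice, a single artificial neuron is just a weighted sum of its inputs passed through a nonlinearity. A minimal sketch (the weight and bias values here are hand-picked for illustration, not learned):

```python
import math

def neuron(inputs, weights, bias):
    # An artificial neuron: weighted sum of inputs plus a bias,
    # squashed through a nonlinear activation (sigmoid here).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# With these hand-picked weights the neuron behaves like a logical AND:
# output is near 1 only when both inputs are 1.
print(neuron([1, 1], [10, 10], -15))  # high
print(neuron([1, 0], [10, 10], -15))  # low
```

A network is many of these stacked in layers, with the weights adjusted during training; the “emergent phenomena” come from scale, not from any one neuron being clever.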