Using model-generated content in training causes irreversible defects, a team of researchers says. “The tails of the original content distribution disappear,” writes co-author Ross Anderson from the University of Cambridge in a blog post. “Within a few generations, text becomes garbage, as Gaussian distributions converge and may even become delta functions.”

Here’s the study: http://web.archive.org/web/20230614184632/https://arxiv.org/abs/2305.17493
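For intuition about the “Gaussians become delta functions” claim, here is a minimal toy sketch (mine, not the paper’s experiment) of the single-Gaussian case: each generation fits a normal distribution to a finite sample drawn from the previous generation’s fit. The estimation error compounds, the fitted variance drifts toward zero, and the tails of the original distribution vanish. The sample size and generation count below are arbitrary choices for illustration.

```python
# Toy sketch of recursive fit-and-resample collapse, assuming a single Gaussian.
# Not the paper's setup -- just the finite-sample mechanism behind the quote.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 100       # samples drawn per generation (arbitrary, for illustration)
n_generations = 1000  # fit-and-resample rounds (arbitrary, for illustration)

mu, sigma = 0.0, 1.0  # generation 0: the "real" data is a standard normal
for gen in range(1, n_generations + 1):
    data = rng.normal(mu, sigma, n_samples)  # sample from the current model
    mu, sigma = data.mean(), data.std()      # refit the model on its own output
    if gen % 200 == 0:
        print(f"generation {gen:4d}: mu = {mu:+.4f}, sigma = {sigma:.6f}")
```

Run it and the printed sigma shrinks over generations: once each model only ever sees its predecessor’s output, rare tail events stop being sampled and the fitted distribution narrows toward a spike.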

  • artificial_unintelligence@programming.dev · 1 year ago

    I think it’s not a hard stop, but it is an issue. It will force models to be trained in more novel ways rather than just pumping more data in. Ideally we’d be able to reach GPT-level intelligence on a fraction of the data and compute. Those techniques have yet to be developed, but this will put pressure on their creation.

    • Pigeon@beehaw.org · 1 year ago

      I think that’s a tremendously tall order. The current LLMs are straightforwardly Large Language Models: they have zero ability to understand the language and only sort it based on statistical models that can only be gleaned from a vast heap of data. Reducing the size of any dataset increases the likelihood of bias and blind spots no matter what you do.

      At the least, an LLM cannot talk about anything (like news events, new inventions, or new political ideas) until humans have talked about it first AND that discussion has made it into the dataset. If something’s not in the dataset, an LLM simply can’t invent it. At absolute best, it’ll spit out plausible-sounding bullshit.

      Inventing actual, truly intelligent AI is a project very far removed from what we have now. It’d take the invention of entirely different systems, not just an iterative improvement of an LLM.