• @JeffKerman1999
    22
    1 month ago

    You must have one person constantly checking for hallucinations in everything that is generated: how is that going to be faster?

    • @Grippler@feddit.dk
      -4
      1 month ago

      Sure, you sort of need that at the moment (not actually everything, but I get your hyperbole), but you seem to be working under the assumption that LLMs are not going to improve beyond what they are now. The tech is still very much in its infancy, and as it matures that oversight will be needed less and less, until only a few people are required to manage LLMs that handle the tasks of a much larger workforce.

      • @SupraMario@lemmy.world
        7
        1 month ago

        It’s hard to improve when the data going in is human-generated and the data coming out can’t be error-checked against anything except that same input data. It’s like trying to solve a math problem with two calculators that both think 2 + 2 = 6 because the data they were given says it’s true.
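        To make the calculator analogy concrete, here is a minimal Python sketch (a toy with made-up names, not any real system): two models trained on the same flawed data will agree with each other, so checking them against each other can never surface the error.

        ```python
        # Toy illustration: agreement between two models trained on the
        # same flawed data is not verification. All names are hypothetical.

        # The shared "training data" wrongly claims that 2 + 2 = 6.
        SHARED_TRAINING_DATA = {("2", "+", "2"): "6"}

        def calculator_a(a: str, op: str, b: str) -> str:
            """Model A answers strictly from what its training data says."""
            return SHARED_TRAINING_DATA.get((a, op, b), "unknown")

        def calculator_b(a: str, op: str, b: str) -> str:
            """Model B learned from the same data, so it repeats the error."""
            return SHARED_TRAINING_DATA.get((a, op, b), "unknown")

        def cross_check(a: str, op: str, b: str) -> bool:
            """Naive verification: trust an answer when both models agree."""
            return calculator_a(a, op, b) == calculator_b(a, op, b)

        print(calculator_a("2", "+", "2"))  # "6" -- confidently wrong
        print(cross_check("2", "+", "2"))   # True -- the error goes undetected
        ```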

      • @Muehe@lemmy.ml
        2
        1 month ago

        (not actually everything, but I get your hyperbole)

        How is it hyperbole? All artificial neural networks have “hallucinations”, no matter their size. What’s your magic way of knowing when that happens?

      • @JeffKerman1999
        0
        1 month ago

        LLMs are now trained on data generated by other LLMs. If you look at the “writing prompt” stuff, 90% of it is machine-generated (or so bad that I assume it is), and that’s the data being bought right now.
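        As a rough illustration of why that feedback loop degrades quality (a toy statistical analogy in Python, not a simulation of actual LLM training), repeatedly refitting a model on samples drawn from its previous self lets each generation inherit the last one’s sampling noise, with nothing pulling it back toward the original human data:

        ```python
        # Toy "model collapse" sketch: fit a Gaussian, sample from the fit,
        # refit on those samples, and repeat. Each generation trains only on
        # the previous generation's output, never on the original data.
        import random
        import statistics

        mean, stdev = 0.0, 1.0  # generation 0: the original "human" data
        for generation in range(1, 11):
            # "Generate" a finite dataset from the current model...
            samples = [random.gauss(mean, stdev) for _ in range(100)]
            # ...then "retrain" the next model on that synthetic data alone.
            mean = statistics.fmean(samples)
            stdev = statistics.stdev(samples)
            print(f"gen {generation:2d}: mean={mean:+.3f} stdev={stdev:.3f}")

        # The fitted parameters random-walk away from (0, 1): sampling error
        # compounds across generations, and no fresh human data corrects it.
        ```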