• lugal · 10 months ago

      But does it work to tell it not to hallucinate? And does it work the other way around too?

      • orca@orcas.enjoying.yachts · 10 months ago

        It’s honestly a gamble based on my experience. Instructions I’ve given ChatGPT have worked for a while, only to be mysteriously abandoned for no apparent reason. From the research I’ve done, telling the AI not to hallucinate is apparently common practice.
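
For anyone wondering what that kind of instruction looks like in practice, here is a minimal sketch of baking an anti-hallucination style instruction into a system prompt with the OpenAI Python client. The model name and the exact wording of the prompt are illustrative assumptions, not something taken from the comments above, and as the commenter notes there is no guarantee the model keeps honoring it.

```python
# Sketch: passing a "don't guess, admit uncertainty" instruction as a system message.
# Model name and prompt wording are placeholders, not a verified recipe.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from information you are confident about. "
                "If you are unsure or the information is unavailable, "
                "say so instead of guessing."
            ),
        },
        {"role": "user", "content": "Who won the 1987 Tour de France?"},
    ],
)

print(response.choices[0].message.content)
```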