As if EA didn’t already make bland, derivative games…

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net

    You can control what it spits out, though. They already do somewhat.

    Edit: Gonna go out on a limb and assume most of you haven’t actually played any of the projects currently doing this. Or messed with chatbots at all.
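    For context, a minimal sketch of the kind of control those projects use: a character-sheet system prompt plus a reject-and-retry check on the reply. The `generate()` call, the character, and the banned-phrase list are all made up here, standing in for whatever model API and content rules a real game would actually use:

    ```python
    # Sketch of prompt-level control over NPC dialogue.
    # `generate` is a hypothetical stand-in for a real model call.

    SYSTEM_PROMPT = """You are Mira, a blacksmith in the town of Eastvale.
    You know nothing about events outside Eastvale.
    If asked about the ancient ruins, deflect: you have only heard rumors.
    Answer in at most two sentences, in character."""

    BANNED = ["as an ai", "language model", "i cannot"]  # immersion breakers

    def generate(prompt: str) -> str:
        """Placeholder for the actual model call (local or hosted)."""
        raise NotImplementedError

    def npc_reply(player_line: str, max_retries: int = 3) -> str:
        prompt = f"{SYSTEM_PROMPT}\n\nPlayer: {player_line}\nMira:"
        for _ in range(max_retries):
            reply = generate(prompt).strip()
            if not any(bad in reply.lower() for bad in BANNED):
                return reply
        # Canned fallback if every attempt trips the filter.
        return "Hmm. Lost my train of thought. What were you saying?"
    ```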

    • moriquende@lemmy.world

      Somewhat is key. You can try to guide it in a direction, but that’s it. Also, as a player, you can never be sure whether the dialogue is meaningful or not. Does it reveal something about the plot? Is it key information about the character? Is it just hallucinated gibberish to fill the space?

      • Jesus_666@lemmy.world

        Besides, LLMs struggle to retain contextual information for long, and they’re pretty dang resource-hungry. Expect a game with LLM-driven dialogue to reserve several gigs of VRAM and a fair chunk of GPU processing power solely for that (rough numbers below).

        And then you still get characters who hallucinate plot points or suddenly speak gibberish.
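        For a sense of scale, some back-of-the-envelope arithmetic, assuming a 7B-parameter model with aggressively quantized weights. All numbers are order-of-magnitude estimates, not measurements:

        ```python
        # Rough VRAM budget for running a local LLM next to a game.

        params = 7e9            # 7B-parameter model
        bytes_per_weight = 0.5  # 4-bit quantization
        weights_gb = params * bytes_per_weight / 1e9

        # The KV cache grows with context length; for a 7B-class model
        # it's roughly 0.5 MB per token at fp16 across all layers.
        context_tokens = 4096
        kv_cache_gb = context_tokens * 0.5e6 / 1e9

        print(f"weights ~{weights_gb:.1f} GB, KV cache ~{kv_cache_gb:.1f} GB")
        # -> weights ~3.5 GB, KV cache ~2.0 GB, before the game's own assets
        ```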

    • MentalEdge

      You really can’t.

      You can run checks and fence it in with traditional software; you can train it more narrowly…
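      One example of that fencing: never let the model emit free text at all, only let it pick from pre-vetted lines, so a hallucination is at worst an odd line choice rather than invented lore. A rough sketch, again with a hypothetical `generate()` standing in for the real model call:

      ```python
      # Fencing an LLM with traditional software: the model only selects
      # among pre-written, vetted lines and never emits free text.

      VETTED_LINES = [
          "The ruins? Stay clear of them, traveler.",
          "Fine steel doesn't forge itself. What do you need?",
          "I heard the caravan from Westport is late again.",
      ]

      def generate(prompt: str) -> str:
          """Placeholder for the actual model call."""
          raise NotImplementedError

      def pick_line(player_line: str) -> str:
          menu = "\n".join(f"{i}: {line}" for i, line in enumerate(VETTED_LINES))
          prompt = (f"Player says: {player_line!r}\n"
                    f"Reply with ONLY the number of the most fitting line:\n{menu}")
          raw = generate(prompt).strip()
          # Traditional-software check: anything unparseable falls back safely.
          if raw.isdigit() and int(raw) < len(VETTED_LINES):
              return VETTED_LINES[int(raw)]
          return VETTED_LINES[0]
      ```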

      I haven’t seen anything that suggests AI hallucinations are actually a solvable problem, because they stem from the fact that these models don’t actually think, or know anything.

      They’re only useful when their output is vetted before use, because training a model that gets things 100% right 100% of the time is like capturing lightning in a bottle.

      It’s the 90/90 problem, where the first 90% of the work takes 90% of the effort and the last 10% takes the other 90%. Except with AI it’s looking more and more like a 90/99.99999999 problem.