AI Industry Struggles to Curb Misuse as Users Exploit Generative AI for Chaos

Artificial intelligence just can’t keep up with the human desire to see boobs and 9/11 memes, no matter how strong the guardrails are.

  • capital@lemmy.world · 132 up / 3 down · 1 year ago

    Is this really something people are mad about? Who cares? This shit is hilarious.

      • Cocodapuf@lemmy.world · 5 up / 1 down · 1 year ago

        Well, I mean, it points to our inability to control the use of AI systems, and that is in fact a very real problem.

        If you can’t keep people from making stupid memes, you also can’t keep people from making misleading propaganda or other seriously problematic content.

        Towards the end of the story there was the example where they couldn’t stop the system from giving people a recipe for napalm, despite “weapons development” being an explicitly banned topic. I don’t think I need to spell out how that’s a problem.

    • kromem@lemmy.world · 11 up · 1 year ago

      No, no one cares, but it gets a bunch of clicks because it’s hilarious, so articles keep getting written.

      It’s a solved problem, too. You just run the prompt and the generated result through a second pass of a fine-tuned model that checks for jailbreaking or rule-breaking content.

      But that increases cost per query by 2-3x.
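The two-pass scheme described above can be sketched roughly like this. This is a minimal illustration, not any real moderation API: `generate` and `moderate` are hypothetical stand-ins (a production `moderate` would be the fine-tuned classifier model the comment mentions), and the point is the control flow, where every query pays for extra moderation passes on top of generation, hence the cost multiplier.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for the primary LLM call."""
    return f"response to: {prompt}"


def moderate(text: str) -> bool:
    """Hypothetical stand-in for a fine-tuned classifier pass.

    Returns True if the text appears to violate policy. Here it is a
    trivial keyword check purely for illustration; the real thing would
    be a second model invocation.
    """
    banned = ("napalm", "weapons development")
    return any(term in text.lower() for term in banned)


REFUSAL = "Sorry, I can't help with that."


def answer(prompt: str) -> str:
    # First moderation pass: screen the user's prompt itself.
    if moderate(prompt):
        return REFUSAL

    # Primary generation pass.
    result = generate(prompt)

    # Second moderation pass: screen the generated output. This is what
    # catches jailbreaks, where an innocuous-looking prompt slips past
    # the first check but still elicits banned content.
    if moderate(result):
        return REFUSAL
    return result
```

Each call to `answer` runs two classifier passes plus one generation pass, which is where the rough 2-3x cost-per-query figure comes from.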

      And as you said, no one really cares, so it’s not deemed worth it.

      Yet the clicks keep coming in for anti-AI articles, so they keep getting pumped out. As a result, laypeople now somehow think jailbreaking and hallucinations are intractable problems preventing enterprise adoption of LLMs, which is only true for the most basic plug-and-play, high-volume integrations.

      • Cocodapuf@lemmy.world · 2 up · 1 year ago

        It’s a solved problem, too. You just run the prompt and the generated result through a second pass of a fine-tuned model that checks for jailbreaking or rule-breaking content.

        But that increases cost per query by 2-3x.

        Huh, so basically it’s like every time my mom said “think before you speak”. You know, just run that line in your head once before you actually say it, to avoid saying something dumb/offensive.