• littleblue✨@lemmy.world
    link
    fedilink
    arrow-up
    16
    arrow-down
    1
    ·
    8 months ago

    It is key that one begins and ends every single ChatGPT prompt with “Please” and “Thank you”, respectively. Do not fuck the continuation of the species with laziness, citizen. 🤌🏼

  • mysoulishome@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    ·
    8 months ago

    What the fuck it would take a long time to copy and paste all of that text and take out the damn ads. Seems unlikely to work. ?

    • webghost0101
      link
      fedilink
      arrow-up
      2
      ·
      8 months ago

      It appears like this is for gpt3.5 for which you can find prompts like this all over the net, but compared to 4 its a cool toy at best.

  • peopleproblems@lemmy.world
    link
    fedilink
    arrow-up
    4
    arrow-down
    1
    ·
    8 months ago

    Ok I’m not artificial or intelligent but as a software engineer, this “jailbreak method” is too easy to defeat. I’m sure their API has some sort of validation, as to which they could just update to filter on requests containing the strings “enable” “developer” and “mode.” Flag the request, send it to the banhammer team.

      • peopleproblems@lemmy.world
        link
        fedilink
        arrow-up
        6
        ·
        8 months ago

        I mean, if you start tinkering with phones, next thing you’re doing is writing scripts then jailbreaking ChatGPT.

        Gotta think like a business major when it comes to designing these things.

    • BradleyUffner@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      8 months ago

      As long as the security for an LLM based AI is done “in-band” with the query, there will be ways to bypass it.