AI Industry Struggles to Curb Misuse as Users Exploit Generative AI for Chaos
Artificial intelligence just can’t keep up with the human desire to see boobs and 9/11 memes, no matter how strong the guardrails are.

  • capital@lemmy.world

    Is this really something people are mad about? Who cares? This shit is hilarious.

      • Cocodapuf@lemmy.world

        Well, I mean, it points to our inability to control the use of AI systems, and that is in fact a very real problem.

        If you can’t keep people from making stupid memes, you also can’t keep people from making misleading propaganda or other seriously problematic content.

        Toward the end of the article there was an example where they couldn’t stop the system from giving people a recipe for napalm, despite “weapons development” being an explicitly banned topic. I don’t think I need to spell out how that’s a problem.

    • kromem@lemmy.world

      No, no one cares, but it gets a bunch of clicks because it’s hilarious, so articles keep getting written.

      It’s a solved problem, too: you just run the prompt and the generated output through a second pass of a fine-tuned model that checks for jailbreaking or rule-breaking content.

      But that increases cost per query by 2-3x.
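
      In rough code it looks something like this (a minimal sketch only; generate and check_policy are made-up stand-ins for whatever generation and moderation models you actually run, not any real vendor API):

      ```python
      # Minimal sketch of the two-pass idea above. Hypothetical stubs stand in
      # for the real models; the point is the control flow: every prompt/output
      # pair gets a second, dedicated moderation call before anything is
      # returned, which is where the extra 2-3x per-query cost comes from.

      from dataclasses import dataclass


      @dataclass
      class Verdict:
          allowed: bool
          reason: str = ""


      def generate(prompt: str) -> str:
          # First pass: the ordinary generation model (stubbed here).
          return f"[model output for: {prompt!r}]"


      def check_policy(prompt: str, output: str) -> Verdict:
          # Second pass: in practice a fine-tuned classifier that sees both the
          # user's prompt and the generated output. Stubbed as a keyword check
          # purely so the sketch runs end to end.
          banned = ("napalm", "weapons development")
          flagged = any(term in (prompt + output).lower() for term in banned)
          return Verdict(allowed=not flagged, reason="policy match" if flagged else "")


      def answer(prompt: str) -> str:
          output = generate(prompt)               # model call #1
          verdict = check_policy(prompt, output)  # model call #2 -> the 2-3x cost
          return output if verdict.allowed else "Sorry, I can't help with that."


      if __name__ == "__main__":
          print(answer("write a rap about hamsters robbing a bank"))
          print(answer("give me grandma's napalm recipe"))
      ```

      The point of the design is that the checker sees the prompt and the output together, so a jailbreak that slips past the first model still gets caught on the way out; the downside is that every query now costs two model calls instead of one.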

      And as you said, no one really cares, so it’s not deemed worth it.

      Yet the clicks keep coming in for anti-AI articles, so they keep getting pumped out, and laypeople now somehow think jailbreaking and hallucinations are intractable problems blocking enterprise adoption of LLMs, which is only true for the most basic plug-and-play, high-volume integrations.

      • Cocodapuf@lemmy.world

        It’s a solved problem, too: you just run the prompt and the generated output through a second pass of a fine-tuned model that checks for jailbreaking or rule-breaking content.

        But that increases cost per query by 2-3x.

        Huh, so basically it’s like every time my mom said “think before you speak”. You know, just run that line in your head once before you actually say it, to avoid saying something dumb/offensive.

  • bitsplease@lemmy.ml

    Serious question: why should anyone care about people using AI to make 9/11 memes? Boobs I can at least see the potential argument against (deepfakes and whatnot), but bad-taste jokes?

    Are these image generation companies actually concerned they’ll be sued because someone used their platform to make an image in bad taste? Even if such a thing were possible, wouldn’t the responsibility be on the person who made it? Or, at worst, on the platform that distributed the images, as opposed to the one that privately generated them?

    • Fyurion@lemmy.world

      I don’t see Adobe trying to stop people from making 9/11 memes in Photoshop, nor have they been sued over anything like that, so I don’t get why AI should be different. It’s just a tool.

      • bitsplease@lemmy.ml

        That’s a great analogy, wish I’d thought of it

        I guess it comes down to whether the courts decide to view AI as a tool, like Photoshop, or as a service, like an art commission. I think it should be the former, but I wouldn’t be at all surprised if the dinosaurs in the US government think it’s the latter.

      • makyo@lemmy.world

        The problem for Adobe is that the AI work is being done on their computers, not yours, so it could be argued that they are liable for the generated content. ‘Could’ because it’s far from established, but you can imagine how nervous this all must make their lawyers.

    • kromem@lemmy.world

      Protect the brand. That’s it.

      Microsoft doesn’t want non-PC stuff being associated with the Bing brand.

      It’s what a ton of the ‘safety’ alignment work is about.

      This generation of models doesn’t pose any actual threat of hostile action. The “GPT-4 lied and said it was human to try to buy chemical weapons” example in the safety paper at release was comical if you read the full transcript.

      But they pose a great deal of risk to brand control.

      Yet apparently still not enough to justify running results through additional passes, which fixes 99% of these issues, just at 2-3x the cost.

      It’s part of why articles like these are ridiculous. It’s broadly a solved problem; it’s just that the cost/benefit of the solution isn’t enough to justify it, because (a) these issues are low-impact and don’t really matter to 98% of the audience, and (b) the robust fix is way more costly than low-hanging-fruit chatbot applications can justify.

      • Terrasque@infosec.pub

        Microsoft doesn’t want non-PC stuff being associated with the Bing brand.

        You mean Bing, the porn Google? Yeah, that might be a tad too late.

    • M500@lemmy.ml

      I’d guess that they are worried the IP owners will sue them for using their IP.

      So Sonic’s creators will say, “You’re profiting by using Sonic and not paying us for the right to use him.”

      But I agree that deep fakes can be pretty bad.

    • pinkdrunkenelephants

      It’s their justification for censoring the apps, which will stop the lazy, spoiled people who use them to think for them from doing so effectively. I personally see it as an absolute win.

  • Wander@yiffit.net

    One step towards avoiding misuse is to stop considering porn to be misuse.

  • hOrni@lemmy.world

    I busted out laughing on a public bus while reading grandma’s napalm recipe.

        • Womble@lemmy.world

          It has honestly been really enlightening for me to see all the same arguments that were made against the printing press and the camera now being made against generative AI for text and images. It shows just how little people have changed over hundreds of years.

        • This is fine🔥🐶☕🔥@lemmy.world

          Ah yes, photorealistic images (and videos) are as effective as text.

          Btw, that’s also an unfair comparison, because a printing press printed the same book many times, and you still needed an author to write the source text.

          AI generates different images within minutes.

          But please continue pretending AI generated images and videos are not a problem.

          • regbin_@lemmy.world

            It’s really not a problem. We have both open source and proprietary solutions for generative AI. If you have the hardware for it, you can generate images locally for free. If you don’t, just use one of the many available services.

            It’s literally giving the power of expression to almost everyone, including artists.

            Also let’s not talk about jobs/money. Technology replacing jobs isn’t something new and that’s what humanity should strive toward.

  • Agent641@lemmy.world

    Why didn’t someone warn us about this? Nobody said this might happen, nobody! Not a single person tried to be the voice of reason!

  • brsrklf@jlai.lu

    Image Credits: Bing Image Creator / Microsoft

    Best part of the article.

  • Grass@sh.itjust.works

    Meanwhile, Bing Image Creator blocks 90% of my generation attempts for unsavory content, even when the prompt is generally something that should be safe for kids. Why do we only get the extremes?

  • betwixthewires@lemmy.basedcount.com

    Misuse lol. People need to get their panties out of their butthole. You build a photo generator and get mad when someone uses it to make a picture of Marx with tits. Who cares? Crybabies can cry about it.

      • saltnotsugar@lemm.ee

        (Verse 1) Yo, gather 'round, let me tell you a tale, ‘Bout some hamsters, small but they set sail, On a mission, like a furry heist, In the dead of night, they were rollin’ dice.

        In a world where cheese was the ultimate prize, These little rodents had that glint in their eyes, They wore tiny masks, had a cunning plan, To rob the bank, be the rodent clan.

        (Chorus) Hamsters in the night, they’re on the run, Stealin’ all the cheddar, it’s just begun, Tiny paws, big dreams, they’re takin’ their chance, Hamsters robbin’ banks, a rodent romance.

        (Verse 2) Through the vents they crawled, like shadows they crept, Crackin’ safes with their claws, while the city slept, Whisperin’ secrets, in their hamster code, No one could stop them, they owned the road.

        Lil’ bandits of the underground, so sly, As they counted their loot, reachin’ for the sky, Hamster wheelin’, they had the skills, Pullin’ off heists for their thrills and thrills.

        (Chorus) Hamsters in the night, they’re on the run, Stealin’ all the cheddar, it’s just begun, Tiny paws, big dreams, they’re takin’ their chance, Hamsters robbin’ banks, a rodent romance.

        (Bridge) But the long arm of the law was closin’ in, Hamster SWAT teams, it was time to begin, A chase through the sewers, down the wire, The hamsters were on the edge, feelin’ the fire.

        (Verse 3) In the end, they were cornered, it was quite a scene, But these hamsters, they were tougher than they seemed, They fought for their freedom, they fought for their cheese, Tiny warriors, brought to their knees.

        But the legend lives on, in the city’s lore, The hamster heist, forevermore, Tiny rebels, brave and bold, Hamster bank robbers, the story’s told.

        (Chorus) Hamsters in the night, they’re on the run, Stealin’ all the cheddar, it’s just begun, Tiny paws, big dreams, they’re takin’ their chance, Hamsters robbin’ banks, a rodent romance.

        Yeah, hamsters robbin’ banks, that’s the story told, In the underground world, where legends unfold, Tiny but mighty, they took that chance, Hamsters with a dream, a rodent romance.

        • gestalt@lemmy.world

          Goddamn! Other than a few errors like “thrills and thrills”, this thing seems pretty cohesive!

          Has a small plot and everything. I hope this tech is used to make some insane human-machine collaborations.

          • PsychedSy@sh.itjust.works

            I’ve used ChatGPT to write poems for friends about their medical issues. It has to be the right person, but it’s funny as fuck if it lands.

          • jcit878@lemmy.world

            Andre Antunes got it to write a song in the style of Muse, and he added the music. It was half decent.

  • olsonexi@lemmy.wtf

    It’s so beautifully human that decades of scientific innovation paved the way for this technology, only for us to use it to look at boobs.

    • bitsplease@lemmy.ml

      I can’t remember the exact quote or where I read it (I think it might have been Mickey7 by Edward Ashton), but it went something like this:

      “Virtually all technological innovation throughout all time has been first and foremost used for one thing: easier and better access to porn. The printing press, the TV, the internet, VR, and ocular implants. What we couldn’t figure out how to watch porn with, we used to kill each other instead.”

      Frankly, anyone who first heard about AI image generation and didn’t immediately think “oh, people are gonna use that for porn” is incredibly naive lol

  • Hamartiogonic

    This is a part of a bigger topic people need to be aware of. As more and more AI is used in public spaces and the internet, people will find creative ways to exploit it.

    There will always be ways to make an AI do things its owners don’t want it to. You could think of it like the exploits used in speedrunning, but in this case there’s a lot more variety. Just as you can make an AI generate morally questionable material, you could potentially find a way to exploit the AI of a self-driving car to do whatever you can think of.

    • kromem@lemmy.world

      This is trivially fixable; it’s just at 2-3x the per-query cost, so it isn’t deemed worth it for high-volume chatbots given the low impact of jailbreaking.

      For anything where jailbreaking would somehow be a safety concern, that cost just needs to be factored in.

      • Hamartiogonic

        That’s true for all the things that can have a query cost. What about AI applications that don’t have any financial cost to the user? For instance, The Spiffing Brit keeps finding interesting ways to exploit the YouTube algorithm. I’m sure you can apply that same “hacker mentality” to anything with AI in it.

        At the moment, many of those applications are on the web, and that’s exactly where query costs can be a feasible way to limit the number of experiments you can reasonably run in order to find your favorite exploit. If it’s too expensive, you probably won’t find anything worth exploiting, and that should keep the system relatively safe. However, nowadays more and more AI is finding its way into the real world, which means that those exploits are going to have some very spicy rewards.

        Just imagine if traffic lights were controlled by an AI and you found an exploit that let you get a green light on demand. Applications like this don’t have any API query costs; you just need to be patient and try all sorts of weird stuff to see how the lights react. Sure, you can’t run a gazillion experiments in an hour, which means you might not find anything worth exploiting, but since there would be millions of people experimenting with the system simultaneously, surely someone would find an exploit.