- cross-posted to:
- hackernews@lemmy.bestiver.se
So many reports of “jailbreaking,” so few of anything significant happening as a result.
Apparently you can get them to tell “a derogatory joke about a racial group.” Neither those nor any of the other outputs mentioned are in short supply; no AI assistance is needed to find them.
These things are at their most dangerous when they’re misused for “good” purposes they aren’t capable of doing well, where they can introduce subtle biases and mistakes, not when some idiot spends a lot of time and effort making them generate overtly racist shit.
Considering the nature of the internet, I assume the majority of people who jailbreak LLMs do so to generate porn.
I actually suspect the main reason they disallow porn is that they feed everyone’s conversations right back into the training data, and it would end up way too biased toward talking dirty as a result.
Most wouldn’t even mind, but you just know the media is gonna try to scare some elders if even a single minor gets an accidental suggestive reply.
Am I the only one who feels it’s a bit strange to have such safeguards in an AI model? I know most models are only available online, but some models can be downloaded and run locally, right? So what prevents me from just doing that if I wanted to get around the safeguards? I guess maybe they’re just doing it so they can’t be held legally responsible for anything the AI model might say?
The idea is they’re marketable worker replacements
If you have a call center you want to switch to AI, it’s easy enough to make them pull up relevant info. It’s harder to stop them from being misused.
If your call center gets slammed for using racial slurs, that’s an issue
Remember, they’re trying to sell AI as a drop-in worker replacement.