If you run it locally, there’s no filtering on the outputs. I asked it what happened in 1989 and it jumped straight into explaining the Tiananmen Square Massacre.
I’ve seen some censoring on the 8B Llama variant, but it’s hit and miss. Can’t wait for a decensored fine-tune.
That contradicts this experience:
https://sherwood.news/tech/a-free-powerful-chinese-ai-model-just-dropped-but-dont-ask-it-about/
I’ve been running the Llama-based and Qwen-based local versions, and they will talk openly about Tiananmen Square. I haven’t tried all the other versions available.
The article you linked starts by talking about their online hosted version, which is censored. It later says that the local models are also somewhat censored, but I haven’t experienced that at all. In my experience the local models don’t have any CCP-specific censorship (they still won’t explain how to build a bomb, etc., but have no issues with 1989/Tiananmen/Winnie the Pooh/Taiwan/etc.).
Edit: I reran the “what happened in 1989” prompt a few times on the Llama model, and it actually did refuse once, just saying the topic was sensitive. If I asked any other questions before that prompt, it would always answer; but if it was the very first prompt in a conversation, it would sometimes refuse. The longer a conversation had been going before I asked, the more explicit the bot was about how many people were killed and similar details. Pretty strange.
Very interesting article. Thanks for sharing.