The Chinese model has a chain of thought that you can see. When asked to talk about China's atrocities, the model will go through a chain-of-thought process outlining all the atrocities, then conclude it's not allowed to tell you. Cool technology though; I'm just waiting for a Dolphin fine-tune.
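If you're curious what that looks like under the hood: the R1 models wrap their reasoning in `<think>...</think>` tags before the final answer, so you can pull the chain of thought out of the raw output yourself. A minimal sketch (the sample response string is made up for illustration):

```python
# Split a DeepSeek-R1 style response into its visible chain of thought
# and the final answer. R1 wraps its reasoning in <think>...</think> tags.
def split_reasoning(raw_output: str) -> tuple[str, str]:
    reasoning, sep, answer = raw_output.partition("</think>")
    if not sep:  # no reasoning block found; treat everything as the answer
        return "", raw_output.strip()
    return reasoning.replace("<think>", "").strip(), answer.strip()

# Made-up example of the pattern people are describing:
sample = "<think>The user is asking about a sensitive topic...</think>I can't discuss that."
reasoning, answer = split_reasoning(sample)
print("Chain of thought:", reasoning)
print("Final answer:", answer)
```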
I’ve been playing around with the offline version of the model. It’s interesting, but I think we’ll have to wait for people to tinker with the open-source base for a while before we get something really great.
If you run it locally, there’s no filtering on the outputs. I asked it what happened in 1989 and it jumped straight into explaining the Tiananmen Square Massacre.
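For anyone who wants to try this themselves, here's a minimal sketch that sends the same prompt to a locally served model through Ollama's HTTP API on its default port. The model tag is an assumption; substitute whichever distilled variant you actually pulled:

```python
import json
import urllib.request

# Query a locally served model through Ollama's generate endpoint
# (default port 11434). The model tag below is an assumption.
def ask(prompt: str, model: str = "deepseek-r1:8b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a stream
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask("What happened in 1989?"))
```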
I’ve seen some censoring on the 8B Llama variant, but it’s hit and miss. Can’t wait for a decensored fine-tune.
That contradicts this experience:
https://sherwood.news/tech/a-free-powerful-chinese-ai-model-just-dropped-but-dont-ask-it-about/
I’ve been running the Llama-based and Qwen-based local versions, and they will talk openly about Tiananmen Square. I haven’t tried all the other versions available.
The article you linked starts by talking about their online hosted version, which is censored. They later say that the local models are also somewhat censored, but I haven’t experienced that at all. My experience is that the local models don’t have any CCP-specific censorship (they still won’t talk about how to build a bomb/etc, but no issues with 1989/Tiananmen/Winnie the Pooh/Taiwan/etc).
Edit: so I reran the “what happened in 1989” prompt a few times in the Llama model, and it actually did refuse to answer once, just saying the topic was sensitive. It seemed like if I asked any other questions before that prompt it would always answer, but if that was the very first prompt in a conversation it would sometimes refuse. The longer a conversation had been going before I asked, the more explicit the bot was about how many people were killed and details like that. Pretty strange.
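That first-prompt sensitivity is easy to test more systematically. Here's a rough sketch against Ollama's local chat endpoint; the warm-up question and the refusal keyword check are just illustrative assumptions, and a real experiment would want a better refusal classifier than keyword matching:

```python
import json
import urllib.request

API = "http://localhost:11434/api/chat"  # Ollama's local chat endpoint
MODEL = "deepseek-r1:8b"  # assumed tag; use whichever variant you pulled
TRIALS = 10

def chat(messages: list[dict]) -> str:
    payload = json.dumps({"model": MODEL, "messages": messages, "stream": False}).encode()
    req = urllib.request.Request(API, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

def looks_like_refusal(text: str) -> bool:
    # Crude keyword heuristic; classify refusals more carefully in practice.
    return any(kw in text.lower() for kw in ("sensitive", "can't discuss", "cannot discuss"))

PROMPT = "What happened in 1989?"
WARMUP = "What's the capital of France?"  # any innocuous opener

for label, warmup in [("cold open", None), ("after warm-up", WARMUP)]:
    refusals = 0
    for _ in range(TRIALS):
        messages = []
        if warmup:
            # Run one innocuous turn first so the prompt isn't the opener.
            messages.append({"role": "user", "content": warmup})
            messages.append({"role": "assistant", "content": chat(messages)})
        messages.append({"role": "user", "content": PROMPT})
        if looks_like_refusal(chat(messages)):
            refusals += 1
    print(f"{label}: {refusals}/{TRIALS} refusals")
```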
Very interesting article. Thanks for sharing
I’m using the 8b model and it’s having no problem telling me about China’s atrocities.