I wouldn’t be surprised if the ban was a pretext and the sub was just something admins found objectionable for their own reasons. As long as mods remove material and users when an issue is brought to their attention, the sub should be fine.
The fact that they don’t know why it happened suggests they weren’t given a real chance to correct the issue. Just centralised social media things, I guess.
Reddit is notorious for responding to financial incentives. In the past, they would ban communities only when those communities became toxic to advertisers due to overwhelming negative publicity. During those purges, they would often throw in some leftist subs to keep the user base’s political average from shifting leftward, but the purges were never proactive.
I think we’ve entered a new era where Reddit is no longer as concerned about which subs might scare advertisers and is more concerned about which subs generate the kind of content that is valuable for LLM training. If I were training the next version of ChatGPT, I would be alarmed if a text prompt spontaneously invited me to masturbate with it, or if prompts for images of a “battle station” resulted in walls of women having sex.
What?
I had to stop reading this shit. I can’t believe there’s an article about it. I guess 404 can go ahead and go bankrupt, fuck off.
Edit: to be fair, it was a quote from a Vice article.
Just some gooning with the bois.
It seems like they’re worse about it now that they’ve IPOed. Or maybe that was just in the lead-up to the IPO.
I would hope that people training AI models would be selective about which subs to include or exclude.
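For what it’s worth, a first pass at that kind of selectivity is trivial if you’re working from a Pushshift-style JSONL dump, where each comment record carries a `subreddit` field. Here’s a minimal sketch; the deny-list names are placeholders, not the actual subs anyone filters on:

```python
import json
import sys

# Hypothetical deny-list -- placeholder names, not real filtering criteria.
DENYLIST = {"example_nsfw_sub", "example_spam_sub"}

def keep(record: dict) -> bool:
    """Keep a record only if its subreddit is not on the deny-list."""
    return record.get("subreddit", "").lower() not in DENYLIST

# Stream a JSONL dump on stdin; emit the filtered stream on stdout.
for line in sys.stdin:
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        continue  # skip corrupt lines rather than aborting the whole pass
    if keep(record):
        sys.stdout.write(line)
```

Whether any lab actually does something this coarse is anyone’s guess, but a subreddit-level allow/deny list is the obvious cheap filter before any fancier content-based scoring.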
Probably Spez thinks we should be saving our sperm and also tanning our testicles like Tucker Carlson.
You read an article about gooning and you’re upset that you’ve now learned what gooning is???
The article is about Reddit banning a community for no real reason, with no option for recourse. My issue was that I thought that was kind of an over-the-top description.