Just a few years ago, you would never see such a disparity in votes vs comments. But these days, this is pretty much the norm. I’ve seen posts with 10K+ upvotes and no more than 80 comments.
I’d say in about 2 years, the entire place is going to be bots posting AI-generated content that mimics “real users”, using their new Dynamic Product Ads tool. Not sure how that’s legal, as I thought ads needed to be marked or differentiated from regular content, but here we are.
The future looks bleak and AI even bleaker. Because it’s going to be used against us to make the rich richer and not to make our lives better.
We can still find engagement in small niche subs on Reddit. We’ve known, for many years, that people were going to move away from large corporate-controlled sites such as Reddit, Twitter etc…
The Fediverse is addressing this. It isn’t a panacea. However, it is a re-imagining of what we want the Internet to be.
There are many others, that will come along after us, to address this further.
What will stop bots from coming here? Registration filters and user reports?
Bots are already proliferating across the fediverse. Kbin is constantly spammed with “buy online drugs here” links. Transparent bots (those tagged as bots) try to boost engagement by reposting things from Reddit, but they’re still perpetuating one of the worst aspects of Reddit even if they’re upfront about it. AI-generated articles on obvious junk websites are constantly being spammed by the same accounts.
It’s a difficult problem to solve.
One thing I noticed the other day, while banning one such bot, is that the same network has been posting on Reddit as well.
Turns out the Reddit ones have been posting the spam for months, while the Lemmy ones get banned within hours.
Part of that is the lower volume of content here, but part of it is also the great people that take the time to report bad content ♥️
I always report. However, I heard that reports only go to the admins of your own instance. Maybe future releases will support cross-instance reporting and let admins “trust” bans issued by admins of other instances.
I’m fuzzy on the details, but I do get reports from users on another instance as long as it’s “relevant” (ex. in one of our communities, one of our users)
Banning a foreign user on our instance will fix the problem for our instance, but they need to be banned on the home instance too in order to stop the spam from continuing
Does ActivityPub report back bans to the user’s home instance? I could see a moderation tool that let the admin autoban their users if enough federated instances had banned them.
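As far as I know, nothing like this exists in Lemmy today, but the threshold idea from the comment above is simple enough to sketch. Everything here is hypothetical (the function, the data shapes, the instance names), not part of any real Lemmy or ActivityPub API:

```python
# Hypothetical sketch of a "federated ban threshold" rule: ban an actor
# locally once enough *trusted* instances have already banned them.
# None of these names come from the actual Lemmy/ActivityPub API.

def should_autoban(actor: str,
                   bans_by_instance: dict[str, set[str]],
                   trusted_instances: set[str],
                   threshold: int = 3) -> bool:
    """Return True if `actor` is banned on at least `threshold` trusted instances.

    bans_by_instance maps an instance domain to the set of actors it has
    banned, as (hypothetically) reported back over federation.
    """
    votes = sum(1 for inst in trusted_instances
                if actor in bans_by_instance.get(inst, set()))
    return votes >= threshold
```

The point of the trusted-instance list is that a raw count would let a hostile admin spin up throwaway instances to get arbitrary users auto-banned everywhere, so each admin would curate whose bans they count.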
Got examples? I’ve never seen this once as a Kbin user.
They come up every few weeks, usually admins ban them quickly
Here’s one: https://kbin.social/m/random/t/1060795 I always see them popping up under random@kbin.social
I would imagine IP bans would be useful, although you run into the problem other websites are having: legitimate users on VPNs get caught by IP bans because botnets use the same VPN exit addresses.
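To make the VPN collision concrete, here’s a tiny illustration using Python’s stdlib `ipaddress` module. The VPN range list and function name are made up for the example (the address block is a reserved documentation range, not a real VPN’s):

```python
import ipaddress

# Hypothetical list of known shared VPN exit ranges. 203.0.113.0/24 is an
# RFC 5737 documentation block, used here purely as a stand-in.
SHARED_VPN_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def ban_would_hit_shared_ip(ip: str) -> bool:
    """True if banning this address would also block every legitimate
    user routing through the same shared VPN exit."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in SHARED_VPN_RANGES)
```

A moderation tool could use a check like this to warn an admin that an IP ban will have collateral damage, which is exactly why sites end up choosing between blocking VPNs wholesale or tolerating some bot traffic.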
There will always be bots on the Internet. I do not believe this is a solvable problem. Instead, we focus on mitigation.
However, Reddit has little incentive to fight the bots because it increases engagement metrics. In fact, it costs money and reduces profits to reduce bot activity. Hence, so many bots.
Right here on Lemmy, because nobody financially benefits from turning a blind eye to the problem, I think we have a head start. This platform is created by users for users. For that reason, I think we should never have the problem quite to the same extent as they do.
There are spambots still posting on Usenet newsgroups, even after organic users abandoned them twenty years ago.
Yes, if it grows large enough, the bots will come.
The Fediverse doesn’t have any defenses against AI impersonators though, aside from irrelevance. If it gets big the same incentives will come into play.