In recent months, we’ve been getting more blogspam accounts, and the administrators have been discussing behind the scenes how to deal with them. Blogspam is against the rules of this Lemmy instance and is treated the same as any other spam: offending posts will be removed and the blogspammer banned. I thought I’d share my thought process for moderating content like this.

Blogspam is a somewhat controversial topic with a lot of grey areas. It generally involves accounts that seem to have been made specifically to post links to one particular website, usually with the intent of generating ad revenue. Herein lies the grey area: simply posting links to your own website, or to a website you like, isn’t spam; it isn’t against the rules to post sites that have ads; and it isn’t against the rules for an organization to have an official account on Lemmy. So it becomes a problem of where to draw the line. It can also be hard to tell whether someone is intentionally spamming or just enthusiastic about a site’s content.

That said, here are my general criteria for what is considered blogspam, with some wiggle room on a case-by-case basis:

  • Does the user only post links to one or a few sites? Do they have any other activity, such as commenting or moderating communities?

  • How often does the user post? For example, it might not be reasonable to consider an account to be blogspamming if it only posts a few articles a month, even if they all link to the same site.

  • Does the user post the same link repeatedly? Do they post to communities where it would be off topic? Do they post the same link multiple times to a single community?

  • Is the user trying to manipulate the search feature in Lemmy? For example, by including a large number of keywords in their title or post body?

  • Is the site content “clickbait” or otherwise designed to mislead the reader?

  • Is the site trying to extract data or payment from readers? Examples include invasive tracking, or forcing users to sign up or pay for a membership before letting them read the article.

  • Is the site itself well-known and reputable or obscure and suspicious?

  • Does the site have an “inordinate” number of ads? Are the ads intrusive? (Autoplaying video ads versus simple sponsor mentions for example)

  • Is there evidence that the user is somehow affiliated with the site? Examples include sponsored links or having the username be the same as the site name.

  • Is there evidence that the user is a bot?

Not all of these have to be satisfied for it to be blogspam, and it’s usually up to the administrators to make a rational decision on whether to intervene.

Note that these criteria apply to sites that are generally benign but are being posted in a way that might count as spam. If the site contains malware, engages in phishing, is blatantly “fake news”, is a scam, or is otherwise malicious, that alone is reason enough for it to be removed and the poster potentially banned, and it constitutes a much more serious violation of our rules.
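As an illustration of how the criteria above might combine in practice, here is a rough sketch of a triage helper. The `Account` fields and thresholds are entirely hypothetical (nothing like this exists in Lemmy); it only collects suspicious signals, and a human still makes the final decision.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Hypothetical per-account stats a moderator might look at."""
    post_domains: dict = field(default_factory=dict)  # domain -> post count
    comment_count: int = 0
    posts_per_month: float = 0.0
    duplicate_links: int = 0          # same URL posted more than once
    offtopic_posts: int = 0
    keyword_stuffed_titles: int = 0
    username_matches_domain: bool = False

def blogspam_signals(a: Account) -> list[str]:
    """Return the criteria that look suspicious; a human still decides."""
    signals = []
    total_posts = sum(a.post_domains.values())
    if total_posts and max(a.post_domains.values()) / total_posts > 0.9:
        signals.append("posts almost exclusively one domain")
    if a.comment_count == 0 and total_posts > 5:
        signals.append("no activity besides posting links")
    if a.posts_per_month > 20:
        signals.append("very high posting frequency")
    if a.duplicate_links or a.offtopic_posts:
        signals.append("reposts the same link / posts off topic")
    if a.keyword_stuffed_titles:
        signals.append("keyword-stuffed titles (search manipulation)")
    if a.username_matches_domain:
        signals.append("username matches the site name")
    return signals
```

An account matching several signals would be flagged for review rather than banned automatically, in line with the point above that not all criteria have to be satisfied.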

I’m open to feedback on this, feel free to discuss in the comments!

  • [object Object]@lemmy.ml · +9/−1 · 4 years ago

    Seems mostly reasonable. Not sure about ban for “fake news” tho. It could be abused to silence critical voices. In the end it might produce echo chambers.

    Maybe it’s a good idea to add “repeated ban evasion” to the list. It could be checked through IP addresses or fingerprinting, though doing that without sacrificing privacy would be difficult. Account age might also indicate that something fishy is going on.

    As a solution, could such people just be downvoted to hell? Self-moderation is a nice benefit of reddit-like voting. Their effectiveness would plummet if nobody sees them, so they would eventually stop, right? To prevent spam, they could additionally be rate-limited if they only get downvotes. Also, what about user reports? Maybe those should also be taken into consideration (their relative quantity and validity). Restricting new accounts (until a certain amount of karma has been reached, for example) might also help against bots.
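    The downvote-based rate limiting suggested here could be sketched roughly as follows; the thresholds are arbitrary and this is purely hypothetical, not an existing Lemmy feature.

```python
def posting_cooldown_minutes(upvotes: int, downvotes: int,
                             base: int = 10, max_cooldown: int = 1440) -> int:
    """Scale the delay between posts by how badly recent posts were received.

    Well-received users keep the base delay; users whose posts are mostly
    downvoted wait progressively longer, up to max_cooldown (a full day).
    """
    total = upvotes + downvotes
    if total == 0:
        return base  # no signal yet, use the default delay
    downvote_ratio = downvotes / total
    if downvote_ratio < 0.5:
        return base  # mostly upvoted, no penalty
    # Interpolate from base up to max_cooldown as the ratio goes 0.5 -> 1.0
    scale = (downvote_ratio - 0.5) / 0.5
    return int(base + scale * (max_cooldown - base))
```

    The effect is that an account whose recent posts are uniformly downvoted can only post once a day, while normal users never notice the limit.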

    • nutomic@lemmy.mlM · +13 · 4 years ago

      I think it would be good to have a simple rule: you need to make at least 5 comments before you can create a post. It could apply overall or per community, and be configurable.

      • iortega@lemmy.eus · +6 · 4 years ago

        So they could just make 5 random comments before posting? That seems to me like a pretty easy measure to evade.

        • k_o_t@lemmy.ml · +2 · 4 years ago

          sounds good, although I think requiring 5 votes on the 5 necessary comments is a bit too high (comments naturally get much less traction than posts), but if this were fine-tuned based on user feedback, it should be fine

          what are your thoughts on adding an optional time delay (the way reddit now doesn’t allow you to post until your account is X hours old, or something similar)?

          • nutomic@lemmy.mlM · +1 · 4 years ago

            That also makes sense. I didn’t want to make the issue more complicated, but you could mention that in a comment there.

      • Ephera@lemmy.ml · +2 · 4 years ago

        I mean, I’ve seen (what I deemed to be) blogspam accounts that had created their own Community, so there’s apparently quite a lot of work those blogspammers are willing to go through to set things up.
        Of course, if the hurdle can be raised without impacting normal users, that would likely still help.

        Maybe the comment-requirement could also kick in after a few posts, so that new users/accounts can immediately create a post, but if they’ve created five posts without commenting once, then they get stopped from creating another post.
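        As a hypothetical sketch of that tweak (the function and field names are made up, not Lemmy’s actual schema):

```python
def may_create_post(post_count: int, comment_count: int,
                    free_posts: int = 5) -> bool:
    """Allow a new account's first `free_posts` posts unconditionally;
    after that, require at least one comment before further posts."""
    if post_count < free_posts:
        return True
    return comment_count > 0
```

        New users can post right away, but an account that reaches five posts with zero comments gets stopped until it comments at least once.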

      • k_o_t@lemmy.ml · +1 · edited · 4 years ago

        better yet, don’t make this rule public and change it from time to time 😉

        imo this would have a positive effect

        • nutomic@lemmy.mlM · +2 · 4 years ago

          I don’t think that is the right way to go for an open source project; it will just lead to frustration for new users. It’s better to make the details public, so everyone can participate in the discussion about whether the rules make sense.

    • GrassrootsReview@lemmy.ml · +2 · 4 years ago

      I remember an article claiming that one reason Reddit had it relatively easy in dealing with disruptive groups, compared to other social media systems, is that they have a ban evasion rule. So when a banned subreddit’s users created a new sub, it could be banned again before causing problems.