• MelodiousFunk@slrpnk.net · ↑47 · edited 8 months ago

        He’s got to get them from somewhere. They certainly aren’t coming from his little piggy brain.

      • Hubi@lemmy.world · ↑21 · 8 months ago

        Reddit is past the point of no return. He might as well speed it up a little.

      • paraphrand@lemmy.world · ↑10 · 8 months ago

        Like a built in brand dashboard where brands can monitor keywords for their brand and their competitors? And then deploy their sanctioned set of accounts to reply and make strategic product recommendations?

        Sounds like something that must already exist. But it would have been killed or hampered by API changes… so now Spez has a chance to bring it in-house.

        They will just call it brand image management. And claim that there are so many negative users online that this is the only way to fight misinformation about their brand.

        Or something. It’s all so tiring.

      • FinishingDutch@lemmy.world · ↑8 · 8 months ago

        Probably.

        So, we complain to a regulatory body, they investigate, they tell a company to do better or, waaaay down the road, attempt to levy a fine. Which most companies happily pay, since the profits from the shady business practices tend to far outweigh the fines.

        Legal or illegal really only means something when dealing with an actual person. Can’t put a corporation in jail, sadly.

  • IninewCrow@lemmy.ca · ↑86 ↓1 · 8 months ago

    Doesn’t mean that the fediverse is immune.

    News stories and narratives are still fought over by actors on all sides and sometimes by entities that might be bots. And there are a lot of auto-generating content bots that post stuff or repost old content from other sites like Reddit.

    • AggressivelyPassive@feddit.de · ↑26 ↓3 · 8 months ago

      Especially since being immune to censorship is kind of the point of the fediverse.

      If you’re even a tiny bit smart about it, you can start hundreds of sock puppet instances and flood other instances with bullshit.

      • WolfdadCigarette@threads.net@sh.itjust.works · ↑41 ↓2 · edited 8 months ago

        I try to avoid talking about how indefensibly terrible Lemmy’s anti-spam and anti-brigading measures are for fear of someone doing something with the information. I imagine the only thing keeping subtle disinfo and spam from completely overtaking Lemmy is how small its reach would be. Doing the same thing to Reddit is a hundred times more effective, and systemically accepted. Reddit’s admins like engagement.

        • IninewCrow@lemmy.ca · ↑17 · 8 months ago

          It’s an arms race and Lemmy is only a small player right now so no one really pays attention to our little corner. But as soon as we get past a certain threshold, we’ll be dealing with the same problems as well.

        • MysticKetchup@lemmy.world · ↑9 · 8 months ago

          I feel the same about a lot of Fediverse apps right now. They’re kinda just coasting on the fact that they’re not big enough for most spammers to care about. But they need to put in solid defenses and moderation tools before that happens

            • ✺roguetrick✺@lemmy.world · ↑7 ↓2 · 8 months ago

              Meta will likely actually moderate against spambots because they want you to fucking pay them for that service. The problem is, they aren’t too interested in moderating hate speech.

              • nickwitha_k (he/him)@lemmy.sdf.org · ↑1 · edited 8 months ago

                So, you’re suggesting that it is better that they are profiting from helping state actors and hate groups?

                Edit: No, they are not suggesting that. I misunderstood their meaning.

                • ✺roguetrick✺@lemmy.world · ↑3 · edited 8 months ago

                  I don’t think I made a value statement whatsoever. I think calling it a problem and hate speech would’ve been enough of a clue as to how I felt about it, however.

                  It’s actually why I support most instances defederating from them

      • old_machine_breaking_apart@lemmy.dbzer0.com · ↑1 · 8 months ago

        Can’t some instances make some sort of agreement and have a whitelist of instances not to block? People would need to register to add their instances to the list, some common measures would be applied to keep anyone from registering several instances at once, and people who misuse the system would be banned.

        That wouldn’t solve the problem, but perhaps would make things more manageable.
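
        A minimal sketch of what such a shared allowlist check could look like (the instance names, the registry format, and the function are hypothetical illustrations of the idea, not Lemmy’s actual federation code):

```python
# Hypothetical shared allowlist maintained by cooperating instances; entries are illustrative.
ALLOWLIST = {
    "lemmy.world",
    "lemmy.ml",
    "sh.itjust.works",
    "feddit.de",
}

def is_federation_allowed(remote_instance: str) -> bool:
    """Accept activities only from instances registered on the shared allowlist."""
    return remote_instance.lower() in ALLOWLIST

# Activities from unregistered instances would simply be dropped.
print(is_federation_allowed("lemmy.world"))        # True
print(is_federation_allowed("spam-instance.xyz"))  # False
```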

        • AggressivelyPassive@feddit.de · ↑4 ↓1 · 8 months ago

          You can’t block people. How would you know who registered the domain?

          What you’re proposing is pretty similar to the current state of email. It’s almost impossible to set up your own small mail server and have it communicate with the “mailiverse”, since everyone will just assume you’re spam. And that led to a situation where 99% of people are with one of the huge mail providers.

            • AggressivelyPassive@feddit.de · ↑3 · 8 months ago

              It’s extremely complicated and I don’t really see a solution.

              You’d need gigantic resources and trust in those resources to vet accounts, comments, instances. Or very in depth verification processes, which in turn would limit privacy.

              What I actually found interesting was Bluesky’s invite system. Each user got a limited number of invite links, and if a certain share of your invitees were banned, you’d be banned/flagged too. That creates a web of trust, but of course it also makes anonymous accounts impossible.
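
              A toy model of that invite-tree idea, assuming a simple rule like “flag an inviter once half or more of their invitees get banned” (the data layout and threshold are invented for illustration; this is not Bluesky’s actual system):

```python
# Toy invite-tree model: flag inviters whose invitees are frequently banned.
# Data and threshold are illustrative, not Bluesky's real system.
from collections import defaultdict

invited_by = {      # invitee -> inviter
    "bob": "alice",
    "carol": "alice",
    "dave": "alice",
    "erin": "bob",
}
banned = {"bob", "carol"}

BAN_RATIO_THRESHOLD = 0.5  # flag an inviter once half of their invitees are banned

def flagged_inviters() -> set[str]:
    invitees = defaultdict(list)
    for invitee, inviter in invited_by.items():
        invitees[inviter].append(invitee)
    flagged = set()
    for inviter, people in invitees.items():
        banned_share = sum(1 for p in people if p in banned) / len(people)
        if banned_share >= BAN_RATIO_THRESHOLD:
            flagged.add(inviter)
    return flagged

print(flagged_inviters())  # {'alice'} -- two of alice's three invitees are banned
```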

  • kingthrillgore@lemmy.ml · ↑76 ↓2 · 8 months ago

    Generative AI has really become a poison. It’ll be worse once the generative AI is trained on its own output.

    • Simon@lemmy.dbzer0.com · ↑36 ↓2 · edited 8 months ago

      Here’s my prediction. Over the next couple of decades the internet is going to be so saturated with fake shit and fake people, it’ll become impossible to use effectively, like cable television. After this happens for a while, someone is going to create a fast private internet, like a whole new protocol, and it’s going to require ID verification (fortunately automated by AI) to use. Your name, age, country, and state are all public to everybody else and embedded into the protocol.

      The new ‘humans only’ internet will be the new streaming and eventually it’ll take over the web (until they eventually figure out how to ruin that too). In the meantime, they’ll continue to exploit the infested hellscape internet because everybody’s grandma and grampa are still on it.
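
      Purely as an illustration of that hypothetical “humans only” protocol, here is roughly what an identity record embedded in a handshake might look like. Every field name and the hashing step are invented for the sketch; a real design would need actual signatures from whatever verification authority issues the record:

```python
# Speculative sketch of an identity record a peer would present when connecting.
# Field names, the issuer, and the digest scheme are invented for illustration only.
from dataclasses import dataclass
import hashlib
import json

@dataclass
class IdentityAttestation:
    name: str
    age: int
    country: str
    state: str
    issuer: str  # hypothetical verification authority that vouched for the record

    def digest(self) -> str:
        """Stable fingerprint other peers could check against the issuer's records."""
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

peer = IdentityAttestation(name="Jane Doe", age=34, country="US", state="OR",
                           issuer="example-authority")
print(peer.digest()[:16])  # shown alongside the public name/age/country fields
```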

        • rottingleaf@lemmy.zip · ↑4 ↓1 · 8 months ago

          Yup. I have my own prediction - that humanity will finally understand the wisdom of the PGP web of trust and use it for friend-to-friend networks over the Internet. After all, you can exchange public keys by scanning QR codes; it’s very intuitive now.

          That would be cool. No bots. Unfortunately, corps, govs, and other such mythical demons really want to be able to automate influencing public opinion. So this won’t happen until the potential of the Web for such influence is sucked dry. That is, until nobody in their right mind would use it.
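
          A toy sketch of that web-of-trust idea: accept a peer only if their key is endorsed by someone you already trust within a couple of hops. The endorsement data is made up and real PGP signature verification is left out; this only shows the trust-graph walk:

```python
# Toy web-of-trust check for a friend-to-friend network: a peer is trusted only if
# their key is endorsed within max_hops of your own. Endorsements are modeled as
# plain graph edges; real PGP signature verification is omitted.
from collections import deque

endorsements = {        # signer -> keys that signer has endorsed
    "me": {"alice", "bob"},
    "alice": {"carol"},
    "bob": {"dave"},
    "carol": {"eve"},
}

def is_trusted(peer: str, root: str = "me", max_hops: int = 2) -> bool:
    """Breadth-first walk over endorsements, limited to max_hops."""
    queue = deque([(root, 0)])
    seen = {root}
    while queue:
        node, depth = queue.popleft()
        if node == peer:
            return True
        if depth == max_hops:
            continue
        for nxt in endorsements.get(node, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return False

print(is_trusted("carol"))  # True: me -> alice -> carol, two hops
print(is_trusted("eve"))    # False: three hops away, beyond the limit
```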

      • Baylahoo@sh.itjust.works · ↑10 · 8 months ago

        That sounds very reasonable as a prediction. I could see it being a pretty interesting black mirror episode. I would love it to stay as fiction though.

      • k110111@feddit.de · ↑2 · 8 months ago

        New models already train on synthetic data. It’s already a solved problem.

          • k110111@feddit.de · ↑1 · 8 months ago

            All the latest models are trained on synthetic data generated by GPT-4, even the newer versions of GPT-4 itself. OpenAI realized it too late and had to edit their license after Claude was launched. Human-generated data could only get us so far; the recent Phi-3 models, which manage to perform very well for their size (3B parameters), can only achieve this because of synthetic data generated by AI.

            I didn’t read the paper you mentioned, but recent LLMs have progressed a lot, not just on benchmarks but also when evaluated by real humans.
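
            A rough sketch of the synthetic-data loop being described: a stronger “teacher” model writes candidate examples, low-quality ones are filtered out, and the survivors become fine-tuning data for a smaller model. The generate and scoring functions below are placeholders, not any real model API:

```python
# Sketch of a synthetic-data pipeline: a teacher model generates, a filter prunes,
# and the rest becomes training data. Both functions are placeholders for model calls.
def generate_with_teacher(prompt: str) -> str:
    """Placeholder for a call to a large 'teacher' model."""
    return f"Synthetic answer to: {prompt}"

def quality_score(example: str) -> float:
    """Placeholder filter; real pipelines use classifiers or model-graded checks."""
    return 1.0 if len(example) > 20 else 0.0

prompts = ["Explain federation in one sentence.", "What is a spam bot?"]
dataset = []
for p in prompts:
    candidate = generate_with_teacher(p)
    if quality_score(candidate) >= 0.5:  # keep only examples that pass the filter
        dataset.append({"prompt": p, "response": candidate})

print(len(dataset), "synthetic training examples kept")
```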

  • tearsintherain@leminal.space · ↑61 · edited 8 months ago

    So the human shills that already destroyed good faith in forums and online communities over time are now being fully outsourced to AI. Amazon itself is a prime source of enshittification. From fake reviews to everyone with a webpage having affiliate links trying to sell you some shit or other. Including news outlets. Turned everyone into a salesperson.

  • ColeSloth@discuss.tchncs.de · ↑59 ↓4 · 8 months ago

    I called this shit out like a year ago. It’s the end of online search results having much truth to them. All we’ll have left to trust is YouTube videos from Project Farm.

    • Debs@lemmy.zip · ↑36 · edited 8 months ago

      It kinda seems like the end of the Google era. What will we search Google for when the results are all crap? These are the death gasps of the internet I/we grew up with.

      • Hugh_Jeggs@lemm.ee · ↑32 ↓1 · 8 months ago

        Remember when you could type a vague plot of a film you’d heard about into Google and it’d be the first result?

        Nah doesn’t work anymore

        Saw a trailer for a french film so I searched “french film 2024 boys live in woods seven years”

        Google - 2024 BEST FRENCH FILMS/TOP TEN FRENCH FILMS YOU MUST SEE THIS YEAR/ALL TIME BEST FRENCH MOVIES

        Absolute fucking gash

        I’ve not been too impressed with Kagi search, but at least the top result there was “Frères 2024”

        • EatATaco@lemm.ee · ↑2 ↓1 · edited 8 months ago

          Remember when you could type a vague plot of a film you’d heard about into Google and it’d be the first result?

          I honestly don’t remember this at all. I remember priding myself on my “google-fu” and how to search it to get what I, or other people, needed. Which usually required understanding the precise language that you would need to use, not something vague. But over the years it’s gotten harder and harder, and now I get frustrated with how hard it has become to find something useful. I’ve had to go back to finding places I trust for information and looking through them.

          Although, ironically, I can do what you’re talking about with ai now.

      • Wiz@midwest.social · ↑18 ↓1 · 8 months ago

        Maybe web rings of the 90s were not such a bad idea! Let’s bring 'em back!

      • rottingleaf@lemmy.zip · ↑1 ↓1 · 8 months ago

        I’m feeling old, and I’m 28.

        Cause in my early childhood, in 2003-2007, we would resort to search engines only when we couldn’t find something by better (but more manual and social) means.

        Because - mwahahaha - most of the results were machine-generated crap.

        So I actually feel very uplifted by people predicting that the Web will get back to normal in this sense.

    • BurningnnTree@lemmy.one · ↑21 ↓1 · 8 months ago

      I ran into this issue while researching standing desks recently. There are very few places on the internet where you can find verifiably human-written comparisons between standing desk brands. Comments on Reddit all seem to be written by bots or people affiliated with the brands. Luckily I managed to find a YouTube reviewer who did some real comparisons.

  • istanbullu@lemmy.ml · ↑42 ↓5 · 8 months ago

    You don’t get to blame AI for this. Reddit was already overrun by corporate and US gov trolls long before AI.

    • TheFriar@lemm.ee · ↑20 · edited 8 months ago

      “New poison has been added to arsenic. Should you stop drinking it? Subscribe to find out.”

    • Rinox@feddit.it · ↑11 · 8 months ago

      The problem is the magnitude, but yeah, even before 2020 Google was becoming shit and being overrun by shitty blogspam trying to sell you stuff with articles clearly written by machines. The only difference is that it was easier to spot and harder to do. But they did it anyway

      • rottingleaf@lemmy.zip · ↑2 ↓1 · 8 months ago

        These things became shit around 2009, or right after they became popular enough to push out LiveJournal and similar platforms (the original Web 2.0, or maybe Web 1.9, you could call them).

        What does this have to do with search engines? Well, when they existed alongside web directories and other alternative, more social and manual ways of finding information, you could just switch to those if a search engine became too blatant about promoting some things and hiding what it didn’t want you to see. You could compare one to another and notice that Google was doing a bad job in a given case. The end result wasn’t shaped for you.

        Now that whatever Google returns has become the criterion for what you’re supposed to associate with a given query, and the same goes for social media, the outcome was decided.

  • funn@lemy.lol · ↑37 ↓1 · 8 months ago

    I don’t understand how Lemmy/Mastodon will handle similar problems: spammers crafting fake accounts to post AI-generated comments for promotions.

    • FeelThePower@lemmy.dbzer0.com · ↑26 ↓2 · 8 months ago

      The only thing we reasonably have is security through obscurity. We are something bigger than a forum but smaller than Reddit in terms of active user size. If such a thing were to happen here, mods could probably handle it more easily (like when we had the Japanese-text spammer back then), but if it were to happen on a larger scale than we have now, it would be harder to deal with.

      • old_machine_breaking_apart@lemmy.dbzer0.com · ↑12 ↓1 · 8 months ago

        There’s one advantage on the fediverse. We don’t have corporations like reddit manipulating our feeds, censoring what they dislike, and promoting shit. This alone makes using the fediverse worth it for me.

        When it comes to problems involving the users themselves, things aren’t that different, and we don’t have much to do.

        • MinFapper@lemmy.world · ↑22 · 8 months ago

          We don’t have corporations manipulating our feeds

          yet. Once we have enough users that it’s worth their effort to target, the bullshit will absolutely come.

          • old_machine_breaking_apart@lemmy.dbzer0.com · ↑9 · 8 months ago

            they can perhaps create instances, pay malicious users, try some embrace, extend, extinguish approach or something, but they can’t manipulate the code running on the instances we use, so they can’t have direct power over it. Or am I missing something? I’m new to the fediverse.

          • bitfucker@programming.dev · ↑4 ↓1 · 8 months ago

            Federation means that if you’re federated with them, then sure, you get some BS. Otherwise, business as usual. Now, making sure there are no paid users or corporate bots is another matter entirely, since that relies on instance moderators.

        • deweydecibel@lemmy.world · ↑1 · 8 months ago

          We don’t have the corporations like reddit manipulating our feeds, censoring what they dislike, and promoting shit.

          Corporations aren’t the only ones with incentives to do that. Reddit was very hands off for a good long while, but don’t expect that same neutral mentality from fediverse admins.

      • linearchaos@lemmy.world · ↑10 · 8 months ago

        I think the real danger here is subtlety. What happens when somebody asks for recommendations on a printer, or complains about their printer being bad, and all of a sudden some long-established account recommends a product they’ve been happy with for years? And it turns out it’s just an AI bot shilling for Brother.

        • deweydecibel@lemmy.world · ↑4 · 8 months ago

          For one, well-established brands have less incentive to engage in this.

          Second, in this example, the account in question being a “long established user” would seem to indicate you think these spam companies are going to be playing a long game. They won’t. That’s too much effort and too expensive. They will do all of this on the cheap, and it will be very obvious.

          This is not some sophisticated infiltration operation with cutting edge AI. This is just auto generated spam in a new upgraded form. We will learn to catch it, like we’ve learned to catch it before.

          • linearchaos@lemmy.world · ↑2 · 8 months ago

            I mean, it doesn’t have to be expensive. And it also doesn’t have to be particularly cutting edge. Start throwing some credits into an LLM API, have it randomly read and help people out in different groups. Once it reaches some amount of reputation, have it quietly shill for them. Pull out posts that contain keywords. Have the AI consume the posts and figure out if they have to do with what they sound like they do. Have it subtly do product placement. None of this is particularly difficult or groundbreaking. But it could help shape our buying habits.

    • deweydecibel@lemmy.world · ↑1 · edited 8 months ago

      The same way it’s handled on Reddit: moderators.

      Some will get through and sit for a few days but eventually the account will make itself obvious and get removed.

      It’s not exactly difficult to spot these things. If an account is spending the majority of its existence on a social media site talking about products, even if they add some AI generated bullshit here and there to make it seem like it’s a regular person, it’s still pretty obvious.

      If the account seems to show up pretty regularly in threads to suggest the same things, there’s an indicator right there.

      Hell, you can effectively bait them by making a post asking for suggestions on things.

      They also just tend to have pretty predictable styles of speech, and they never fail to post the URL with their suggestion.
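
      A minimal sketch of heuristics along those lines (share of product talk, the same suggestion repeated across threads, URLs attached to recommendations); the signals and thresholds are invented for illustration and are not an actual moderation tool:

```python
# Minimal heuristic score for a likely shill account, based on the signals above.
# Keyword list, signals, and weighting are illustrative only.
import re
from collections import Counter

PRODUCT_WORDS = {"buy", "discount", "deal", "recommend", "brand", "product"}

def shill_score(comments: list[str]) -> float:
    if not comments:
        return 0.0
    # share of comments that talk about products
    product_share = sum(any(w in c.lower() for w in PRODUCT_WORDS) for c in comments) / len(comments)
    # share of comments that include a URL
    url_share = sum(bool(re.search(r"https?://", c)) for c in comments) / len(comments)
    # how often the single most repeated comment recurs
    repetition = Counter(c.lower() for c in comments).most_common(1)[0][1] / len(comments)
    return round((product_share + url_share + repetition) / 3, 2)

history = [
    "I recommend the AcmePrint 3000, great deal here https://example.com/acme",
    "I recommend the AcmePrint 3000, great deal here https://example.com/acme",
    "Anyone else watching the game tonight?",
]
print(shill_score(history))  # 0.67 -- higher values suggest a closer look is warranted
```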

    • Aabbcc@lemm.ee · ↑18 ↓8 · 8 months ago

      AI is a tool. It can be used for good and it can be used for poison. Just because you see it being used for poison more often doesn’t mean you should be against AI. Maybe lay the blame on the people using it for poison.

  • nytrixus@lemmy.world · ↑36 ↓3 · 8 months ago

    Correction - AI is poisoning everything when it is not regulated and moderated.

    Reddit has been poisoning itself for a while, what’s the difference? Just AI borrowing from the shithead behavior?

      • Crikeste@lemm.ee · ↑24 ↓1 · 8 months ago

        Lol, you think allowing people and businesses to do whatever the fuck they want is a good thing.

      • UnderpantsWeevil@lemmy.world · ↑16 ↓2 · 8 months ago

        The regulations we implement are written by the Sam Bankman-Frieds and Elon Musks who can capture the regulatory agencies. The moderation is itself increasingly automated, for the purpose of inflating perceived quality and quantity of interactions on the website.

        Get back to a low-population IRC or Discord server, a small social media channel, or a… idfk… Lemmy instance? Suddenly regulation and moderation by, of, and for the user base starts looking much nicer.

  • PrincessLeiasCat@sh.itjust.works · ↑30 · 8 months ago

    The creator of the company, Alexander Belogubov, has also posted screenshots of other bot-controlled accounts responding all over Reddit. Belogubov has another startup called “Stealth Marketing” that also seeks to manipulate the platform by promising to “turn Reddit into a steady stream of customers for your startup.” Belogubov did not respond to requests for comment.

    What an absolute piece of shit. Just a general trash person to even think of this concept.

  • laverabe@lemmy.world · ↑28 ↓1 · 8 months ago

    I just consider any comment after Jun 2023 to be compromised. Anyone who stayed after that date either doesn’t have a clue, or is sponsored content.

  • vegaquake@lemmy.world · ↑21 · 8 months ago

    yeah, the internet is doomed to be unusable if AI just keeps getting more insidious like this

    yet more companies tie themselves to online platforms, websites, and other models of operation that depend on being always connected.

    maybe the world needs a reboot, just get rid of it all and start from scratch

    • UnderpantsWeevil@lemmy.world · ↑13 ↓1 · 8 months ago

      maybe the world needs a reboot, just get rid of it all and start from scratch

      That would destroy all the old good vintage stuff and leave us with machines that immediately fill the vacant space with pure trash.

      • vegaquake@lemmy.world · ↑3 ↓1 · 8 months ago

        rapture but with technology would be pretty funny

        save the good old stuff and burn the rest

  • sirspate@lemmy.ca · ↑13 · edited 8 months ago

    If the rumor is true that a reddit/google training deal is what led to reddit getting boosted in search results, this would be a direct result of reddit’s own actions.

  • CazzoBuco@lemmy.world · ↑12 ↓1 · 8 months ago

    When the internet is eventually oversaturated with smartbots, where will the humans go?

  • catch22@programming.dev · ↑11 · 8 months ago

    This is a direct consequence of Google targeting Reddit posts in its search results. Hopefully forum groups like Lemmy don’t get buried under a mountain of garbage as well. As long as advertisers are able to destroy public forums and communities, with ad-revenue sites like Google directing whom to target, we will always be creating something great while constantly trying to keep advertisers from turning it into a pile of crap.

    • NeptuneOrbit@lemmy.world · ↑5 · 8 months ago

      The history of TV, in reverse. And then forward again.

      At first, it was an impossibly expensive medium ruled by a cartel of agencies and advertisers. Eventually, HBO comes along and shows you don’t have to just make a bunch of lowest-common-denominator drivel.

      Netflix eventually shows that the internet can be a way cheaper model than cable. Finally, money shows up in the streaming model, remaking advertiser friendly cable in the internet age. All in about 2.5 decades.