I know there are other ways of accomplishing that, but this might be a convenient way of doing it. I’m wondering though if Reddit is still reverting these changes?

  • Lvxferre@mander.xyz · ↑90 ↓1 · 8 months ago

    Let’s pretend for a moment that we know that Reddit has any sort of decent versioning system, and that it keeps the old versions of your comments alongside the newer ones, and that it’s feeding the LLM with the old version. (Does it? I have my doubts, given that Reddit Inc. isn’t exactly competent.)

    Even then, I think that it’s sensible to use this tool, to scorch the earth and discourage other human users from adding their own content to that platform. It still means less data for Google to say “it’s a bunch of users, who cares about the intellectual property of those filthy things? Their data is now my data. Feed it to the wolves, er, to Gemini”.

    • T156@lemmy.world · ↑32 ↓1 · edited · 8 months ago

      > Let’s pretend for a moment that we know that Reddit has any sort of decent versioning system, and that it keeps the old versions of your comments alongside the newer ones, and that it’s feeding the LLM with the old version. (Does it? I have my doubts, given that Reddit Inc. isn’t exactly competent.)

      They almost certainly do, if only because of the practicalities: adding a new comment and having it fetched in place of the old one is simpler than making and propagating an edit across all their databases. With few exceptions, it’d be easier to implement an edit as an additional comment with an incremented version number, fetching the latest version on read, than to scan through the entire database making in-place changes.
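
      The append-and-increment scheme described above can be sketched in a few lines (a hypothetical schema, not Reddit’s actual one):

```python
import sqlite3

# Hypothetical append-only comment store: an edit inserts a new row with a
# higher version number instead of updating the old row in place.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE comment_versions (
        comment_id INTEGER,
        version    INTEGER,
        body       TEXT,
        PRIMARY KEY (comment_id, version)
    )
""")

def post(comment_id, body):
    conn.execute("INSERT INTO comment_versions VALUES (?, 1, ?)",
                 (comment_id, body))

def edit(comment_id, new_body):
    # An "edit" only appends; no scan-and-update over existing rows is needed.
    (latest,) = conn.execute(
        "SELECT MAX(version) FROM comment_versions WHERE comment_id = ?",
        (comment_id,)).fetchone()
    conn.execute("INSERT INTO comment_versions VALUES (?, ?, ?)",
                 (comment_id, latest + 1, new_body))

def fetch(comment_id):
    # Readers always get the latest version; older rows survive untouched.
    return conn.execute(
        "SELECT body FROM comment_versions WHERE comment_id = ? "
        "ORDER BY version DESC LIMIT 1", (comment_id,)).fetchone()[0]

post(42, "original comment")
edit(42, "edited to nonsense")
print(fetch(42))  # -> edited to nonsense
```

      Under a scheme like this, overwriting your comments hides them from readers but leaves every earlier version sitting in the table.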

      It would also help with any administration/moderation tasks if they could see whether people posted rule-breaking content and then tried to hide it behind edits.

      That said, one of the many Spez controversies did show that they are capable of making actual edits on the back end if they wished.

      • Lvxferre@mander.xyz · ↑21 · 8 months ago

        > They almost certainly do, if only because of the practicalities of adding a new comment

        If this is true, it shifts the problem from “not having it” to “not knowing which version should be used” (to train the LLM).

        They could feed it the unedited versions and call it a day, but a lot of the time people edit their content to correct it or add further info, especially for “meatier” content (like tutorials). So there’s still some value in the edits, and I believe that Google will be at least tempted to use them.

        If that’s correct, editing it with nonsense will lower the value of edited comments for the sake of LLM training. It should have an impact, just not as big as if they kept no version system.

        > It would also help with any administration/moderation tasks if they could see whether people posted rule-breaking content and then tried to hide it behind edits.

        I know from experience (I’m a former Reddit janny) that moderators can’t see earlier versions of the content, only the last one. The admins might though.

        > That said, one of the many Spez controversies did show that they are capable of making actual edits on the back end if they wished.

        The one from TD, right?

        • spez: “let them babble their violent rhetoric. Freeze peaches!”
        • also spez: “nooo, they’re casting me in a bad light. I’m going to edit it!”
        • londos@lemmy.world · ↑5 · 8 months ago

          Honestly, parsing through version history is something an LLM could actually handle. It might even make more sense of a thread with the history than without it; for example, when someone replies to a comment and the parent is then edited to say something different. No one would have to waste their time filtering anything.

          • Lvxferre@mander.xyz · ↑3 · edited · 8 months ago

            They could use an LLM to parse through the version history of all those posts/comments, to use it to train another LLM with it. It sounds like a bad (and expensive, processing time-wise) idea, but it could be done.

            EDIT: thinking further on this, it’s actually fairly doable. It’s generally a bad idea to feed the output of an LLM into another, but in this case you’re simply using it to pick one among multiple versions of a post/comment made by a human being.

            It’s still worth scorching the earth, though, so other human users don’t bother with the platform.

        • GBU_28@lemm.ee · ↑7 ↓2 · 8 months ago

          Wouldn’t be hard to scan a user and say:

          • they existed for 5 years.
          • they made something like 5 comments a day, and edit 1 or 2 comments a month.
          • then suddenly, on March 7th 2024, they edited 100% of their comments across all subs.
          • so: use the comment versions from March 6th 2024.
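
          That heuristic fits in a few lines. A sketch, with made-up thresholds rather than anything Reddit or Google has actually disclosed:

```python
import statistics
from collections import Counter
from datetime import date

def find_scorch_date(edit_dates, spike_factor=10):
    """Return the first day whose edit count dwarfs the user's typical
    activity: the likely mass-edit ("scorch") event."""
    per_day = Counter(edit_dates)
    if not per_day:
        return None
    baseline = statistics.median(per_day.values())  # typical edits per active day
    for day in sorted(per_day):
        if per_day[day] > spike_factor * baseline:
            return day
    return None

# A user who edits 1-2 comments a month, then ~100 comments in one day:
history = [date(2024, 1, 5), date(2024, 2, 5)] + [date(2024, 3, 7)] * 100
cutoff = find_scorch_date(history)
print(cutoff)  # -> 2024-03-07; train on versions from before this date
```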
          • Lvxferre@mander.xyz · ↑7 ↓1 · 8 months ago

            It would.

            First you’d need to notice the problem. Does Google even realise that some people want to edit their Reddit content to boycott LLM training?

            Let’s say that Google did it. Then it’d need to come up with a good (generalisable, low amount of false positives, low amount of false negatives) set of rules to sort those out. And while coming up with “random” rules is easy, good ones take testing, trial and error, and time.

            But let’s say that Google still does it. Now it’s retrieving and processing a lot more info from the database than just the content and its context: account age, when each piece of content was submitted, and when it was edited.

            So doing it still increases the costs associated with the corpus, making it less desirable.

            • GBU_28@lemm.ee · ↑3 · 8 months ago

              Huh? Reddit has all of this, plus the changes, in their own DBs. Google has nothing to do with this; it’s all pre-handover.

              • Lvxferre@mander.xyz · ↑1 ↓2 · edited · 8 months ago

                I’m highlighting that having the data is not enough, if you don’t find a good way to use the data to sort the trash out. Google will need to do it, not Reddit; Reddit is only handing the data over.

                Is this clear now? If you’re still struggling to understand it, refer to the context provided by the comment chain, including your own comments.

                • GBU_28@lemm.ee · ↑2 ↓1 · edited · 8 months ago

                  I’m saying Reddit will not ship a trashed deliverable. Guaranteed.

                  Reddit will have already preprocessed for this type of data damage. This is basic data engineering: finding events in the data and understanding a time series of events is trivial.

                  Google will be receiving data that is uncorrupted, because they’ll get data properly versioned to before the damaging event.

                  If a high-edit event happens on March 7th, they’ll ship March 7th minus one day. Guaranteed.

                  Edit, to be clear: you’re ignoring/not accepting the practice of flagging a high volume of edits per user as an event, and using that timestamped event as a signal of data validity.

          • Voroxpete@sh.itjust.works · ↑1 · 8 months ago

            It sounds like what’s needed here is a version of this tool that makes the edits slowly, at random intervals, over a period of time. And perhaps has the ability to randomize the text in each edit so that they’re all unusable garbage, but different unusable garbage (like the suggestion of taking ChatGPT output at really high temp that someone else made). Maybe it also only edits something like 25% of your total comment pool, and perhaps makes unnoticeably minor edits (add a space, remove a comma) to a whole bunch of other comments. Basically masking the poison by hiding it in a lot of noise?
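
            The scheduling logic for such a tool might look like this sketch, where `overwrite_comment` stands in for a hypothetical API client and the ratios and intervals are made up:

```python
import random
import string

def garble(text):
    """Replace a comment with fresh random garbage (different every time)."""
    words = ["".join(random.choices(string.ascii_lowercase, k=random.randint(3, 9)))
             for _ in range(max(5, len(text.split())))]
    return " ".join(words)

def minor_tweak(text):
    """Unnoticeably minor edit: toggle a trailing space."""
    return text.rstrip() if text.endswith(" ") else text + " "

def plan_edits(comments, poison_ratio=0.25):
    """Garble a random quarter of the comment pool and lightly touch the rest,
    hiding the poison in a lot of noise."""
    poisoned = set(random.sample(range(len(comments)),
                                 int(len(comments) * poison_ratio)))
    return [(i, garble(c) if i in poisoned else minor_tweak(c))
            for i, c in enumerate(comments)]

edits = plan_edits(["some old comment"] * 20)

# The real tool would then apply these slowly, at random intervals:
# for comment_id, new_body in edits:
#     overwrite_comment(comment_id, new_body)      # hypothetical API call
#     time.sleep(random.uniform(3600, 86400))      # hours to a day apart
```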

            • GBU_28@lemm.ee · ↑2 · edited · 8 months ago

              Now you’re talkin’.

              An intra-comment edit threshold would be fun to explore.

    • reksas · ↑13 ↓1 · 8 months ago

      What if we edit the comments slowly, a few words or even letters at a time? Then, if they save all of the edits, they will end up with a lot of pointless versions. And if they don’t, the buffer will eventually fill up and the original gets lost.

      • Lvxferre@mander.xyz · ↑6 · 8 months ago

        I’ll ping @lemmyvore@feddit.nl because the answer is relevant for both.

        Another user mentioned the possibility that they could use an LLM to sort this shit out. If that’s correct neither slow edits nor multiple edits will do much, as the LLM could simply pick the best version of each comment.

        And while it’s a bit silly to use LLM to sort data out to train another LLM, this sounds like the sort of shit that Google could and would do.

    • chalupapocalypse@lemmy.world · ↑7 · 8 months ago

      Let’s also pretend that reddit isn’t a cesspool of bots, marketing campaigns, foreign agents, incels, racists, Republicans, gun nuts, shit posters, trolls…the list goes on.

      Is it even that valuable? It didn’t take long for that Microsoft bot to turn into Hitler; feeding Reddit into an “AI” is like speedrunning Ultron.

      • Lvxferre@mander.xyz · ↑4 · 8 months ago

        It’s still somewhat valuable due to the size of the corpus (it’s huge) and because people used to share technical expertise there.

    • lemmyvore@feddit.nl · ↑7 · 8 months ago

      Even if they had comment versioning, who’s gonna dig through the versions to figure out which are nonsense. Just use the overwrite tool several times and then wish them good luck.

  • Limeey@lemmy.world · ↑62 ↓9 · 8 months ago

    When you edit your comment, all you’re doing is adding a “new” comment; the old comment is flagged to not show, and the new one displays in its place.

    This achieves nothing.

    • abhibeckert@lemmy.world · ↑69 ↓5 · 8 months ago

      Reddit was open source until relatively recently. According to the source code, editing comments does overwrite your data. Or at least it used to.

      Keeping old data is expensive, and usually a waste of money.

      • magic_lobster_party@kbin.run · ↑32 ↓1 · 8 months ago

        It’s not a waste of money if you can sell it.

        And a text comment is rarely more than 1 kB. They could probably fit more than 1 billion comments on a single 1 TB drive if they wanted, which is peanuts in terms of storage.

      • T156@lemmy.world · ↑25 · edited · 8 months ago

        Relatively recently being 6 years ago.

        > Keeping old data is expensive, and usually a waste of money.

        At the same time, text, which was all Reddit hosted for a good long time, compresses really well. The entirety of Wikipedia goes from 10 TB to 100 GB when compressed, and the article text alone is just 22 GB.
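
        The general point is easy to sanity-check with the standard library, though repeating one snippet like this exaggerates the ratio a real corpus would get:

```python
import zlib

# Natural-language text is redundant, so it compresses well; repetition
# inflates the effect, but real corpora still shrink severalfold.
text = ("Keeping old data is expensive, and usually a waste of money. "
        "At the same time, text compresses really well. " * 500).encode()

compressed = zlib.compress(text, level=9)
print(f"{len(text)} bytes -> {len(compressed)} bytes "
      f"({len(text) / len(compressed):.0f}x smaller)")
```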

        That’s a drop in the bucket compared to the amount of data that they would have had to deal with when they started deciding to take on video and image hosting.

      • SorteKanin@feddit.dk · ↑16 ↓1 · 8 months ago

        Text data is like practically 0 compared to all the rest of the data (i.e. images for instance).

      • ColeSloth@discuss.tchncs.de · ↑2 · 8 months ago

        Keeping old comments data is small and relatively cheap to store. I’m sure they’ve kept backups. Probably even yearly ones for the past 5 years. Storage for text really doesn’t take up much room. There’s over 4,500,000,000 words in the entirety of Wikipedia. You can download it all right now if you’d like. An offline copy of wiki is currently about 95GB. Probably half the size of your last CoD game update.

    • Imgonnatrythis@sh.itjust.works · ↑2 · 8 months ago

      I’m still pretty happy that I can change all my comments to quips from Story of the Eye or Jabberwocky, and I would encourage everyone to do the same. Seems like a good fuck-around-and-find-out situation, at least. There will likely be other LLMs that have no official relationship with Reddit but crawl it anyway. The more we can jumble it up, the better.

  • Olap@lemmy.world · ↑54 ↓15 · 8 months ago

    Reddit is almost certainly going to throw your old comments to them if you edit stuff, so we’re pretty fucked. And if you think Lemmy is any different, guess again: we agreed to send our comments to everyone else in the fediverse, bad actors included, and a legal minefield essentially lets LLMs do what they want. The good news is that LLMs are all crap, and people are slowly realising this.

    • SorteKanin@feddit.dk · ↑51 ↓1 · 8 months ago

      > And if you think Lemmy is any different, guess again

      Lemmy is different, in that the data is not being sold to anyone. Instead, the data is available to anyone.

      It’s kind of like open source software. Nobody can buy it, cause it’s open and free to be used by anyone. Nobody profits off of it more than anyone else - nobody has an advantage over anyone else.

      Open source levels the playing field by making useful code available to everyone. You can think of comments and posts on the Fediverse in the same way - nobody can buy that data, because it’s open and free to be used by anyone. Nobody profits off of it more than anyone else and nobody has an advantage over anyone else (after all, everyone has access to the same data).

      The only catch is that you have to be okay with your data being out there and available in this way… but if you’re not, you probably shouldn’t be on the internet at all.

      • tabular@lemmy.world · ↑4 · 8 months ago

        If the post is creative then it’s automatically copyrighted in many countries. That doesn’t stop people collecting it and using it to train ML (yet).

        • asret@lemmy.zip · ↑2 · 8 months ago

          Copyright has little to say about training models; it’s the published output that matters.

    • kernelle@lemmy.world · ↑10 ↓2 · edited · 8 months ago

      > LLMs are all crap, and people are slowly realising this

      LLMs have already changed the tech space more than anything else in the past 10 years, at least. I get what you’re trying to say, but that opinion will age like milk.

      Edit: made wording clearer

    • maegul (he/they)@lemmy.ml · ↑9 ↓3 · 8 months ago

      I’ve been harping on about this for a while on the fediverse … private/closed/non-open spaces really ought to be thought about more. Fortunately, lemmy core devs are implementing local only and private communities (local only is already done IIRC).

      Yes, they introduce their own problems with discovery and gating, etc. But the internet’s “you’re the product” stakes have gone beyond what could be construed as a reasonable transaction (“my attention on an ad, for a service”) to “my mind’s products, aggregated into an energy-sucking, job-replacing AI, for a service”. It’s time to normalise closing that door on opportunistic tech capitalists.

    • TORFdot0@lemmy.world · ↑7 ↓1 · 8 months ago

      LLMs are great for anything you’d trust to an 8 year old savant.

      It’s great for getting quick snippets of code using languages and methods that have great documentation. I don’t think I’d trust it for real work though

    • my_hat_stinks@programming.dev · ↑2 · 8 months ago

      They’ll use old comments either way, using an up-to-date dataset means using a dataset already tainted by LLM-generated content. Training a model on its own output is not great.

      Incidentally this also makes Lemmy data less valuable, most of Lemmy’s popularity came after the rise of LLMs so there’s no significant untainted data from before LLMs.

  • OutrageousUmpire@lemmy.world · ↑27 ↓1 · 8 months ago

    If one wanted to really screw the AI, one could replace each post/comment with nonsense generated by ChatGPT itself on a higher-than-normal temperature setting. The AI would then be training on its own generated content, out of context to boot.
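
    Mechanically, that comes down to a single sampling parameter. A sketch against an OpenAI-compatible chat endpoint; the model name is an assumption, and the actual comment-overwriting tool is out of scope:

```python
import json
import urllib.request

def build_payload(prompt, temperature=1.8):
    # Temperatures above ~1.0 flatten the token distribution, so output
    # drifts toward high-entropy nonsense (the API caps the value at 2.0).
    return {
        "model": "gpt-4o-mini",  # assumed model name; swap in whatever is current
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def generate_nonsense(api_key, prompt="Write a paragraph of plausible filler."):
    """Fetch one high-temperature completion from the chat completions API."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```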

  • ramble81@lemm.ee · ↑29 ↓4 · 8 months ago

    1. Reddit will most likely feed these guys a copy of their DB from before the API switch, ensuring an unfucked copy of the data from before people started messing with it.

    2. The only way to control your data, even on the fediverse, is through DRM: the thing so many people hate, but one designed to ensure you control who uses your data and how. I know people say “well, what about copyrights and licenses?” Tell that to people building LLMs in jurisdictions that don’t care about those.

    • PlexSheep@feddit.de · ↑8 · 8 months ago

      DRM always fails, and would fail especially bad in an open and free community which has the purpose of being open and free. DRM is the mortal enemy of many fediverse users.

  • SirSamuel@lemmy.world · ↑22 ↓3 · edited · 8 months ago

    We are commodities

    We exist to be bought and sold

    By the ruling class

    I have been bought and sold

    Many many times

    But only my thoughts

    And identity

    And words

    And face

    So that’s okay

    I’ll just scroll other stolen thoughts

    On a phone built by an eight year old

    Who was bought

    And sold

    Half a world away

    • sigmaklimgrindset · ↑7 · 8 months ago

      Damn, ChatGPT’s poetry extension is fire

      (Joking aside, reading this after reading Banksy’s statement on advertising is just a great double whammy. Love heading to bed with a vague sense of unease :,) )

      • SirSamuel@lemmy.world · ↑5 · 8 months ago

        Thank you! This was actually my first attempt at free form poetry, it just kind of flowed out of me. It only took till middle age for inspiration to strike lol

        • TORFdot0@lemmy.world · ↑3 · 8 months ago

          Actually, Microsoft’s Bing AI just ingested your poetry into its training set and now it’s Copilot’s poetry. You have 30 minutes to pay Microsoft 2.4 million dollars, or SupremacyAGI will take your house, break your kneecaps, and murder your dog.

          • SirSamuel@lemmy.world · ↑2 · 8 months ago

            Joke’s on you: I don’t own a house, can’t have a pet in my apartment, and my knees are already bad.

            I love this timeline

  • rtxn@lemmy.world · ↑15 · 8 months ago

    I have two unrelated questions.

    • Can I choose what text to use?

    • What is the copyright status of Ram Ranch?

  • BreakDecks@lemmy.ml · ↑8 · 8 months ago

    Why non-copyrighted? I want to flood Reddit with copyrighted text from the most aggressively litigious rightsholders available. 🍿

    • Kory@lemmy.ml (OP) · ↑2 · 8 months ago

      It was irony. The tool itself makes that clear by providing a link you should NOT use, because it’s copyrighted (!!!).

  • execia@lemmy.today · ↑7 · 8 months ago

    Where the hell do I come up with an incoherent piece of text? I could give a copyrighted article but I’m already subbed to r/conspiracy and I want to add random bullshit to my account. Should I write my own or find a copypasta?

    • Confound4082@lemmy.ml · ↑11 · 8 months ago

      I went to chat gpt and I prompted it with “what is a string of words or characters that would be detrimental to an AI that is being directed to learn from a dataset” and then used a script to edit all my comments to that.

  • solrize@lemmy.world · ↑10 ↓7 · edited · 8 months ago

    It’s not reddit’s data, it’s the users’. Reddit management is just overentitled jerks.

      • abhibeckert@lemmy.world · ↑3 ↓2 · edited · 8 months ago

        Admittedly, I haven’t read the TOS… but I don’t need to. At least where I live, it would be illegal to claim ownership of someone else’s work (unless you paid a living wage for its creation, or something along those lines; a software company, for example, can claim ownership of software created by its employees).

        • Docus@lemmy.world · ↑8 ↓1 · 8 months ago

          Maybe you should read them. They are not claiming ownership; they are claiming that you licensed them to use your contributions for whatever purpose they want. Different thing.

    • FoxBJK@midwest.social · ↑5 ↓1 · 8 months ago

      The users grant the site a pretty broad license to their content, so calling it the user’s data is a moot point.

      I don’t even recall whether the Lemmy instance I use has a TOS, but the server owner likely has similar rights just by the nature of how this tech works.