cross-posted from: https://lemmings.world/post/21993947
Since I suggested that I'm willing to hook my computer up to an LLM and a Mastodon account, I've gotten vocal anti-AI sentiment. I'm wondering if the Fediverse has a plugin to find bots larping as people. As of now I haven't made the bot, and I won't disclose when I do.
What I would expect to happen is: their posts quickly start getting many downvotes and comments saying they sound like an AI bot. This, in turn, will make it easy for others to notice and block them individually. Other than that, I’ve never heard of automated solutions to detect LLM posting.
Ahhhhh, I doubt average Lemmy users are smart enough to detect LLM content. I've already thought of a few ways to find LLM bots.
The further I get down this thread, the more you sound like a person I don't want to deal with. And looking at the downvotes, I'm not the only one.
If you want people blocking you, perhaps followed by communities and instances blocking you as well, carry on.
That's fine if people don't want to deal with me; I've never interacted with them before this thread (most likely).
IMO their style of writing is very noticeable. You can obscure it by prompting the LLM to deliberately change its style, but I think it's still often noticeable: not only specific wordings, but the higher-level structure of replies as well. At least, that's always been the case for me with ChatGPT. I don't have much experience with other models.
That’s not entirely true. University assignments are scanned for signs of LLM use, and even with several thousand words per assignment, a not insignificant proportion comes back with an ‘undecided’ verdict.
With human post-processing it's definitely more complicated. Bots usually post fully automated content, without human supervision or editing.