Alaknár

  • 5 Posts
  • 1.27K Comments
Joined 9 months ago
Cake day: June 29th, 2025

  • Alaknár to Privacy@lemmy.ml · Reddit and FaceID Verification
    7 hours ago

    You just replied with a “nuh-uh!” and called me “terminally online”? That’s a good one.

    You seem to know a bunch of buzzwords that you don’t fully understand, like “crowd sourcing” in this instance. It’s like a magic wand: “just crowd source it, and it’ll just work”, without realising that - again, unless telepathy is involved - a crowd is still just a bunch of individuals. Without instantaneous, real-time communication, no single individual can spot patterns as massive as the ones you’re after. Without spotting the “big picture”, the whole thing is pointless.


  • Alaknár to Privacy@lemmy.ml · Reddit and FaceID Verification
    9 hours ago

    > What do you think crowd sourcing means

    It means a bunch of people working in very narrow fields whose results need to be connected by someone with an overarching view - but in this particular case there are so many such narrow fields that it’s impossible for a human to handle.

    Unless you figured out telepathy. Then I retract my statements - a large enough network of directly connected telepaths could do this.


  • Alaknár to Privacy@lemmy.ml · Reddit and FaceID Verification
    9 hours ago

    > Look at this fucking guy. Likely not a bot. But is an example of someone whose posting pattern is suspicious

    You have just defined why your method doesn’t work.

    > There is no issue with random people using an LLM to craft their messages. The issue is using a network of bots to promote the latest Marvel movie

    You either detect AI by their language or you don’t.

    But, I think, I know what you mean. Your idea is like Bat-sonar, the super-totally-not-magical computer he built in the second or third Nolan film that allowed him to spy on everybody and thus detect crimes faster.

    You want a system that would monitor ALL content online and detect “patterns”. Like, “huh, weirdly, we have XXX number of people writing positively about the new JJ Abrams film”, or “check it out, in the past hour we’ve had 43243 comments negative about MAGA”.

    Right?

    If so: mate… You require literal magic to pull it off. WAY too many false positives, or just impossible-to-trace dependencies. You would have to not only monitor for these patterns, but also associate them with real-world events (ALL events), because maybe a Polish nationalist politician said something about the financing methods of their military, which got popular on Russian Twitter, got a funny anti-MAGA retweet from a Ukrainian, ended up as a reaction video on British TikTok, and got posted to Reddit, where it was upvoted to r/All and received 43243 100% legitimate comments complaining about MAGA.

    Funnily enough, if anything, MAYBE a complex enough AI system would be capable of finding these patterns, but there’s absolutely no physical possibility of humans doing that.



  • > By your claim, the field can have any series of numbers, there is no way to determine if it is accurate, and the law that this was done to appease is bad, as in not able to obtain its expected result, and so the data is useless.

    Yeah, so this tells me you have no clue what you’re actually talking about.

    You don’t even stop to think for a second that maybe some enterprise setting requires such fields. Maybe they have software that populates account information based on their HR systems’ data, auto-creating user accounts? Maybe they can find a DoB field useful? Zero clue, zero thought, just “I don’t use it, therefore it’s useless”.

    No point in continuing this discussion, I guess.




  • Alaknár to Privacy@lemmy.ml · Reddit and FaceID Verification
    1 day ago

    > I am saying we should be crowd sourcing our collective anger and ADHD or Autism or whatever (…) and instead focus on collection of the worst bot infestations.

    That’s what “being a moderator” is, mate. You want hundreds of thousands of moderators.

    > There are patterns. Bots are not random enough that they can’t be identified with large crowd sourced efforts

    You’re wrong.

    > It becomes obvious there is a script only when you collect the data and begin to analyze it.

    You just said:

    > I am not saying we need to build our own bot detection

    So, which is it?

    > It becomes obvious there is a script only when you collect the data and begin to analyze it.

    There’s a massive difference between local news stations receiving a script to read out, and a bot farm having a “be negative, unfriendly, sow chaos” instruction.

    > At the start it won’t be accurate

    So, it just won’t work? Got it.

    > But as more data is collected it’ll become obvious

    I don’t think you understand what you’re talking about. Don’t get me wrong, I’m not trying to be contrarian here, I just honestly think that your idea of “AI bots” is kind of like “we have prepared one million sentences, and now our bots will be picking between them to generate whole posts on social networks”.

    I mean, sure, there can be patterns - like the whole “LinkedIn post” style, where most of the time it’s fairly obvious that you’re reading an AI-generated slop… But that’s not what state-entities or even just hackers use. They have access to much more sophisticated content.
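    There is a kernel of truth here, but only for the crudest slop. A deliberately naive sketch (the phrase list is invented purely for illustration) shows both why this sometimes works and why it’s trivially evaded:

```python
# Deliberately naive "slop detector": flags text containing stock phrases
# associated with generated LinkedIn-style posts. The phrase list is
# invented for illustration only.
STOCK_PHRASES = [
    "i'm humbled to announce",
    "in today's fast-paced world",
    "let that sink in",
]

def looks_generated(text: str) -> bool:
    # Substring match against the known-phrase list - nothing smarter.
    lowered = text.lower()
    return any(phrase in lowered for phrase in STOCK_PHRASES)

print(looks_generated("In today's fast-paced world, synergy is key."))  # True
print(looks_generated("Tweak the wording and the filter misses it."))   # False
```

    The moment a list like this becomes known, the operator rewords their prompt and the detection rate collapses.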

    > If the bots were that good, these websites would have left their APIs open.

    Reddit’s API is no longer open. Didn’t do a thing to stop bots.

    > But they closed them so we can’t collect this data

    You don’t need however many API keys to collect that kind of data. At least not from Reddit.

    > Our inaction to do anything when the greatest opportunities are right in front of us but slipping away is a tragedy of this generation.

    Your proposed action is the equivalent of Sisyphus and his stone. Because you really seem to be forgetting that the AI tech is getting better all the time. And that any AI-detection actions you take feed that process. “Oh, they’ve detected these posts? OK, let’s tweak the algo until we get through and then flood them with our content”.

    Let’s even assume that you somehow pull it off and get a 100% detection rate as of right now. Six months down the line that will go down to 20%. Etc. etc. And you’ll be catching thousands of legitimate users in the crossfire.

    An anonymous “proof of humanity” token solves all AI issues without anyone having to spend billions on research and manpower.
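    For the curious, a toy sketch of how such a token could work, using RSA blind signatures in the spirit of Privacy Pass. The numbers here are textbook-sized for illustration; real deployments use proper key generation and padding:

```python
# Toy RSA blind-signature "proof of humanity" token.
# Issuer's RSA key: n = 61 * 53, public exponent e, private exponent d.
# Parameters are illustrative textbook values, NOT secure ones.
n, e, d = 3233, 17, 413

# 1. User proves humanity once (e.g. a CAPTCHA), picks a token value and
#    a random blinding factor r, and sends only the blinded token.
token = 123
r = 19
blinded = (token * pow(r, e, n)) % n

# 2. Issuer signs the blinded value without ever seeing the token itself.
blind_sig = pow(blinded, d, n)

# 3. User unblinds the signature locally: r^(ed) == r (mod n), so
#    dividing by r leaves a plain signature on the token.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any site can verify the token is issuer-signed.
print("token verified:", pow(sig, e, n) == token)  # token verified: True
```

    The issuer attests “this request came from a verified human” without ever seeing the token it signs, so the spent token can’t be linked back to the verification step - anonymity without billions in AI-detection research.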