OpenAI just admitted it can’t identify AI-generated text. That’s bad for the internet and it could be really bad for AI models.

In January, OpenAI launched a system for identifying AI-generated text. This month, the company scrapped it.

  • gedhrel@lemmy.ml · 11 months ago

    I think you’re trying to handwave at someone who knows more about the steganographic watermarking approach than you do.

    • cerevant@lemmy.world · 11 months ago

      AI content isn’t watermarked; if it were, detection would be trivial. What he’s describing is that a language model assigns each word a probability of appearing after the preceding words in a given context. While there is some randomness in the output, certain words or phrases are unlikely to appear because the data the model was trained on rarely used them.

      All I’m saying is that the more a writer’s style and word choice resemble that training data, the more likely their original content is to be flagged as AI-generated.
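
      To make that concrete, here’s a minimal sketch of that kind of probability check. It uses GPT-2 as a stand-in model and a made-up threshold; it is not what OpenAI actually ran, just an illustration of why “predictable to the model” is the only signal a detector like this has to work with.

      ```python
      # Minimal sketch of likelihood-based detection (illustrative only, not OpenAI's classifier).
      # Requires: pip install torch transformers
      import torch
      from transformers import GPT2LMHeadModel, GPT2TokenizerFast

      tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
      model = GPT2LMHeadModel.from_pretrained("gpt2")
      model.eval()

      def perplexity(text: str) -> float:
          """Average 'surprise' of the model at each token in the text."""
          enc = tokenizer(text, return_tensors="pt")
          with torch.no_grad():
              # labels=input_ids makes the model score its own next-token predictions
              out = model(enc.input_ids, labels=enc.input_ids)
          return torch.exp(out.loss).item()  # lower = more predictable text

      # Hypothetical cutoff: text the model finds very predictable gets flagged.
      # A human who happens to write in a very "average" style lands here too,
      # which is exactly the false-positive problem described above.
      THRESHOLD = 20.0

      def looks_ai_generated(text: str) -> bool:
          return perplexity(text) < THRESHOLD
      ```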