I mean, there might be a secret AI technology so advanced that it can mimic a real human: make posts and comments that look like they were written by a human, and even intentionally make speling mistakes to simulate human errors. How do we know that such an AI hasn’t already infiltrated the internet, and that everything you see is posted by this AI? If such an AI actually exists, it’s probably so advanced that it almost never fails, barring rare situations where there is an unexpected errrrrrrrrrorrrrrrrrrrr…

[Error: The program “Human_Simulation_AI” is unresponsive]

  • Hamartiogonic · 1 year ago

    Look into the research on Large Language Models (LLMs). Even the latest and greatest models have issues that come up under rigorous testing. For example, GPT-4 (the one used by Bing) fails miserably if you ask: “How many words will there be in your next answer?”
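    One way to see why that word-count question is hard: the model has to commit to a number before it knows how long its answer will be. Here is a minimal sketch (the checker and the sample replies are my own illustration, not part of any actual benchmark) that tests whether a reply's claimed word count matches its real length:

```python
import re

def answer_is_self_consistent(answer: str) -> bool:
    """Return True if the first number in the answer equals
    the answer's actual word count."""
    match = re.search(r"\d+", answer)
    if match is None:
        return False  # no numeric claim to check
    return int(match.group()) == len(answer.split())

# A self-consistent reply: the sentence really is 6 words long.
print(answer_is_self_consistent("This reply has exactly 6 words."))  # True

# An inconsistent reply, the kind of failure described above.
print(answer_is_self_consistent("There will be 10 words in this answer."))  # False
```

    Most models fail this probe because they generate text one token at a time, with no way to count ahead.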

    You can spot an older LLM by asking about relationships that require some understanding of the real world. For example: “I found a shirt under the car, but it was wet. Which one was wet?” GPT-4 knows enough about the world to infer that “it” more plausibly refers to the shirt, but older models would have failed this question. Every new LLM still has some weak spots, so look up what they are.

    Tom Scott made an interesting video about what the situation was 3 years ago. Obviously, LLMs are a fast moving target right now, so that video aged like milk.