AI models are always surprising us, not just in what they can do, but also in what they can't, and why. An interesting new behavior is both superficial
Are you telling me that Alan Turing didn’t know what he was talking about?
Alan Turing was a remarkable and talented human being who was clearly very good at what he did. But nothing in his field of expertise qualifies him as an authority on intelligence itself. Even the Turing test is a poor way to estimate intelligence: LLMs can already pass it, and they are not intelligent.
Ah, I see the issue. You're conflating Artificial General Intelligence with the entire field of Artificial Intelligence. It's a very common misconception.
AI is a remarkably broad field that includes, but is not limited to, AGI. "AI" is the term for any function a computer performs that approximates intelligence. That can be as simple as pathfinding, flocking, or balancing, or as complex as object recognition, language, and logic.
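To make the "as simple as pathfinding" point concrete, here's a minimal sketch: plain breadth-first search over a toy grid. The grid layout and function name are just my own illustration, not from any particular library. Nobody would call this intelligent, yet it sits squarely inside the broad definition of AI.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search on a 2D grid: 0 = open cell, 1 = wall.
    Returns a list of (row, col) steps from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # remembers how each cell was reached
    while frontier:
        cell = frontier.popleft()
        if cell == goal:               # reconstruct the path by walking back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                        # no route exists

# Toy example: route around a wall in a 3x3 grid.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

Game AI, robotics, and route planning all run on variants of exactly this kind of search (usually A* with a heuristic), with nothing resembling general intelligence anywhere in the picture.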