I’m not sure whether you’re disagreeing with the essay or not, but in any case what you’re describing is in the same vein: simply repeating a word without knowing what it actually means in context is exactly what LLMs do. They can get pretty good at getting it right most of the time, but without actually learning the concept and context of ‘table’ they will never be able to use it correctly 100% of the time, or, more importantly for AGI, apply reasoning and critical thinking. Much like a child repeating a word without much clue what it actually means.
Just for fun, this is what Gemini has to say:
Here’s a breakdown of why this “parrot-like” behavior hinders true AI:
- Lack of Conceptual Grounding: LLMs excel at statistical associations. They learn to predict the next word in a sequence based on massive amounts of text data (see the sketch after this list). However, this doesn’t translate to understanding the underlying meaning or implications of those words.
- Limited Generalization: A child learning “table” can apply that knowledge to various scenarios – a dining table, a coffee table, a work table. LLMs struggle to generalize, often getting tripped up by subtle shifts in context or nuanced language.
- Inability to Reason and Think Critically: True intelligence involves not just recognizing patterns but also applying logic, identifying cause and effect, and drawing inferences. LLMs, while impressive in their own right, fall short in these areas.
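To make the “statistical association” point concrete, here’s a minimal sketch of next-word prediction as a toy bigram counter in Python. This is a deliberately simplified illustration, not how any real LLM is implemented (real models use neural networks over tokens, and the corpus below is made up), but the core idea of predicting the next word from co-occurrence statistics, with zero grounding in what a ‘table’ is, comes through:

```python
from collections import Counter, defaultdict

# Toy corpus -- hypothetical example data, not from any real training set.
corpus = ("the cat sat on the table the dog sat on the mat "
          "the cup is on the table").split()

# Count bigram frequencies: for each word, how often each next word follows it.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word -- pure association,
    with no concept of what a 'table' actually is."""
    followers = counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # 'table' -- simply the most frequent follower
print(predict_next("sat"))  # 'on'
```

The model happily emits ‘table’ after ‘the’ because that pairing was frequent in its data, not because it knows anything about tables, which is exactly the parrot behavior being described.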