- cross-posted to:
- aicompanions@lemmy.world
- hackernews@lemmy.smeargle.fans
I often see people with an outdated understanding of modern LLMs.
This is probably the best interpretability research to date, by the leading interpretability research team.
It’s worth a read if you want a peek behind the curtain on modern models.
There is no mind. It's pretty clear these people don't understand their own models. Pretending there's a mind, along with the other absurd anthropomorphisms, doesn't inspire any confidence. Claude is not a person, jfc.
You’re reading the title too literally. “Mind” is only mentioned once in the entire article, and that’s in the title.