- cross-posted to:
- aicompanions@lemmy.world
- hackernews@lemmy.smeargle.fans
I often see people with an outdated understanding of modern LLMs.
This is probably the best interpretability research to date, by the leading interpretability research team.
It’s worth a read if you want a peek behind the curtain on modern models.
That’s a chicken-and-egg situation, though. Is the bias the result of a mind, or the result of training on data full of common human biases, all compiled by humans? Are these traits actually measurable, or are we just anthropomorphizing a machine the way we do everything else?