On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜, Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell, 2021. FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery). DOI: 10.1145/3442188.3445922 - This seminal paper critically examines the limitations and risks of large language models, particularly their lack of genuine comprehension and the dangers of attributing human-like understanding to them.
The Long Road to Understanding, Michael C. Frank, Noah D. Goodman, 2022. Annual Review of Developmental Psychology, Vol. 9 (Annual Reviews). DOI: 10.1146/annurev-devpsych-032221-020527 - Discusses the fundamental differences between human and LLM 'understanding,' emphasizing that LLMs are sophisticated statistical systems rather than systems that genuinely comprehend meaning.