Center Calendar

Why language remains AI-complete, & what that means for human cognition. Dr. Joshua Hartshorne, Asst. Professor, Psychology Department, Boston College

Tuesday, November 28, 2023, 02:00pm - 03:30pm

152 Frelinghuysen Rd, Busch Campus, Psych Bldg, Room 105


Abstract: Starting with Alan Turing, scientists have long speculated that any artificial system with human-level language understanding would necessarily have human-level intelligence, i.e., that language is "AI-complete". Does the linguistic success of Large Language Models (LLMs) such as ChatGPT mean that they are intelligent, or that Turing and others were wrong? I focus on a deceptively simple aspect of language (pronouns) that has been argued to depend heavily on non-linguistic intelligence. I show that while LLMs at first glance handle pronouns well, their abilities are surprisingly brittle, and they lack the phenomenal flexibility and (for lack of a better term) intelligence that characterizes humans. Through a series of large-scale behavioral experiments and computational modeling, I suggest that human linguistic processing draws on (and requires) robust, generative models of the world akin to those proposed by Theory Theory. I discuss implications for the structure of language, thought, and their interaction.

Bio: Dr. Hartshorne received his Ph.D. in psychology at Harvard University (advisor: Jesse Snedeker) and did his postdoctoral research at MIT with Josh Tenenbaum. Prior to graduate school, he worked with John Monahan, Yuhong Jiang, and Michael Ullman.

Suggested Readings:

- Levesque, Davis, & Morgenstern (2012). The Winograd Schema Challenge.
- Kocijan, Davis, Lukasiewicz, Marcus, & Morgenstern (2023). The defeat of the Winograd Schema Challenge. Artificial Intelligence.
- Hartshorne, O'Donnell, & Tenenbaum (2015). The causes and consequences implicit in verbs. Language, Cognition and Neuroscience, 30:6, 716-734.