Psycholinguistics and computational linguistics are the two fields most dedicated to accounting for the computational operations required to understand natural language. Today, both fields find themselves responsible for understanding the behaviors and inductive biases of "black-box" systems: the human mind and artificial neural-network language models (NLMs), respectively. In this talk, I argue that the theory and controlled experimental paradigms of psycholinguistics can help advance our understanding of which human-like grammatical generalizations NLMs do and do not exhibit. Since contemporary NLMs can be trained on a human lifetime's worth of text or more, these investigations are not only of practical relevance but also bear on questions of learnability at the heart of linguistic theory. We find that NLMs trained on naturalistic corpora exhibit a range of subtle behaviors, including embedding-depth tracking and garden-pathing over long stretches of text, that suggest representations homologous to incremental syntactic state in human language processing. These NLMs also learn abstract word-order preferences and many generalizations about the long-distance filler-gap dependencies that are a hallmark of natural language syntax, including, perhaps most surprisingly, many filler-gap "island" constraints. At the same time, we find advantages for models with a symbolic component when the scale of training data is small. I also comment on the extent to which the departures of recurrent neural network language models from the predictions of the "competence" grammars developed in generative linguistics might provide a "performance" account of human language processing.
Roger Levy is Associate Professor in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, where he directs MIT's Computational Psycholinguistics Laboratory. Before coming to MIT, Roger was a faculty member in the Department of Linguistics at UC San Diego.