This talk is part of the NLP Seminar Series.

The neural architecture of language: Integrative modeling converges on predictive processing

Martin Schrimpf, MIT
Date: May 20, 2021, 10:00am - 11:00am PT
Venue: Zoom (link hidden)

Abstract

Language is a quintessentially human ability. Research has long probed the functional architecture of language processing in the mind and brain using diverse brain imaging, behavioral, and computational modeling approaches. However, adequate computationally explicit accounts of how meaning might be extracted from language have been sorely lacking. Here, we report major progress toward closing this gap by relating recent artificial neural networks from machine learning to human recordings during language processing. We find that the most powerful models accurately predict neural and behavioral responses across different datasets. Models that perform better at predicting the next word in a sequence also better predict brain measurements, suggesting that predictive processing fundamentally shapes the language comprehension mechanisms in the human brain.

Paper: https://www.biorxiv.org/content/10.1101/2020.06.26.174482v2
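For readers curious how "predicting brain measurements" from a language model can be set up, here is a minimal sketch of the general idea: fit a cross-validated linear map from a model's per-stimulus activations to recorded neural responses, then score held-out predictions by correlation. This is an illustrative stand-in, not the paper's actual pipeline; the synthetic arrays, dimensions, and the `brain_predictivity` helper below are assumptions for the sake of a runnable example.

```python
# Sketch: cross-validated linear "brain predictivity" score.
# Synthetic data stands in for real model activations and neural recordings.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 sentences, 768-dim model activations, 50 recording sites.
n_stimuli, n_features, n_sites = 200, 768, 50
model_activations = rng.standard_normal((n_stimuli, n_features))
# Fake neural responses: a weak linear function of the activations plus noise.
neural_responses = model_activations @ rng.standard_normal((n_features, n_sites)) * 0.1
neural_responses += rng.standard_normal((n_stimuli, n_sites))

def brain_predictivity(X, Y, n_splits=5, alpha=1.0):
    """Mean held-out Pearson correlation between predicted and measured
    responses, averaged over recording sites and cross-validation folds."""
    scores = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        # Fit a regularized linear map from model features to neural responses.
        mapping = Ridge(alpha=alpha).fit(X[train], Y[train])
        Y_pred = mapping.predict(X[test])
        # Correlate prediction with measurement separately for each site.
        for site in range(Y.shape[1]):
            scores.append(np.corrcoef(Y_pred[:, site], Y[test][:, site])[0, 1])
    return float(np.mean(scores))

print(f"brain predictivity: {brain_predictivity(model_activations, neural_responses):.3f}")
```

Repeating this score across many models, and comparing it with each model's next-word-prediction performance, is the kind of analysis behind the abstract's claim that better language models better predict brain measurements.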

Bio

Martin is a PhD candidate in the MIT Department of Brain and Cognitive Sciences, advised by James DiCarlo. His research models the brain mechanisms underlying human object recognition by building and testing artificial neural networks that predict brain activity and behavior across entire domains of intelligence. Previously, he worked at Salesforce Research on deep learning for NLP with Richard Socher, and at Harvard on recurrent visual processing with Gabriel Kreiman. Martin completed his Bachelor's degree in Information Systems and his Master's degree in Software Engineering in Germany at TUM, LMU, and UNA, and co-founded the non-profit startup Integreat.