This talk is part of the NLP Seminar Series.

Overcoming spurious correlations in natural language understanding: successes and failures

He He, NYU
Date: November 17, 2022, 11:00am–12:00pm
Venue: Zoom (link hidden)

Abstract

Model robustness and spurious correlations have received increasing attention in the NLP community, from method development to model interpretation. The core question is how to build models that are "right for the right reasons" rather than relying on shortcuts such as superficial linguistic cues. In this talk, I will go through both success stories and unexpected failures in our quest for robust natural language understanding. I will conclude with lessons learned and thoughts on robustness in the context of large language models.

Bio

He He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland, College Park. Before joining NYU, she spent a year at AWS AI and, before that, was a post-doc at Stanford University. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research interests include robust language understanding, collaborative writing, and understanding the capabilities and issues of large language models.