This talk is part of the NLP Seminar Series.

The Dangers of Underclaiming: Reasons for Caution When Reporting How NLP Systems Fail

Sam Bowman, NYU
Date: 11:00 am – 12:00 noon PT, Apr 14, 2022
Venue: Zoom (link hidden)

Abstract

Researchers in NLP often frame and discuss research results in ways that deemphasize the field's successes, frequently in response to the field's widespread hype. Though well-meaning, this practice has yielded many misleading or false claims about the limits of our best technology. This is a problem, and it may be more serious than it looks: it harms our credibility in ways that can make it harder to mitigate present-day harms, like those involving biased systems for content moderation or resume screening. It also limits our ability to prepare for the potentially enormous impacts of more distant future advances. This talk urges researchers to be careful about such claims and suggests some research directions and communication strategies that will make it easier to avoid or rebut them.

Bio

Sam Bowman has been on the faculty at NYU since 2016, when he completed a PhD with Chris Manning and Chris Potts at Stanford. At NYU, he is a member of the Center for Data Science, the Department of Linguistics, and the Courant Institute's Department of Computer Science. His research focuses primarily on data, evaluation, and crowdsourcing in the context of large neural network language models, and additionally on applications of machine learning to scientific questions in linguistic syntax and semantics. He is the senior organizer behind the GLUE and SuperGLUE benchmark competitions, and he has received a 2015 EMNLP Best Resource Paper Award, a 2017 Google Faculty Research Award, and a 2021 NSF CAREER Award.