This talk is part of the NLP Seminar Series.

A Distributional View of Trustworthy Natural Language Processing

Jacob Eisenstein, Google DeepMind
Date: January 25th, 2024, 11:00am - 12:00pm
Venue: Room 287, Gates Computer Science Building

Abstract

Historically, progress in natural language processing has been measured by average-case performance on benchmark datasets. But as NLP systems have become increasingly capable, we find ourselves wanting something more. There is no shortage of characterizations for what is missing: NLP systems should be robust, reliable, reproducible, and safe; they should be “right for the right reasons”; they should be explainable, interpretable, fair, responsible, truthful, unbiased, and ethical. But can we move beyond a wish list of buzzwords to identify underlying principles that might make it possible to improve NLP systems across many of these criteria at once? In this talk, I will present three research efforts within trustworthy natural language processing: dialect-robust language understanding, explainability by design, and uncertainty-aware reward models. While the goals of these efforts are diverse, in each case the motivation and approach draw on causality and distribution shift.

Bio

Jacob Eisenstein is a research scientist at Google, where he is focused on making language technology more robust and trustworthy. He was previously on the faculty of the Georgia Institute of Technology, where he supervised six successful doctoral dissertations, received the NSF CAREER Award for research on computational sociolinguistics, and wrote a textbook on natural language processing. He completed his Ph.D. at MIT, winning the George M. Sprowls Award for a dissertation on computational models of speech and gesture. Thanks to his brief appearance in the documentary film "If These Knishes Could Talk", Jacob has a Bacon number of 2.