This talk is part of the NLP Seminar Series.

What It Takes to Control Societal Bias in Natural Language Processing

Kai-Wei Chang, University of California Los Angeles
Date: 11:00 am - 12:00 pm, Oct 25 2018
Venue: Room 392, Gates Computer Science Building

Abstract

Natural language processing techniques play important roles in our daily lives. Although these methods are successful in a wide range of applications, they run the risk of exploiting and reinforcing the societal biases (e.g., gender bias) present in the underlying data. For instance, an automatic resume filtering system may inadvertently select candidates based on their gender and race due to implicit associations between applicant names and job titles, potentially causing the system to perpetuate unfairness. In this talk, I will describe a collection of results that quantify and control implicit societal biases in a wide spectrum of vision and language tasks, including word embeddings, coreference resolution, and visual semantic role labeling. These results provide greater control over NLP systems, making them more socially responsible and accountable.

Bio

Kai-Wei Chang is an assistant professor in the Department of Computer Science at the University of California, Los Angeles. His research interests include designing robust machine learning methods for large and complex data and building language processing models for social-good applications. Kai-Wei has published broadly in machine learning, natural language processing, and artificial intelligence, and has been involved in developing machine learning libraries (e.g., LIBLINEAR) that are widely used by the research community. His awards include the EMNLP Best Long Paper Award (2017), the KDD Best Paper Award (2010), and the Okawa Research Grant Award (2018). Additional information is available at http://kwchang.net.