This talk is part of the NLP Seminar Series.

Scaling Up Reading Comprehension

Eunsol Choi, University of Washington
Date: Nov 9, 2017, 11:00am - 12:00pm
Venue: Room 392, Gates Computer Science Building


In this talk, I will present recent work on reading comprehension. Recent models have made significant progress toward answering questions with high accuracy given only the textual evidence. However, they can struggle to process long evidence documents and more compositional questions that require combining evidence from multiple sentences. There has also been relatively little work on using these models in other settings, e.g. to solve other related NLP tasks. In this talk, I will present new approaches to these outstanding challenges. I will first describe a reading comprehension approach that can efficiently scale to longer documents while maintaining or even improving the performance of state-of-the-art models. Inspired by how people first skim a document, identify relevant parts, and carefully read those parts to produce an answer, we combine a coarse, fast model for selecting relevant sentences with a more expensive model for producing the answer from those sentences. Second, I will present a challenging reading comprehension dataset, TriviaQA, that requires models capable of scaling to longer documents as well as reasoning across multiple sentences. Lastly, I will show how reading comprehension models, so far used exclusively for question answering tasks, can be extended to solve zero-shot relation extraction. Together, these results suggest a number of possible directions for designing the next generation of reading comprehension models, which I will briefly sketch.
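To make the coarse-to-fine idea above concrete, here is a minimal, hypothetical sketch: a cheap word-overlap scorer selects the most relevant sentences, and only those survivors are passed to a more expensive reader. All names here (score_sentence, answer_from, coarse_to_fine) are illustrative placeholders, not the actual models from the talk.

```python
def score_sentence(question, sentence):
    # Coarse relevance: fraction of question words appearing in the sentence.
    q = set(question.lower().split())
    s = set(sentence.lower().split())
    return len(q & s) / (len(q) or 1)

def answer_from(question, sentences):
    # Stand-in for the expensive reader: here, just the best-overlap sentence.
    return max(sentences, key=lambda s: score_sentence(question, s))

def coarse_to_fine(question, document, k=2):
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    # Coarse step: keep only the k most relevant sentences.
    top_k = sorted(sentences, key=lambda s: -score_sentence(question, s))[:k]
    # Fine step: run the expensive reader on the short selection only.
    return answer_from(question, top_k)

doc = ("The seminar starts at eleven. Gates Building hosts many talks. "
       "The talk is about reading comprehension.")
print(coarse_to_fine("What is the talk about", doc))
```

The key design point is that the expensive reader never sees the full document, so its cost stays roughly constant as document length grows; only the cheap scorer scales with document size.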


Eunsol Choi is a Ph.D. candidate in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, advised by Yejin Choi and Luke Zettlemoyer. Her research focuses on natural language processing, specifically applying machine learning to recover semantics from text. She develops techniques for extracting information about entities from text and for answering natural language questions automatically using large-scale databases or unstructured text. Prior to UW, she completed her undergraduate studies at Cornell University.