This talk is part of the NLP Seminar Series.

Globally Normalized Reader

Jonathan Raiman, OpenAI
Date: March 15, 2018, 11:00am - 12:00pm
Venue: Room 219, Gates Computer Science Building

Abstract

Rapid progress has been made towards creating question answering models that extract answers from text, with certain models reaching human-level performance on the SQuAD dataset. However, existing neural approaches use expensive bi-directional attention mechanisms or score all possible answer spans, limiting scalability. We propose instead to cast extractive QA as an iterative search problem: select the answer's sentence, start word, and end word. This representation reduces the space of each search step and allows computation to be conditionally allocated to promising search paths. We show that globally normalizing the decision process and back-propagating through beam search makes this representation viable and learning efficient. Empirically, we find that the Globally Normalized Reader (GNR) achieves the second-highest single-model performance on the Stanford Question Answering Dataset (68.4 EM, 76.21 F1 on dev, as of September 8, 2017) and is 24.7x faster than Bi-Directional Attention Flow.
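To make the search decomposition concrete, here is a minimal toy sketch of the three-step beam search (sentence, then start word, then end word) with globally normalized scoring. It is not the paper's implementation: the document, beam size, and scoring functions below are hypothetical hand-coded stand-ins for what the GNR computes with learned neural networks.

```python
import math

# Hypothetical toy document: a list of sentences, each a list of words.
doc = [["the", "cat", "sat"], ["it", "was", "happy"]]

# Stand-in scoring functions; the real model learns these scores.
def sentence_score(s):
    return 1.0 if s == 0 else 0.0

def start_score(s, i):
    return 1.0 if (s, i) == (0, 1) else 0.0

def end_score(s, i, j):
    return 1.0 if (s, i, j) == (0, 1, 2) else 0.0

def beam_search(beam_size=2):
    # Step 1: keep the top-scoring candidate sentences.
    beam = sorted(range(len(doc)), key=sentence_score, reverse=True)[:beam_size]
    # Step 2: extend each surviving sentence with a start word.
    starts = [(s, i) for s in beam for i in range(len(doc[s]))]
    starts.sort(key=lambda p: sentence_score(p[0]) + start_score(*p), reverse=True)
    starts = starts[:beam_size]
    # Step 3: extend each (sentence, start) pair with an end word.
    spans = [(s, i, j) for (s, i) in starts for j in range(i, len(doc[s]))]
    # Global normalization: softmax over the summed score of each *full path*
    # on the beam, rather than normalizing each decision locally.
    totals = [sentence_score(s) + start_score(s, i) + end_score(s, i, j)
              for (s, i, j) in spans]
    z = sum(math.exp(t) for t in totals)
    probs = [math.exp(t) / z for t in totals]
    return max(zip(spans, probs), key=lambda x: x[1])

best_span, prob = beam_search()  # best_span is a (sentence, start, end) triple
```

Each step only scores candidates that survived the previous step, which is why the search space shrinks relative to scoring all possible spans; training back-propagates through the beam so that the globally normalized path probabilities are learned directly.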

We also introduce a data-augmentation method to produce semantically valid examples by aligning named entities to a knowledge base and swapping them with new entities of the same type. This method improves the performance of all models considered in this work and is of independent interest for a variety of NLP tasks.
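The entity-swapping idea above can be sketched in a few lines. Everything here is illustrative: the type-to-entity table stands in for entities aligned to a knowledge base, and the question/answer pair is a fictional toy example, not data from the paper.

```python
import random

# Toy table mapping entity types to surface forms; a real pipeline would
# populate this from a knowledge base alignment.
ENTITY_TABLE = {
    "PERSON": ["Ada Lovelace", "Alan Turing"],
    "CITY": ["Paris", "Kyoto"],
}

def swap_entities(question, answer, mentions, rng=random.Random(0)):
    """Replace each tagged mention with a new entity of the same type,
    applying the same swap to the question and answer so the pair stays
    semantically consistent."""
    mapping = {}
    for surface, etype in mentions:
        candidates = [e for e in ENTITY_TABLE[etype] if e != surface]
        mapping[surface] = rng.choice(candidates)
    for old, new in mapping.items():
        question = question.replace(old, new)
        answer = answer.replace(old, new)
    return question, answer

# Fictional example pair with pre-tagged named-entity mentions.
q, a = swap_entities(
    "Where was Ada Lovelace born?",
    "Ada Lovelace was born in Paris.",
    [("Ada Lovelace", "PERSON"), ("Paris", "CITY")],
)
```

Because the replacement preserves entity type and is applied consistently across the question and answer, the augmented example remains semantically valid while exposing the model to new surface forms.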

Bio

Jonathan Raiman is a Research Scientist at OpenAI working on Deep Reinforcement Learning within the DotA team. Previously he worked at Baidu SVAIL on speech synthesis (Deep Voice 1, 2, & 3), speech recognition, and question answering (Globally Normalized Reader), and on the Dynamic Memory Network at MetaMind. He completed his Masters at MIT in the MERS lab under Prof. Brian Williams, where his thesis focused on improving human-robot interaction by making the behavior of neural networks more interpretable.