This talk is part of the NLP Seminar Series.

Natural Language Decathlon: Multitask Learning as Question Answering

Bryan McCann, Nitish Keskar, Richard Socher, Salesforce Research
Date: 11:00 am - 12:00 pm, Oct 04, 2018
Venue: Room 392, Gates Computer Science Building

Abstract

Deep learning has improved performance on many natural language processing (NLP) tasks individually. However, general NLP models cannot emerge within a paradigm that focuses on the particularities of a single metric, dataset, and task. We introduce the Natural Language Decathlon (decaNLP), a challenge that spans ten tasks: question answering, machine translation, summarization, natural language inference, sentiment analysis, semantic role labeling, zero-shot relation extraction, goal-oriented dialogue, semantic parsing, and commonsense pronoun resolution. We cast all tasks as question answering over a context. Furthermore, we present a new Multitask Question Answering Network (MQAN) that jointly learns all tasks in decaNLP without any task-specific modules or parameters in the multitask setting. MQAN shows improvements in transfer learning for machine translation and named entity recognition, domain adaptation for sentiment analysis and natural language inference, and zero-shot capabilities for text classification. We demonstrate that MQAN's multi-pointer-generator decoder is key to this success and that performance improves further with an anti-curriculum training strategy. Though designed for decaNLP, MQAN also achieves state-of-the-art results on the WikiSQL semantic parsing task in the single-task setting. We release code for procuring and processing data, training and evaluating models, and reproducing all experiments for decaNLP.
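To make the question-answering framing concrete, below is a minimal illustrative sketch in Python (not taken from the released decaNLP code; the example strings and field names are hypothetical) of how tasks as different as sentiment analysis, translation, and summarization reduce to (question, context, answer) triples that a single model can consume:

    from typing import NamedTuple

    class QAExample(NamedTuple):
        """A decaNLP-style training example: every task is phrased as a
        natural-language question asked over a context, with a text answer."""
        question: str
        context: str
        answer: str

    examples = [
        # Sentiment analysis: the question names the label set.
        QAExample(
            question="Is this review positive or negative?",
            context="The film was a delight from start to finish.",
            answer="positive",
        ),
        # Machine translation: the question names the language pair.
        QAExample(
            question="What is the translation from English to German?",
            context="The house is small.",
            answer="Das Haus ist klein.",
        ),
        # Summarization: the answer is generated from (and may copy) the context.
        QAExample(
            question="What is the summary?",
            context="A long news article would appear here ...",
            answer="A one-sentence summary of the article.",
        ),
    ]

    # A single question-answering model can iterate over all tasks uniformly.
    for ex in examples:
        print(f"Q: {ex.question}\nC: {ex.context}\nA: {ex.answer}\n")

Because answers may need to copy tokens from the question or context (e.g., span extraction) or generate new text (e.g., translation), a decoder that can both point and generate, such as MQAN's multi-pointer-generator, fits this unified format.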

Bio

Bryan McCann is a Research Scientist at Salesforce. He focuses on transfer learning and multitask learning for natural language processing. Most recently, Bryan proposed the Natural Language Decathlon (decaNLP) and a Multitask Question Answering Network to tackle all ten tasks in decaNLP. Before decaNLP, he showed that the intermediate representations, or context vectors (CoVe), of machine translation systems carry information that aids learning in question answering and text classification systems. Prior to working at Salesforce, Bryan studied at Stanford University, where he completed a B.S. and an M.S. in Computer Science as well as a B.A. in Philosophy.

Nitish Keskar is a Senior Research Scientist at Salesforce Research in Palo Alto where he works on Deep Learning and its applications to Natural Language Processing, Reinforcement Learning and Computer Vision. He is specifically interested in challenges of scalability, training and generalization. Nitish received his PhD from Northwestern University in 2017 where he worked on second-order methods for nonsmooth and stochastic optimization.

Richard Socher is Chief Scientist at Salesforce and an adjunct professor in the Stanford Computer Science Department. He leads the company’s research efforts and brings state-of-the-art artificial intelligence solutions into the platform. Previously, Richard was the CEO and founder of MetaMind, a startup acquired by Salesforce in April 2016. MetaMind’s deep learning AI platform analyzes, labels, and makes predictions on image and text data so businesses can make smarter, faster, and more accurate decisions. Richard was awarded the Distinguished Application Paper Award at the International Conference on Machine Learning (ICML) 2011, the 2011 Yahoo! Key Scientific Challenges Award, a Microsoft Research PhD Fellowship in 2012, a 2013 "Magic Grant" from the Brown Institute for Media Innovation, the 2014 best Stanford CS PhD thesis award, and the 2014 GigaOM Structure Award. He is currently a member of the World Economic Forum's 'Young Global Leaders' Class of 2017 and was recently appointed to the Board of Directors for the Global Fund for Women.