This talk is part of the NLP Seminar Series.

Neural Text Summarization: A Critical Perspective and Steps Towards Consistency

Wojciech Kryściński, Salesforce Research
Date: Feb 13, 2020, 11:00am - 12:00pm
Venue: Room 392, Gates Computer Science Building

Abstract

Neural text summarization is a challenging task within Natural Language Processing that requires advanced language understanding and generation capabilities. Despite substantial efforts by the NLP research community in recent years, progress in the field has been slow, and future research directions remain unclear.

In the first part of the talk, we critically evaluate the key ingredients of the current research setup: datasets, evaluation metrics, and models. We highlight three primary shortcomings: 1) automatically collected datasets leave the task underconstrained and may contain noise that is detrimental to training and evaluation; 2) the current evaluation protocol correlates weakly with human judgment and does not account for important characteristics such as factual correctness; 3) models overfit to layout biases of current datasets and offer limited diversity in their outputs.

Then, we introduce FactCC, a weakly supervised, model-based approach for verifying factual consistency and identifying conflicts between source documents and a generated summary. Training data for the model is generated by applying a series of rule-based transformations to sentences from the source documents. The FactCC model is then trained jointly on three tasks: 1) identifying whether a sentence remains factually consistent after transformation; 2) extracting a span in the source documents to support the consistency prediction; 3) extracting a span in the summary sentence that is inconsistent, if one exists. Transferring this model to summaries generated by several state-of-the-art systems reveals that this highly scalable approach substantially outperforms previous models, including those trained with strong supervision on standard datasets for natural language inference and fact-checking.
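To make the data-generation idea concrete, here is a minimal illustrative sketch (not the authors' code) of how rule-based transformations could turn source sentences into weakly labeled training pairs: the untouched sentence serves as a consistent example, while a corrupted copy serves as an inconsistent one. The specific rules below (a toy negation and a toy number swap) and all function names are assumptions for illustration only.

```python
import random
import re

# Labels for the weakly-supervised consistency task.
CONSISTENT, INCONSISTENT = "CONSISTENT", "INCONSISTENT"

def negate(sentence):
    """Toy negation rule: flip ' is ' to ' is not ' (and vice versa)."""
    if " is not " in sentence:
        return sentence.replace(" is not ", " is ", 1)
    if " is " in sentence:
        return sentence.replace(" is ", " is not ", 1)
    return None  # rule does not apply to this sentence

def swap_number(sentence, rng):
    """Toy number-swap rule: replace the first number with a different one."""
    match = re.search(r"\d+", sentence)
    if match is None:
        return None
    new = str(int(match.group()) + rng.randint(1, 9))
    return sentence[:match.start()] + new + sentence[match.end():]

def make_training_pairs(source_sentences, seed=0):
    """Emit (claim, label) pairs: originals are consistent, corruptions are not."""
    rng = random.Random(seed)
    pairs = []
    for sent in source_sentences:
        pairs.append((sent, CONSISTENT))  # positive (untransformed) example
        for transform in (negate, lambda s: swap_number(s, rng)):
            corrupted = transform(sent)
            if corrupted is not None:
                pairs.append((corrupted, INCONSISTENT))
    return pairs

pairs = make_training_pairs(["The company is based in Palo Alto.",
                             "Revenue grew 12 percent last year."])
for claim, label in pairs:
    print(label, "->", claim)
```

In this sketch each corrupted claim is paired with the document it came from, so a classifier trained on such pairs learns to flag claims that contradict their source; the span-extraction tasks described above would additionally require recording the character offsets touched by each transformation.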

Bio

Wojciech Kryściński is a Research Scientist at Salesforce Research. He works on problems related to Natural Language Processing, recently focusing on Neural Text Summarization. Prior to joining Salesforce, Wojciech completed B.Sc. degrees in Computer Science and Econometrics at AGH-UST, and an M.Sc. degree in Machine Learning at KTH Royal Institute of Technology.