This talk is part of the NLP Seminar Series.

Semantic Role Labeling with Labeled Span Graph Networks

Luheng He, Google AI Language / University of Washington
Date: 11:00 am - 12:00 pm, Oct 11 2018
Venue: Room 392, Gates Computer Science Building


Semantic role labeling (SRL) systems aim to recover the predicate-argument structure of a sentence, determining “who did what to whom”, “when”, and “where”. In this talk, I will describe my recent SRL work showing that relatively simple, general-purpose neural architectures can lead to significant performance gains, including an over 40% error reduction relative to long-standing pre-neural performance levels. These approaches are relatively simple because they process the text in an end-to-end manner, without relying on the typical NLP pipeline (e.g., POS tagging or syntactic parsing). They are general-purpose because, with only slight modifications, they can be used to learn state-of-the-art models for related semantic analysis problems. The final architecture I will present, which we call Labeled Span Graph Networks (LSGNs), opens up exciting opportunities to build a single, unified model for end-to-end, document-level semantic analysis.
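To make the task concrete, here is a minimal, hand-written sketch of what SRL output looks like when arguments are represented as labeled spans. The sentence, role labels, and span indices below are an illustrative example using PropBank-style role names, not output from the speaker's models.

```python
# Hand-written illustration of SRL as labeled spans (not model output).
# For each predicate, arguments are contiguous token spans with role labels.
sentence = "The chef cooked dinner for the guests last night".split()

# PropBank-style roles: ARG0 (agent), ARG1 (patient), ARG2 (beneficiary),
# ARGM-TMP (temporal modifier). Spans are (start, end) token indices, inclusive.
srl = {
    "predicate": (2, "cooked"),
    "arguments": [
        ((0, 1), "ARG0"),      # "The chef"       -> who did it
        ((3, 3), "ARG1"),      # "dinner"         -> what was done
        ((4, 6), "ARG2"),      # "for the guests" -> to whom
        ((7, 8), "ARGM-TMP"),  # "last night"     -> when
    ],
}

for (start, end), label in srl["arguments"]:
    print(label, "->", " ".join(sentence[start:end + 1]))
```

The span-based view shown here is what the talk's architectures predict directly, scoring and labeling candidate spans end-to-end rather than deriving them from a syntactic parse.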


Luheng is a research scientist at Google AI Language. She completed her Ph.D. at the University of Washington, advised by Luke Zettlemoyer. Her research focuses on semantic role labeling (SRL) and other structured prediction problems in NLP. She is excited about developing more accurate models and more scalable data collection methods to improve SRL. She built DeepSRL, a BiLSTM-based neural SRL model for PropBank. She also introduced QA-SRL, a question-answer-based SRL annotation scheme that allows SRL data to be gathered from annotators without linguistic training.