This talk is part of the NLP Seminar Series.

Building an Annotated Dataset of Literary Entities and Events

David Bamman, UC Berkeley
Date: May 23, 2019, 11:00 am - 12:00 pm
Venue: Room 104, Gates Computer Science Building

Abstract

Literary novels push the limits of natural language processing. While much work in NLP has been heavily optimized toward the narrow domain of contemporary English newswire (and now increasingly social media), literary novels are an entirely different animal: the long, complex sentences in novels strain the limits of syntactic parsers with super-linear computational complexity, their use of figurative language challenges representations of meaning based on neo-Davidsonian semantics, and their sheer length (ca. 100,000 words on average) rules out existing solutions for problems like coreference resolution that expect a small set of candidate antecedents.

At the same time, fiction drives computational research questions that are uniquely interesting to that domain. In this talk, I will focus on developing computational models that lay the groundwork for a uniquely literary problem: plot. While "plot" itself is a complex abstraction, one contribution of our work here is to decompose it into solvable sub-problems, each of which can be researched and evaluated on its own terms. At the very least, plot involves people (characters), places (the settings where action unfolds), time (when those actions occur), and things (objects that are important), all interacting through depicted events.

In this talk, I'll outline our progress to date on two of these sub-problems: recognizing the entities in literary texts (the characters, locations, and spaces of interest) and identifying the events that are depicted as having transpired. Both efforts involve annotating 200,000 words evenly drawn from 100 different English-language literary texts and building computational models to automatically identify each phenomenon.

This is joint work with Matt Sims, Jerry Park, Sejal Popat, and Sheng Shen.

Bio

David Bamman is an assistant professor in the School of Information at UC Berkeley, where he works on applying natural language processing and machine learning to empirical questions in the humanities and social sciences. His research often involves adding linguistic structure (e.g., syntax, semantics, coreference) to statistical models of text, and focuses on improving NLP for a variety of languages and domains (such as literary text and social media). Before Berkeley, he received his PhD from the School of Computer Science at Carnegie Mellon University and was a senior researcher at the Perseus Project at Tufts University.