Getting a Piece of the Action (Items)

Matthew Purver
Computational Semantics Lab, CSLI, Stanford University

Abstract

Understanding spoken language is difficult enough in one-to-one human-machine interaction. Now consider an artificial agent listening in on multi-party, human-human meetings. Here the task is harder still: multiple speakers, high speech recognition error rates, unconstrained and informal language, little knowledge of the domain under discussion, and no opportunity to direct the conversation or ask for clarification.

This talk presents joint work by CSLI's Semlab (see http://godel.stanford.edu/) in which we combine statistical and rule-based methods to robustly extract useful information from such meetings: identifying topics of discussion and detecting action items, for use in summarization, post-meeting browsing, and populating a user's to-do list.