This talk is part of the NLP Seminar Series.

Structure and Interpretation of Neural Codes

Jacob Andreas, University of California, Berkeley
Date: Aug 31, 2017, 11:00am–12:00pm
Venue: Room B12, Gates Computer Science Building (B12 is in the basement; access is via the elevator or from the right entrance of the building).


I'll be presenting recent work on a family of techniques for automatically translating hidden representations computed by deep networks into natural language.

We'll begin by analyzing deep decentralized multiagent policies with learned communication protocols. While a number of approaches have recently been proposed for learning such policies, interpreting their induced communication strategies has remained a challenge. We'll describe a method for automatically extracting a phrasebook that maps between learned message vectors and natural language strings.
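To make the phrasebook idea concrete, here is a minimal sketch (not the talk's actual translation criterion, which is not specified in this abstract): each learned message vector is paired with the candidate natural language string whose embedding is closest under cosine similarity. All names and the toy embeddings are hypothetical.

```python
import numpy as np

def extract_phrasebook(message_vecs, string_vecs, strings):
    """Map each learned message vector to its nearest natural language
    string by cosine similarity (a hypothetical stand-in for the
    translation criterion described in the talk)."""
    # Normalize rows so that dot products equal cosine similarities.
    m = message_vecs / np.linalg.norm(message_vecs, axis=1, keepdims=True)
    s = string_vecs / np.linalg.norm(string_vecs, axis=1, keepdims=True)
    sims = m @ s.T                # shape: (num_messages, num_strings)
    best = sims.argmax(axis=1)    # index of the closest string per message
    return {i: strings[j] for i, j in enumerate(best)}

# Toy example: two message vectors, three candidate strings.
messages = np.array([[1.0, 0.1], [0.1, 1.0]])
candidates = np.array([[0.9, 0.0], [0.0, 0.8], [0.5, 0.5]])
phrasebook = extract_phrasebook(messages, candidates,
                                ["go left", "go right", "stay"])
```

In this toy setup the first message maps to "go left" and the second to "go right"; a real system would compare messages and strings by the beliefs or behaviors they induce rather than raw embedding geometry.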

Next, we'll show how the same technique can be applied to a much more general class of models. We'll analyze a standard neural encoder-decoder model trained to perform an autoencoding task, and show that it computes representations with simple geometric regularities corresponding to familiar compositional structures in natural language.
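One simple geometric regularity of the kind described above is that attribute substitutions correspond to consistent vector offsets. The sketch below illustrates the test with a hypothetical additive "encoder" (a sum of word vectors), under which the regularity holds exactly; a trained encoder would only satisfy it approximately.

```python
import numpy as np

# Toy "encoder": represent a phrase as the sum of fixed word vectors.
# This additive structure is a hypothetical stand-in for the geometric
# regularities the talk describes in learned representations.
rng = np.random.default_rng(0)
vocab = {w: rng.standard_normal(8) for w in
         ["red", "blue", "circle", "square"]}

def encode(phrase):
    return sum(vocab[w] for w in phrase.split())

# Compositional analogy test: if representations decompose by attribute,
# enc("red circle") - enc("blue circle") should equal
# enc("red square") - enc("blue square") (both isolate "red" - "blue").
diff_a = encode("red circle") - encode("blue circle")
diff_b = encode("red square") - encode("blue square")
print(np.allclose(diff_a, diff_b))  # True for this additive toy encoder
```

For a trained model, one would replace `np.allclose` with a tolerance on the distance between the two offsets.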


Jacob Andreas is a fifth-year PhD student at UC Berkeley working with Dan Klein. His current research focuses on using natural language to more effectively train and understand machine learning models. Jacob received a B.S. from Columbia in 2012 and an M.Phil. from Cambridge in 2013. He was a Churchill scholar from 2012 to 2013 and an NSF graduate fellow from 2013 to 2016, and is currently supported by a Facebook fellowship.