In this talk, I will present novel computational models that integrate reinforcement learning with language understanding. First, I will describe a framework that uses textual descriptions to aid cross-domain policy transfer in reinforcement learning. Conditioning a model-aware policy on these descriptions allows us to quickly bootstrap learning in unseen environments by leveraging knowledge encoded in text. Our approach significantly outperforms existing techniques in both transfer and multitask scenarios. Second, I will consider the task of spatial reasoning in a simulated environment, where an agent can act and receive rewards. The interpretation of spatial references is highly contextual, requiring joint inference over both the language and the environment. Our proposed model learns a representation of the world guided by instruction text, allowing precise alignment of local neighborhoods with their corresponding verbalizations while also handling global references in the instructions. The model outperforms state-of-the-art approaches on several metrics, yielding a 45% reduction in goal localization error.
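To make the core idea concrete, here is a minimal sketch of a policy conditioned on a text description: the description is embedded and concatenated with state features before scoring actions. All names, the bag-of-words encoding, and the linear scoring here are illustrative assumptions, not the speaker's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_text(tokens, vocab, dim=8):
    # Hypothetical bag-of-words embedding of a textual description.
    emb = np.zeros(dim)
    for t in tokens:
        idx = vocab.get(t)
        if idx is not None:
            emb[idx % dim] += 1.0
    return emb

def policy_logits(state, text_emb, W):
    # The policy scores actions from state features AND the text
    # embedding, so knowledge in the description can transfer to
    # environments the agent has not seen before.
    x = np.concatenate([state, text_emb])
    return W @ x

# Toy example: a description warning the agent about an entity.
vocab = {"scorpion": 0, "avoid": 1, "enemy": 2}
text_emb = embed_text(["avoid", "scorpion"], vocab)
state = np.array([0.5, -1.0, 0.0, 2.0])       # illustrative state features
W = rng.normal(size=(3, state.size + text_emb.size))  # 3 actions, random init
logits = policy_logits(state, text_emb, W)
action = int(np.argmax(logits))
```

In a learned version, `W` would be replaced by a trained network, but the conditioning pattern — text embedding concatenated with state — is the part this sketch illustrates.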
Karthik Narasimhan is currently a Research Scientist at OpenAI and will join Princeton University as an assistant professor in fall 2018. He received his PhD from MIT, advised by Prof. Regina Barzilay, where he was a member of CSAIL and the MIT NLP group. His research interests lie in language grounding and deep reinforcement learning, with a view towards building intelligent agents that learn to handle the dynamics of the world through experience and existing human knowledge (e.g., text). Specifically, he is excited about developing autonomous systems that can acquire language understanding through interaction with their environment while also drawing on textual knowledge to drive their decision making.