Natural language is first and foremost an instrument of interaction, where interlocutors produce and comprehend language to relay information and accomplish their intents. This talk focuses on challenges and opportunities that arise from this interactive nature of language. The response of participants to the language they comprehend can form a strong learning signal for the party that produced the language. Did I achieve my intent? I will show how to use this signal to learn to produce natural language instructions. The sequential nature of such interactions makes them a natural fit for reinforcement learning (RL), but developing benchmarks for language-conditioned RL has been hindered because computing rewards requires resolving language semantics. I will describe a new benchmark that offers an approach to address this challenge. Finally, core to linguistic interaction is the use of abstraction to communicate concepts in a generalizable way. I will describe a new resource to study this phenomenon, and show how it sheds light on the generalization abilities of vision-and-language pre-trained models.
Yoav Artzi is an Associate Professor in the Department of Computer Science and Cornell Tech at Cornell University. His research focuses on developing learning methods for natural language understanding and generation in automated interactive systems. He received an NSF CAREER award, and his work has been recognized with awards and honorable mentions at ACL, EMNLP, NAACL, and IROS. Yoav holds a B.Sc. from Tel Aviv University and a Ph.D. from the University of Washington.