People can efficiently learn new concepts (e.g., the concept of important emails) from natural language explanations (e.g., ‘late night emails from my advisor are definitely important’). In this talk, I'll present two approaches to enabling language-based learning in the context of classification tasks. In the first part of the talk, I'll explore the problem of interactive feature space construction via language through an application of semantic parsing. I'll present a method for jointly (1) learning to interpret open-ended natural language explanations, and (2) learning classification models, by using explanations in conjunction with a small number of labeled examples of the concept. The approach leverages the pragmatics of language to guide parsing, and does not require labeling explanations with semantic annotations. In the second part of the talk, I'll present an approach for few-shot learning of classification models from natural language explanations. The method leverages the semantics of linguistic quantifiers for generalization, and does not require labeled classification instances. Empirical evaluations on real and synthetic classification tasks show competitive classification performance, and suggest that conversational language can be a promising alternative for guiding classifier learning in scenarios with limited or no labeled data.
Shashank Srivastava is a freshly minted PhD from the Machine Learning Department at CMU, with interests in conversational learning, natural language understanding, and AI. Shashank's dissertation research focuses on helping machines learn from human interaction. Previously, Shashank did bad things with other people's money as an algorithmic trader at Tower Research LLC. Shashank has an undergraduate degree in Computer Science from IIT Kanpur, and a Master's degree in Language Technologies from CMU.