Semantic parsing is the problem of translating human language into computer language, and is therefore at the heart of natural language understanding. The problem is difficult because it requires predicting a complex, structured output -- a program -- learning from weak supervision, and solving interrelated subproblems such as entity linking. Recent work has demonstrated that neural semantic parsers that ignore these challenges can still achieve near state-of-the-art results on many data sets. In this talk, I'll present a novel neural semantic parser that addresses these three challenges using several architectural innovations. An empirical evaluation demonstrates that this parser achieves state-of-the-art results on WikiTableQuestions.
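To make the task concrete, here is a minimal, hypothetical sketch of what "translating a question into a program" can look like for a table-based dataset in the style of WikiTableQuestions. It is not the parser, grammar, or logical-form language from the talk; the toy table, the program format, and the `execute` function are all illustrative assumptions.

```python
# A toy illustration of semantic parsing (illustrative only, not the talk's model):
# a natural-language question is mapped to a small executable "program"
# that is then run against a table to produce an answer.

from typing import Dict, List

# Toy table in the spirit of WikiTableQuestions (each row maps column -> value).
TABLE: List[Dict[str, str]] = [
    {"City": "Seattle", "Country": "USA", "Population": "724745"},
    {"City": "Vancouver", "Country": "Canada", "Population": "631486"},
]

def execute(program: Dict[str, str], table: List[Dict[str, str]]) -> List[str]:
    """Run a tiny 'select column WHERE column = value' program against the table."""
    return [
        row[program["select"]]
        for row in table
        if row[program["where_column"]] == program["where_value"]
    ]

# A program a parser might produce for "Which city is in Canada?".
# Note the entity-linking step implicit here: the word "Canada" in the
# question must be tied to the table value "Canada". Under weak supervision,
# only the answer ['Vancouver'] is observed, not the program itself.
question = "Which city is in Canada?"
program = {"select": "City", "where_column": "Country", "where_value": "Canada"}

print(execute(program, TABLE))  # ['Vancouver']
```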
Jayant Krishnamurthy is a research scientist at Semantic Machines. Previously, he was a research scientist at the Allen Institute for Artificial Intelligence, where he developed question answering algorithms for science exams. He received his Ph.D. in Computer Science from Carnegie Mellon University in 2015, advised by Tom Mitchell. His research interests include natural language understanding for question answering, dialogue, and grounded language learning.