Automated reasoning, the ability of computing systems to draw new inferences from observed evidence, has been a long-standing goal of artificial intelligence. We are interested in automated reasoning over large knowledge bases (KBs) with rich and diverse semantic types. KBs store massive numbers of symbolic facts about real-world entities. An effective and user-friendly way of accessing the information stored in a KB is by issuing queries to it. A challenge for question answering (QA) systems over KBs is handling queries whose answers are not directly stored (as a simple fact) in the KB; instead, the QA model must reason over other observed facts to derive the answer. My work is on building QA systems over structured KBs that can perform such reasoning.
In this talk, I will propose two models for QA over KBs with (i) a nonparametric component that, for each query, dynamically retrieves its k-nearest-neighbor (KNN) training queries along with query-specific subgraphs (or logical forms, if available), and (ii) a parametric component that is trained to identify the (latent) reasoning patterns in the subgraphs of the KNN queries and apply them to the subgraph of the target query. Our approach is inspired by the classical AI paradigm of case-based reasoning (CBR), in which a new problem is solved by retrieving and reusing the solutions of similar past problems. We show that our model is effective for complex natural language queries whose reasoning pattern must be composed from the reasoning patterns of multiple retrieved queries, and that it achieves state-of-the-art results on multiple benchmarks. Lastly, leveraging our nonparametric approach, we demonstrate that wrong predictions of deep QA models can be corrected without any re-training, paving the way toward practical, production-ready QA systems.
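The nonparametric retrieval step can be illustrated with a minimal sketch. Everything below is an assumption for exposition: the bag-of-words encoder `embed`, the toy case base, and the example queries and logical forms are hypothetical stand-ins (real systems of this kind use a trained neural query encoder and a KB-derived case base), but the shape of the computation, embedding a new query and retrieving its k most similar training cases whose solutions are then reused, is the CBR idea described above.

```python
from collections import Counter
import math

def embed(text):
    # Hypothetical encoder: a bag-of-words count vector.
    # A deployed system would use a learned dense encoder instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_retrieve(query, case_base, k=2):
    # Nonparametric step: rank training cases (question, logical form)
    # by similarity to the new query and return the top k.
    q = embed(query)
    ranked = sorted(case_base,
                    key=lambda case: cosine(q, embed(case[0])),
                    reverse=True)
    return ranked[:k]

# Toy case base: (training question, its logical form) -- illustrative only.
case_base = [
    ("who directed Titanic", "director_of(Titanic, ?x)"),
    ("who directed Avatar", "director_of(Avatar, ?x)"),
    ("where was Einstein born", "birthplace(Einstein, ?x)"),
]

neighbors = knn_retrieve("who directed Inception", case_base, k=2)
# Both retrieved cases share the "who directed ..." pattern; a parametric
# component would then adapt their logical forms to the new entity.
```

The design point is that retrieval is training-free: adding or correcting a case in `case_base` immediately changes future predictions, which is what enables fixing wrong model outputs without re-training.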
Rajarshi is a final-year Ph.D. candidate at the University of Massachusetts, Amherst, where he is advised by Prof. Andrew McCallum. His research focuses on developing nonparametric, contextual reasoning models for program synthesis and question answering over knowledge graphs and text that are accurate, controllable (debuggable), offer interpretable predictions, and can seamlessly reason with newly arriving information.