Natural Logic in NLP

Overview

Distributed representations and natural logic

In recent work, we have used natural logic, and the surrounding task of natural language inference over surface forms, as a focus task in an effort to improve (and to better understand) neural network models that handle sentence meaning. In particular, we have used artificial data based on NatLog to show that tree-structured neural networks produce encodings that meet the fundamental requirements for open-domain natural language inference, and we have subsequently compiled the Stanford NLI corpus, a large labeled corpus intended to serve as a challenge dataset for representation learning approaches to NLI.
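
As a concrete (if toy) illustration of this setup, here is a minimal sketch of a tree-structured composition function for NLI, written in plain numpy. Everything here is an assumption for illustration: the dimensions, the tanh composition layer, the four-word vocabulary, and the untrained random weights (so the predicted label is arbitrary). It is not the architecture from the papers below.

```python
# Minimal sketch of a tree-structured (recursive) sentence encoder for NLI.
# All weights are random and untrained, so the output label is arbitrary;
# this only illustrates the shape of the computation.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
W = rng.normal(scale=0.1, size=(DIM, 2 * DIM))  # composition weights
b = np.zeros(DIM)
vocab = {w: rng.normal(scale=0.1, size=DIM)
         for w in ["every", "dog", "animal", "barks"]}

def encode(tree):
    """Compose a binary parse tree bottom-up; leaves are words."""
    if isinstance(tree, str):
        return vocab[tree]
    left, right = tree
    return np.tanh(W @ np.concatenate([encode(left), encode(right)]) + b)

# A linear classifier over the pair of sentence encodings.
LABELS = ["entailment", "neutral", "contradiction"]
W_cls = rng.normal(scale=0.1, size=(len(LABELS), 2 * DIM))

def predict(premise, hypothesis):
    feats = np.concatenate([encode(premise), encode(hypothesis)])
    return LABELS[int(np.argmax(W_cls @ feats))]

print(predict(("every", ("animal", "barks")),
              ("every", ("dog", "barks"))))
```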

NatLog

The NatLog project aims to develop an approach to natural language inference based on a model of natural logic, which identifies valid inferences by their lexical and syntactic features, without requiring full semantic interpretation. Traditionally, Stanford has used natural logic for textual entailment; more recently, however, the group has been studying natural logic as a framework for inference in recursive neural networks, and as a means of performing light, large-scale inference over a knowledge base of common-sense facts.
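
To illustrate what inference without full semantic interpretation looks like in practice, the sketch below projects a lexical containment relation through a monotonicity-marked context. The relation symbols follow standard natural-logic notation; the function and the quantifier example are our own simplification.

```python
# Sketch: projecting a lexical relation through monotonicity marking.
FORWARD, REVERSE, EQUAL = "⊑", "⊒", "="  # containment relations

def project(lexical_relation, monotonicity):
    """Project a lexical relation through a context of known monotonicity."""
    if monotonicity == "up":     # upward-monotone contexts preserve relations
        return lexical_relation
    if monotonicity == "down":   # downward-monotone contexts swap ⊑ and ⊒
        return {FORWARD: REVERSE, REVERSE: FORWARD,
                EQUAL: EQUAL}[lexical_relation]
    raise ValueError(f"unknown monotonicity: {monotonicity}")

# "every" is downward monotone in its restrictor. Lexically, animal ⊒ dog,
# so editing "animal" to "dog" inside the restrictor projects to ⊑:
#   "every animal barks"  ⊑  "every dog barks"
print(project(REVERSE, "down"))  # -> ⊑
```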

We have greatly extended past work in natural logic, which had focused solely on semantic containment and monotonicity, to incorporate both semantic exclusion and implicativity. The NatLog system decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical entailment relation for each edit using a statistical classifier; propagates these relations upward through the syntax tree according to the semantic properties (e.g., monotonicity) of intermediate nodes; and composes the resulting entailment relations across the edit sequence (see the sketch below). Using the NatLog system, we've achieved 70% accuracy and 89% precision on the FraCaS test suite. We've also shown that adding NatLog as a component in the Stanford RTE system leads to an accuracy gain of 4%. In addition, we've used the underlying logic itself as a testbed for developing neural network representation learning systems for natural language understanding.
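
The final step, composing relations across the edit sequence, can be pictured as folding each projected edit relation through a join table over the seven basic entailment relations. The sketch below is deliberately partial: only a handful of join-table entries are shown, and unlisted pairs fall back to independence ("#"), a safe but lossy default.

```python
# Sketch: composing entailment relations across a sequence of atomic edits.
# Seven basic relations: equivalence, forward/reverse entailment, negation,
# alternation, cover, and independence. The join table here is partial.
EQ, FWD, REV, NEG, ALT, COV, IND = "=", "⊑", "⊒", "^", "|", "‿", "#"

JOIN = {
    (EQ, EQ): EQ, (EQ, FWD): FWD, (EQ, REV): REV, (EQ, NEG): NEG,
    (FWD, EQ): FWD, (FWD, FWD): FWD, (FWD, NEG): ALT,
    (REV, EQ): REV, (REV, REV): REV,
    (NEG, NEG): EQ, (NEG, FWD): COV,
}

def compose(edit_relations):
    """Fold projected edit relations into one premise/hypothesis relation."""
    result = EQ  # zero edits: premise and hypothesis are identical
    for r in edit_relations:
        result = JOIN.get((result, r), IND)  # unknown joins degrade to "#"
    return result

print(compose([FWD, FWD]))  # ⊑ then ⊑ stays ⊑: entailment is preserved
print(compose([FWD, NEG]))  # ⊑ then ^ yields |: the sentences exclude each other
```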

Papers

A large annotated corpus for learning natural language inference [pdf] [corpus page]
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning.
EMNLP 2015.
—received Best Data Set or Resource Award—

Recursive Neural Networks Can Learn Logical Semantics [pdf] [code and data]
Samuel R. Bowman, Christopher Potts, and Christopher D. Manning.
Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality. 2015.

Learning Distributed Word Representations for Natural Logic Reasoning [pdf]
Samuel R. Bowman, Christopher Potts, and Christopher D. Manning.
Proceedings of the AAAI Spring Symposium on Knowledge Representation and Reasoning. 2015.

NaturalLI: Natural Logic Inference for Common Sense Reasoning [pdf]
Gabor Angeli and Christopher D. Manning.
EMNLP 2014.

Can recursive neural tensor networks learn logical reasoning? [pdf]
Samuel R. Bowman.
arXiv preprint 1312.6192; presented at ICLR 2014.

Synthetic logic [pdf]
Alex Djalali
Linguistic Issues in Language Technology, Volume 9 (2). 2013.

Modeling semantic containment and exclusion in natural language inference [pdf] [ppt]
Bill MacCartney and Christopher D. Manning.
The 22nd International Conference on Computational Linguistics (Coling-08), Manchester, UK, August 2008.
—received Springer Best Paper Award—

A phrase-based alignment model for natural language inference [pdf]
Bill MacCartney, Michel Galley, and Christopher D. Manning.
Conference on Empirical Methods in Natural Language Processing (EMNLP-08), Honolulu, HI, October 2008.

Natural logic for textual inference [pdf] [ppt]
Bill MacCartney and Christopher D. Manning.
ACL Workshop on Textual Entailment and Paraphrasing, Prague, June 2007.

Learning Alignments and Leveraging Natural Logic [pdf]
Nathanael Chambers, Daniel Cer, Trond Grenager, David Hall, Chloe Kiddon, Bill MacCartney, Marie-Catherine de Marneffe, Daniel Ramage, Eric Yeh, and Christopher D. Manning.
ACL Workshop on Textual Entailment and Paraphrasing, Prague, June 2007.

Related Work

Linguistics and Natural Logic [pdf]
George Lakoff.
Synthese 22:151-271, 1970.

Contact Information

For any comments or questions, please email Sam and Gabor.