Natural Logic in NLP


The NatLog project aims to develop an approach to natural language inference based on a model of natural logic, which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. Traditionally, Stanford has used natural logic for textual entailment; recently, however, the group has been looking at natural logic in the context of inference for recursive neural networks, and as a means of performing light, large-scale inference over a knowledge base of common sense facts.

We have greatly extended past work in natural logic, which has focused solely on semantic containment and monotonicity, to incorporate both semantic exclusion and implicativity. The NatLog system decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical entailment relation for each edit using a statistical classifier; propagates these relations upward through a syntax tree according to semantic properties of intermediate nodes; and composes the resulting entailment relations across the edit sequence. Using the NatLog system, we've achieved 70% accuracy and 89% precision on the FraCaS test suite. We've also shown that adding NatLog as a component in the Stanford RTE system leads to accuracy gains of 4%. In addition, we've used the underlying logic itself as a testbed for the development of neural network representation learning systems for natural language understanding.
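The projection-and-composition steps described above can be sketched in a few lines of Python. This is a simplified illustration, not the NatLog implementation: it uses only three of the system's seven entailment relations, and the function names and string encodings are our own.

```python
# Sketch of two NatLog pipeline steps: projecting a lexical entailment
# relation through the monotonicity of its syntactic context, and
# composing relations across successive atomic edits. Simplified to a
# three-relation inventory plus "independent"; illustrative only.

EQUIV, FORWARD, REVERSE, INDEP = "equivalent", "forward", "reverse", "independent"

def project(relation, monotonicity):
    """Project a lexical entailment relation through its context."""
    if monotonicity == "up":
        return relation          # upward-monotone contexts preserve the relation
    if monotonicity == "down":
        # downward-monotone contexts (e.g. the restrictor of "every")
        # flip the containment relations
        return {FORWARD: REVERSE, REVERSE: FORWARD}.get(relation, relation)
    return INDEP                 # non-monotone contexts block the inference

def compose(r1, r2):
    """Conservatively compose relations across successive atomic edits."""
    if r1 == EQUIV:
        return r2
    if r2 == EQUIV:
        return r1
    if r1 == r2 and r1 in (FORWARD, REVERSE):
        return r1                # containment composes with itself
    return INDEP                 # otherwise give up (sound but incomplete)

# "Every dog barks" => "Every poodle barks": the edit dog -> poodle is a
# REVERSE containment at the lexical level ("dog" is broader than "poodle"),
# but "every" is downward monotone in its restrictor, so the projected
# relation is FORWARD and the inference is valid.
assert project(REVERSE, "down") == FORWARD
assert compose(project(REVERSE, "down"), EQUIV) == FORWARD
```

The conservative fallback to `independent` in `compose` mirrors the design choice in natural logic systems of preferring soundness over completeness: when a chain of edits cannot be shown to preserve entailment, the system simply declines to draw a conclusion.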



Recursive Neural Networks Can Learn Logical Semantics [pdf] [code and data]
Samuel R. Bowman, Christopher Potts, and Christopher D. Manning.
Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality. 2015.

Learning Distributed Word Representations for Natural Logic Reasoning [pdf]
Samuel R. Bowman, Christopher Potts, and Christopher D. Manning.
Proceedings of the AAAI Spring Symposium on Knowledge Representation and Reasoning. 2015.

NaturalLI: Natural Logic Inference for Common Sense Reasoning [pdf]
Gabor Angeli and Christopher D. Manning.
EMNLP 2014.

Can recursive neural tensor networks learn logical reasoning? [pdf]
Samuel R. Bowman.
arXiv preprint 1312.6192, presented at ICLR 2014.

Synthetic logic [pdf]
Alex Djalali.
Linguistic Issues in Language Technology, Volume 9 (2). 2013.

Modeling semantic containment and exclusion in natural language inference [pdf] [ppt]
Bill MacCartney and Christopher D. Manning.
The 22nd International Conference on Computational Linguistics (Coling-08), Manchester, UK, August 2008.
Received the Springer Best Paper Award.

A phrase-based alignment model for natural language inference [pdf]
Bill MacCartney, Michel Galley, and Christopher D. Manning.
Conference on Empirical Methods in Natural Language Processing (EMNLP-08), Honolulu, HI, October 2008.

Natural logic for textual inference [pdf] [ppt]
Bill MacCartney and Christopher D. Manning.
ACL Workshop on Textual Entailment and Paraphrasing, Prague, June 2007.

Learning Alignments and Leveraging Natural Logic [pdf]
Nathanael Chambers, Daniel Cer, Trond Grenager, David Hall, Chloe Kiddon, Bill MacCartney, Marie-Catherine de Marneffe, Daniel Ramage, Eric Yeh, and Christopher D. Manning.
ACL Workshop on Textual Entailment and Paraphrasing, Prague, June 2007.

Related Work

Linguistics and Natural Logic [pdf]
George Lakoff.
Synthese 22:151-271, 1970.


Contact Information

For any comments or questions, please email Sam and Gabor.