Natural Logic in NLP
Distributed representations and natural logic
In recent work, we have used natural logic and the associated task of natural language inference over surface forms as a focus task in an effort to improve (and to better understand) neural network models that handle sentence meaning. In particular, we have used artificial data based on NatLog to show that tree-structured neural networks produce encodings that meet the fundamental requirements for open-domain natural language inference, and we have subsequently compiled the Stanford NLI corpus, a large labeled dataset intended to serve as a challenge benchmark for representation learning approaches to NLI.
The NatLog project aims to develop an approach to natural language inference based on a model of natural logic, which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. The group originally used natural logic for textual entailment; more recently, we have looked at natural logic in the context of inference for recursive neural networks, and as a means of performing light, large-scale inference over a knowledge base of common-sense facts.
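To make the "lexical and syntactic features, without full semantic interpretation" idea concrete, here is a minimal sketch of monotonicity-based projection in the style of natural logic. The relation symbols follow the standard natural-logic notation; the `PROJECT` table and function names are our own illustration, not part of the NatLog system itself.

```python
# Lexical entailment relations (a subset of the full relation set):
# '≡' equivalence, '⊏' forward entailment, '⊐' reverse entailment.

# How each relation projects through a context of a given monotonicity:
# upward-monotone contexts preserve containment, downward-monotone
# contexts flip it.
PROJECT = {
    'upward':   {'≡': '≡', '⊏': '⊏', '⊐': '⊐'},
    'downward': {'≡': '≡', '⊏': '⊐', '⊐': '⊏'},
}

def project(lexical_relation, monotonicity):
    """Project a lexical relation to the sentence level."""
    return PROJECT[monotonicity][lexical_relation]

# "every" is downward monotone in its first argument, so the edit
# dog -> poodle (which carries the lexical relation '⊐', since
# poodle ⊏ dog) projects to forward entailment at the sentence level:
# "Every dog barks" entails "Every poodle barks".
print(project('⊐', 'downward'))  # '⊏'
```

This is the sense in which valid inferences can be identified from surface features alone: a lexical substitution plus the monotonicity of its context determines the sentence-level relation, with no full semantic interpretation required.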
We have greatly extended past work in natural logic, which has focused solely on semantic containment and monotonicity, to incorporate both semantic exclusion and implicativity. The NatLog system decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical entailment relation for each edit using a statistical classifier; propagates these relations upward through a syntax tree according to semantic properties of intermediate nodes; and composes the resulting entailment relations across the edit sequence. Using the NatLog system, we've achieved 70% accuracy and 89% precision on the FraCaS test suite. We've also shown that adding NatLog as a component in the Stanford RTE system leads to accuracy gains of 4%. In addition, we've used the underlying logic itself as a testbed for the development of neural network representation learning systems for natural language understanding.
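The final step above, composing entailment relations across the edit sequence, amounts to "joining" relations: if the premise stands in relation R1 to an intermediate form, and that form stands in R2 to the hypothesis, the join gives the possible relations between premise and hypothesis. The sketch below is our own illustration: rather than hard-coding the join table, it derives joins by enumerating denotations over a small finite universe, using the seven MacCartney-style basic relations. With so small a universe, indeterminate joins may be under-approximated, but the deterministic entries come out correctly.

```python
from itertools import combinations

U = frozenset(range(4))

def subsets(u):
    # All nonempty proper subsets: the basic relations are defined
    # for non-degenerate denotations only.
    items = list(u)
    for r in range(1, len(items)):
        for combo in combinations(items, r):
            yield frozenset(combo)

def rel(x, y):
    """Classify the basic entailment relation between two sets."""
    if x == y:                        return '≡'
    if x < y:                         return '⊏'   # forward entailment
    if x > y:                         return '⊐'   # reverse entailment
    disjoint   = not (x & y)
    exhaustive = (x | y) == U
    if disjoint and exhaustive:       return '^'   # negation
    if disjoint:                      return '|'   # alternation
    if exhaustive:                    return '‿'   # cover
    return '#'                                     # independence

def join(r1, r2):
    """All relations obtainable by chaining r1 then r2."""
    out = set()
    for x in subsets(U):
        for y in subsets(U):
            if rel(x, y) != r1:
                continue
            for z in subsets(U):
                if rel(y, z) == r2:
                    out.add(rel(x, z))
    return out

print(join('⊏', '⊏'))  # {'⊏'} : entailment chains compose
print(join('^', '^'))  # {'≡'} : double negation cancels
print(join('⊏', '^'))  # {'|'} : entailment into a negation
```

Composing relations this way across the whole edit sequence yields the overall premise-to-hypothesis relation, which is then mapped to an inference label such as entailment or contradiction.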
A large annotated corpus for learning natural language inference
Recursive Neural Networks Can Learn Logical Semantics [pdf] [code and data]
Learning Distributed Word Representations for Natural Logic Reasoning
NaturalLI: Natural Logic Inference for Common Sense Reasoning
Can recursive neural tensor networks learn logical reasoning?
Modeling semantic containment and exclusion in natural language inference
A phrase-based alignment model for natural language inference
Learning Alignments and Leveraging Natural Logic
Linguistics and Natural Logic