Natural Logic in NLP

Overview

Distributed representations and natural logic

In recent work, we have used natural logic and the surrounding task of natural language inference over surface forms as a focus task within an effort to improve (and to better understand) neural network models that handle sentence meaning. In particular, we have used artificial data based on NatLog to show that tree-structured neural networks produce encodings that generalize in ways that meet the fundamental requirements for open-domain natural language inference, and we have subsequently compiled the Stanford NLI corpus, a large labeled corpus intended to serve as a challenge dataset for work using representation learning methods for NLI.

NatLog

The NatLog project aims to develop an approach to natural language inference based on a model of natural logic, which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. Traditionally, Stanford has used natural logic for textual entailment; more recently, the group has been exploring natural logic in the context of inference for recursive neural networks, and as a means of performing light, large-scale inference over a knowledge base of common-sense facts.
We have greatly extended past work in natural logic, which had focused solely on semantic containment and monotonicity, to incorporate both semantic exclusion and implicativity. The NatLog system decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical entailment relation for each edit using a statistical classifier; propagates these relations upward through a syntax tree according to the semantic properties of intermediate nodes; and composes the resulting entailment relations across the edit sequence. Using the NatLog system, we have achieved 70% accuracy and 89% precision on the FraCaS test suite. We have also shown that adding NatLog as a component of the Stanford RTE system yields an accuracy gain of 4%. In addition, we have used the underlying logic itself as a testbed for the development of neural network representation learning systems for natural language understanding.
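To make the composition step concrete, the sketch below shows how projected lexical entailment relations can be joined across a sequence of atomic edits. It is a minimal illustration under stated assumptions, not the NatLog implementation: the relation inventory and the partial join table follow MacCartney and Manning's natural logic, while the function names, ASCII relation encodings, and worked examples are hypothetical choices of our own.

```python
# Minimal sketch of natural-logic relation composition (illustrative only;
# not the actual NatLog code). Relations and join-table entries follow
# MacCartney & Manning; all names and symbols here are hypothetical.

# The seven basic entailment relations, in ASCII:
# = equivalence, < forward entailment, > reverse entailment,
# ^ negation, | alternation, u cover, # independence.
EQ, FWD, REV, NEG, ALT, COV, IND = "=", "<", ">", "^", "|", "u", "#"

# Partial join table: JOIN[(r1, r2)] relates x to z when x r1 y and y r2 z.
# Pairs whose join is a non-trivial union of relations default to
# independence (#), the deterministic approximation used in this sketch.
JOIN = {
    (FWD, FWD): FWD,   # x < y, y < z  =>  x < z
    (REV, REV): REV,   # transitivity of reverse entailment
    (NEG, NEG): EQ,    # double negation cancels
    (FWD, NEG): ALT,   # x < y, y ^ z  =>  x | z
    (NEG, FWD): COV,
    (REV, NEG): COV,
    (NEG, REV): ALT,
    (ALT, NEG): FWD,   # x | y, y ^ z  =>  x < z
    (NEG, ALT): REV,
    (COV, NEG): REV,
    (NEG, COV): FWD,
}

def join(r1, r2):
    """Compose two entailment relations; default to independence."""
    if r1 == EQ:
        return r2
    if r2 == EQ:
        return r1
    return JOIN.get((r1, r2), IND)

# Projection under a downward-monotone context (e.g., the scope of "not"
# or the restrictor of "every"): containment flips, alternation and cover
# swap, and the remaining relations are unchanged.
PROJECT_DOWN = {EQ: EQ, FWD: REV, REV: FWD, NEG: NEG, ALT: COV, COV: ALT, IND: IND}

def infer(projected_relations):
    """Join projected relations across a sequence of atomic edits."""
    result = EQ
    for r in projected_relations:
        result = join(result, r)
    return result

# "Stimpy is a cat" vs. "Stimpy is not a poodle":
# edit 1 substitutes cat -> poodle (alternation, |);
# edit 2 inserts "not" (negation, ^); | joined with ^ gives <.
print(infer([ALT, NEG]))           # "<": the premise entails the hypothesis

# "Every dog barks" vs. "every animal barks": dog < animal lexically, but
# the restrictor of "every" is downward monotone, so the edit projects to >.
print(infer([PROJECT_DOWN[FWD]]))  # ">": the hypothesis entails the premise
```

In the full system, the per-edit relations come from a statistical classifier and the projections are computed from monotonicity marking on the syntax tree; the hard-coded table above stands in for both.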
People

Papers
A large annotated corpus for learning natural language inference [pdf] [corpus page]
Recursive Neural Networks Can Learn Logical Semantics [pdf] [code and data]
Learning Distributed Word Representations for Natural Logic Reasoning [pdf]
NaturalLI: Natural Logic Inference for Common Sense Reasoning [pdf]
Can recursive neural tensor networks learn logical reasoning? [pdf]
Synthetic logic [pdf]
Modeling semantic containment and exclusion in natural language inference [pdf] [ppt]
A phrase-based alignment model for natural language inference [pdf]
Natural logic for textual inference [pdf] [ppt]
Learning Alignments and Leveraging Natural Logic [pdf]

Related Work
Linguistics and Natural Logic [pdf]

Resources

Contact Information