Members of the Stanford NLP Group pursue research in a broad variety of topics:

Information Extraction

Semantic Parsing

  • We are interested in mapping utterances to deep meaning representations that take into account the compositional and quantification structure of language.
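As a toy illustration of the idea (not the group's actual system), the sketch below maps a tiny "QUANTIFIER NOUN VERB" utterance to a logical form, showing how quantification structure shapes the meaning representation; the grammar and templates are invented for this example.

```python
def parse(utterance):
    """Map a tiny 'QUANT NOUN VERB' utterance to a logical-form string."""
    quant, noun, verb = utterance.lower().split()
    # Each quantifier contributes a different compositional skeleton.
    quantifiers = {
        "every": "forall x. {n}(x) -> {v}(x)",
        "some":  "exists x. {n}(x) & {v}(x)",
    }
    return quantifiers[quant].format(n=noun, v=verb)

print(parse("every student sleeps"))  # forall x. student(x) -> sleeps(x)
print(parse("some dog barks"))        # exists x. dog(x) & barks(x)
```

Real semantic parsers learn such mappings rather than hard-coding them, but the compositional principle is the same: each word contributes a fragment, and quantifiers determine how fragments combine.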

Text to 3D Scene Generation

  • We are interested in how to automatically generate 3D scenes from a natural text description. This involves the grounding of object references, resolving spatial terms, and inferring implicit knowledge about how objects are typically arranged.
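A minimal sketch of one sub-problem, resolving a spatial term into coordinates relative to a grounded reference object; the object dimensions, offsets, and relation definitions here are illustrative assumptions, not the project's representation.

```python
# Map a spatial relation to a placement relative to a reference object.
# Positions are (x, y, z); sizes are (width, height, depth).
SPATIAL = {
    "on":      lambda ref: (ref["pos"][0],
                            ref["pos"][1] + ref["size"][1],  # stack on top
                            ref["pos"][2]),
    "next to": lambda ref: (ref["pos"][0] + ref["size"][0],  # beside, along x
                            ref["pos"][1],
                            ref["pos"][2]),
}

def place(obj_name, relation, ref):
    """Return a placement for obj_name given a relation and reference object."""
    return {"name": obj_name, "pos": SPATIAL[relation](ref)}

table = {"name": "table", "pos": (0.0, 0.0, 0.0), "size": (1.5, 0.8, 0.9)}
lamp = place("lamp", "on", table)
print(lamp["pos"])  # (0.0, 0.8, 0.0): on top of the 0.8-high table
```

The harder parts of the task, inferring implicit arrangements ("a dinner table" implies plates and chairs) and resolving ambiguous references, require learned priors rather than fixed rules like these.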

Sentiment and Social Meaning

Dropout Learning and Feature Noising

  • Algorithms that aim to prevent feature co-adaptation via fast dropout training, which samples from (or integrates over) a Gaussian approximation to the dropped-out activations. Dropout can equivalently be viewed as an adaptive regularizer, and the approach generalizes to semi-supervised learning settings.
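The core trick can be sketched in a few lines: under dropout, a unit's pre-activation is a sum of many independently masked terms, so instead of sampling Bernoulli masks one can sample the sum directly from a Gaussian with the matching mean and variance. The weights and inputs below are toy values for illustration.

```python
import math
import random

def fast_dropout_sum(w, x, keep=0.5, rng=random):
    """Gaussian approximation to sum_i w_i * x_i * z_i, z_i ~ Bernoulli(keep)."""
    mean = keep * sum(wi * xi for wi, xi in zip(w, x))
    var = keep * (1 - keep) * sum((wi * xi) ** 2 for wi, xi in zip(w, x))
    return rng.gauss(mean, math.sqrt(var))

random.seed(0)
samples = [fast_dropout_sum([1.0, 2.0], [0.5, 0.25]) for _ in range(10000)]
print(sum(samples) / len(samples))  # close to the dropout mean, 0.5
```

One Gaussian draw replaces an entire Bernoulli mask, which is what makes this formulation fast; the deterministic mean and variance are also what allow the adaptive-regularizer view mentioned above.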

Deep Learning in Natural Language Processing

  • The use of continuous-space distributed representations (neural nets) for tackling various problems in natural language processing and vision, including parsing, sentiment analysis and paraphrase detection.
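A minimal illustration of what "continuous-space distributed representations" buys you: words become dense vectors, a phrase can be composed (here by simple averaging), and similarity becomes geometric. The embedding values below are made up for the example, not learned.

```python
import math

# Toy 2-dimensional word embeddings (illustrative values only).
EMB = {
    "good":  [0.9, 0.1],
    "great": [0.8, 0.2],
    "bad":   [-0.9, 0.1],
    "movie": [0.0, 1.0],
}

def phrase_vec(words):
    """Compose a phrase vector as the average of its word vectors."""
    dims = len(next(iter(EMB.values())))
    return [sum(EMB[w][d] for w in words) / len(words) for d in range(dims)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

print(cosine(phrase_vec(["good", "movie"]), phrase_vec(["great", "movie"])))
print(cosine(phrase_vec(["good", "movie"]), phrase_vec(["bad", "movie"])))
```

With real learned embeddings and richer (e.g. recursive) composition functions, this same geometric picture underlies the parsing, sentiment, and paraphrase applications listed above.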

Parsing & Tagging

  • Algorithms for assigning part-of-speech tags and syntactic structure, emphasizing probabilistic and discriminative models.
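As a small concrete instance of probabilistic tagging, the sketch below runs Viterbi decoding over a tiny hand-specified HMM; the tag set, transition, and emission probabilities are illustrative, not estimated from data.

```python
STATES = ["DT", "NN", "VB"]
START = {"DT": 0.6, "NN": 0.3, "VB": 0.1}
TRANS = {  # P(next tag | current tag)
    "DT": {"DT": 0.05, "NN": 0.9, "VB": 0.05},
    "NN": {"DT": 0.1, "NN": 0.3, "VB": 0.6},
    "VB": {"DT": 0.5, "NN": 0.4, "VB": 0.1},
}
EMIT = {  # P(word | tag)
    "DT": {"the": 0.9},
    "NN": {"dog": 0.8, "barks": 0.2},
    "VB": {"dog": 0.1, "barks": 0.9},
}

def viterbi(words):
    """Return the most probable tag sequence under the toy HMM."""
    # Each cell holds (probability of best path, the path itself).
    V = [{t: (START[t] * EMIT[t].get(words[0], 0.0), [t]) for t in STATES}]
    for w in words[1:]:
        col = {}
        for t in STATES:
            prev = max(STATES, key=lambda p: V[-1][p][0] * TRANS[p][t])
            score = V[-1][prev][0] * TRANS[prev][t] * EMIT[t].get(w, 0.0)
            col[t] = (score, V[-1][prev][1] + [t])
        V.append(col)
    best = max(STATES, key=lambda t: V[-1][t][0])
    return V[-1][best][1]

print(viterbi(["the", "dog", "barks"]))  # ['DT', 'NN', 'VB']
```

Discriminative taggers replace these generative probabilities with learned feature-based scores, but the same dynamic program recovers the best sequence.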

Machine Translation

  • Language modeling, re-ordering models, phrase extraction techniques, syntactic methods, and better training for statistical machine translation.
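To make one of these ingredients concrete, here is a sketch of a bigram language model with add-one smoothing, estimated from a three-sentence toy corpus (the corpus and smoothing choice are illustrative, not a real MT configuration).

```python
import math
from collections import Counter

corpus = ["<s> the cat sat </s>", "<s> the dog sat </s>", "<s> a cat ran </s>"]
tokens = [s.split() for s in corpus]
vocab = {w for sent in tokens for w in sent}
unigrams = Counter(w for sent in tokens for w in sent)
bigrams = Counter((a, b) for sent in tokens for a, b in zip(sent, sent[1:]))

def p(word, prev):
    """Add-one smoothed bigram probability P(word | prev)."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

def sentence_logprob(sentence):
    words = sentence.split()
    return sum(math.log(p(b, a)) for a, b in zip(words, words[1:]))

# A fluent ordering scores higher than a scrambled one:
print(sentence_logprob("<s> the cat sat </s>"))
print(sentence_logprob("<s> sat cat the </s>"))
```

In a translation system this score is one component of the decoder's objective, rewarding candidate outputs that read like fluent target-language text.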

Dialog and Speech Processing

The History of Computational Linguistics

Unsupervised and Semi-supervised Learning of Linguistic Structure

Multilingual NLP

Past Projects

Links

  • Computational Linguistics @ Stanford
  • Other groups at Stanford doing NLP-related research:
      • The Computational Semantics Lab at CSLI
      • The LinGO/LKB project at CSLI
      • Martin Kay
  • Other links:
      • Statistical NLP resources
      • Linguistics resources