Research Blog



Interactive Language Learning

We introduce a new framework for learning natural language through interaction in two different domains: a blocks world and a calendar setting. We discuss recent advances and preliminary empirical results.

Posted on 12/14/2016 by Nadav Lidor, Sida I. Wang




In Their Own Words: The 2016 Graduates of the Stanford NLP Group

This year we have a true bumper crop of graduates from the NLP Group: ten people! We're sad to see them go but excited for all the wonderful things they're off to do. Thanks to them all for being a part of the group and for their amazing contributions! We asked each of the graduates for a few words about what they did here and where they're headed next - check it out!

Posted on 07/06/2016 by Stanford NLP




Hybrid tree-sequence neural networks with SPINN

The SPINN model, recently published by a team from the NLP Group, is a strong neural network model for language understanding. In this post I analyze SPINN as a hybrid tree-sequence model, merging recurrent and recursive neural networks into a single paradigm.

Posted on 06/23/2016 by Jon Gauthier




How to help someone feel better: NLP for mental health

We apply NLP to mental health through a large-scale analysis of text-message-based crisis counseling conversations. We develop a set of discourse analysis methods to measure how various linguistic aspects of the conversations correlate with conversation outcomes. Using these, we discover actionable conversation strategies that are associated with successful counseling.

Posted on 05/25/2016 by Kevin Clark and Tim Althoff




WikiTableQuestions: a Complex Real-World Question Understanding Dataset

To truly understand natural language questions, an AI system has to grapple with the diversity of question topics (breadth) and the linguistic complexity of the questions (depth). We are releasing WikiTableQuestions, a dataset of complex questions on real-world tables, which addresses both challenges of breadth and depth.

Posted on 02/11/2016 by Ice Pasupat




The Stanford NLI Corpus Revisited

Last September at EMNLP 2015, we released the Stanford Natural Language Inference (SNLI) Corpus. We're still working excitedly to build bigger and better machine learning models that use it to its full potential, and we sense that we're not alone. So, with the launch of the lab's new website, we're sharing a bit of what we've learned about the corpus over the last few months.

Posted on 01/25/2016 by Sam Bowman




Welcome to the new Stanford NLP Research Blog

This page will hold the research blog for the Stanford Natural Language Processing group. Here group members will post descriptions of their research, tutorials, and other interesting tidbits. No posts yet, but stay tuned!

Posted on 01/14/2016 by Rob Voigt