Research Blog

Reading Group Blog -- Semantically Equivalent Adversarial Rules for Debugging NLP Models (ACL 2018)

This post is the second in our series on papers read at the Stanford NLP Reading Group. We discuss work by Ribeiro et al., which proposes a systematic method for discovering adversarial examples in text: inputs that are semantically equivalent to the original but perturbed so that the model predicts differently. We walk through the generation method, including the human-in-the-loop evaluation, and discuss its advantages over related work.
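
To make the idea concrete, here is a minimal sketch (ours, not the authors' implementation; the rule, examples, and stand-in model are all hypothetical): apply a meaning-preserving rewrite rule to an input and flag the pair as adversarial if the model's prediction flips. In the actual method, human annotators then verify that a proposed rule really preserves semantics before it is accepted.

# Sketch of the core loop behind semantically equivalent adversarial
# rules (SEARs). The rewrite rule, examples, and model are toy stand-ins.

def apply_rule(text, rule):
    """Apply a simple textual rewrite rule such as ("What is", "What's")."""
    old, new = rule
    return text.replace(old, new) if old in text else None

def find_adversaries(predict, examples, rules):
    """Return (original, perturbed, rule) triples where the prediction flips."""
    flips = []
    for text in examples:
        for rule in rules:
            perturbed = apply_rule(text, rule)
            if perturbed is not None and predict(perturbed) != predict(text):
                flips.append((text, perturbed, rule))
    return flips

# Toy usage: a brittle "model" that keys on surface form.
rules = [("What is", "What's")]
examples = ["What is the capital of France?"]
predict = lambda q: "city" if q.startswith("What is") else "unknown"
print(find_adversaries(predict, examples, rules))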

Posted on 02/19/2019 by Allen Nie




Reading Group Blog -- LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better (ACL 2018)

This post is the first installment of a series where we blog about papers we read at the Stanford NLP Reading Group. First up, we have "LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better," from Kuncoro et al. at ACL 2018. This paper analyzes the extent to which neural language models learn about syntactic structure; while plain LSTMs do quite well, RNN Grammars turn out to work even better. The paper extends the work of Linzen et al. from TACL 2016 ("Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies"), which we also discuss in this post.
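
To make the Linzen et al. evaluation concrete, here is our sketch of the number-agreement test, with a stand-in scoring function: the language model passes an item if it assigns higher probability to the correctly inflected verb than to the incorrect one, given the sentence prefix.

# `lm_logprob(word, prefix)` is a placeholder for however your language
# model scores a candidate next word given a prefix.

def agreement_accuracy(lm_logprob, items):
    """items: (prefix, correct_verb, incorrect_verb) triples."""
    hits = sum(
        1 for prefix, good, bad in items
        if lm_logprob(good, prefix) > lm_logprob(bad, prefix)
    )
    return hits / len(items)

items = [
    # The singular attractor "cabinet" intervenes between the plural
    # subject "keys" and its verb, which is what makes the item hard.
    ("The keys to the cabinet", "are", "is"),
    ("The author of the books", "is", "are"),
]
# accuracy = agreement_accuracy(my_lm_logprob, items)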

Posted on 01/25/2019 by Robin Jia




A New Multi-Turn, Multi-Domain, Task-Oriented Dialogue Dataset

Data scarcity is a major issue in dialogue research. To help alleviate this problem, we release a new dataset of 3,000 dialogues in the virtual car-assistant space. The dataset covers three distinct domains: calendar scheduling, weather information retrieval, and point-of-interest navigation.
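
For a sense of what one record might look like, here is an illustrative sketch; the field names are ours and not necessarily the released schema, so consult the dataset documentation for the actual format.

# One multi-turn dialogue in the car-assistant setting (illustrative
# shape only; not the actual schema of the release).
dialogue = {
    "domain": "weather",  # or calendar scheduling, or navigation
    "turns": [
        {"speaker": "driver",    "text": "Will it rain in Palo Alto this weekend?"},
        {"speaker": "assistant", "text": "Yes, rain is forecast for Saturday in Palo Alto."},
    ],
    "kb": [  # per-dialogue knowledge the assistant can consult
        {"location": "palo alto", "saturday": "rain", "sunday": "clear skies"},
    ],
}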

Posted on 07/03/2017 by Mihail Eric




Interactive Language Learning

We introduce a new framework for learning natural language through interaction in two different domains: a blocks world and a calendar setting. We discuss recent advancements and preliminary empirical results.
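
At a high level, learning through interaction might be sketched as the loop below; every name here is ours, not the framework's, and the real interfaces will differ.

# A deliberately simplified interactive learning loop: the system
# proposes interpretations of the user's utterance, the user indicates
# the intended one, and that choice becomes a training signal.
def interaction_loop(model, next_utterance, ask_user, update):
    while True:
        utterance = next_utterance()        # e.g., "move the red block left"
        if utterance is None:
            break
        candidates = model.rank(utterance)  # candidate interpretations/actions
        chosen = ask_user(candidates)       # the user picks the intended one
        update(model, utterance, chosen)    # learn from this interaction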

Posted on 12/14/2016 by Nadav Lidor, Sida I. Wang




In Their Own Words: The 2016 Graduates of the Stanford NLP Group

This year we have a true bumper crop of graduates from the NLP Group - ten people! We're sad to see them go but excited for all the wonderful things they're off to do. Thanks to them all for being a part of the group and for their amazing contributions! We asked all the graduates to give us a few words about what they did here and where they're headed - check it out!

Posted on 07/06/2016 by Stanford NLP




Hybrid tree-sequence neural networks with SPINN

The SPINN model, recently published by a team from the NLP Group, is a strong neural network model for language understanding. In this post, I analyze SPINN as a hybrid tree-sequence model that merges recurrent and recursive neural networks into a single paradigm.
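
To see what "hybrid tree-sequence" means in practice, here is a minimal sketch of the shift-reduce core, assuming a generic compose function; in SPINN the composition is a TreeLSTM-style cell, and the sequence LSTM that tracks left context at each step is omitted for brevity.

# Shift-reduce encoding of a sentence: SHIFT pushes the next word
# vector onto the stack; REDUCE pops two phrase vectors and composes
# them into one. The final stack element represents the whole sentence.
def run_shift_reduce(word_vecs, transitions, compose):
    stack = []
    buffer = list(reversed(word_vecs))  # so pop() consumes words left to right
    for op in transitions:              # e.g., ["shift", "shift", "reduce"]
        if op == "shift":
            stack.append(buffer.pop())
        else:  # "reduce"
            right = stack.pop()
            left = stack.pop()
            stack.append(compose(left, right))
    return stack[-1]

Because one left-to-right transition sequence drives both the stack operations and the recurrent tracker, the model processes input much like a sequence RNN while still composing phrases along the parse tree.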

Posted on 06/23/2016 by Jon Gauthier




How to help someone feel better: NLP for mental health

We apply NLP to mental health through a large-scale analysis of text-message-based crisis counseling conversations. We develop a set of discourse analysis methods to measure how various linguistic aspects of a conversation correlate with its outcome. Using these, we discover actionable conversation strategies that are associated with successful counseling.
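
The analysis pattern is roughly the following (a hedged sketch; the two features shown are illustrative, not the measures from the study):

# Correlate per-conversation linguistic features with outcomes.
from scipy.stats import pearsonr

def feature_outcome_correlations(conversations, outcomes, feature_fns):
    """conversations: transcript strings; outcomes: 0/1 success labels."""
    results = {}
    for name, fn in feature_fns.items():
        values = [fn(c) for c in conversations]
        results[name] = pearsonr(values, outcomes)  # (r, p-value)
    return results

feature_fns = {
    "word_count":     lambda c: len(c.split()),
    "question_marks": lambda c: c.count("?"),
}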

Posted on 05/25/2016 by Kevin Clark and Tim Althoff




WikiTableQuestions: a Complex Real-World Question Understanding Dataset

To truly understand natural language questions, an AI system has to grapple with the diversity of question topics (breadth) and the linguistic complexity of the questions (depth). We are releasing WikiTableQuestions, a dataset of complex questions on real-world tables, which addresses both challenges of breadth and depth.
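
As a toy illustration of the "depth" side (our example, not one drawn from the dataset): answering the question below requires filtering, sorting, and projecting over the table, not just looking up a single cell.

import pandas as pd

table = pd.DataFrame({
    "Year":    [2004, 2008, 2012],
    "City":    ["Athens", "Beijing", "London"],
    "Nations": [201, 204, 204],
})

# "Which city hosted the earliest games with more than 200 nations?"
answer = table[table["Nations"] > 200].sort_values("Year").iloc[0]["City"]
print(answer)  # -> Athens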

Posted on 02/11/2016 by Ice Pasupat




The Stanford NLI Corpus Revisited

Last September at EMNLP 2015, we released the Stanford Natural Language Inference (SNLI) Corpus. We're still working excitedly to build bigger and better machine learning models that use it to its full potential, and we sense that we're not alone. With the launch of the lab's new website, we're sharing a bit of what we've learned about the corpus over the last few months.
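
For readers picking the corpus up now, the release is easy to work with; this sketch reflects our understanding of the distributed .jsonl format, in which pairs without annotator consensus carry the gold label "-" and are usually filtered out.

import json

def load_snli(path):
    """Yield (premise, hypothesis, label) from an SNLI .jsonl file."""
    with open(path) as f:
        for line in f:
            ex = json.loads(line)
            if ex["gold_label"] == "-":
                continue  # no consensus among annotators
            yield ex["sentence1"], ex["sentence2"], ex["gold_label"]

# for premise, hypothesis, label in load_snli("snli_1.0_train.jsonl"):
#     ...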

Posted on 01/25/2016 by Sam Bowman




Welcome to the new Stanford NLP Research Blog

This page will hold the research blog for the Stanford Natural Language Processing Group. Here, group members will post descriptions of their research, tutorials, and other interesting tidbits. No posts yet, but stay tuned!

Posted on 01/14/2016 by Rob Voigt