Stanford NLP 07/06/2016
This year we have a true bumper crop of graduates from the NLP Group - ten people! We're sad to see them go but excited for all the wonderful things they're off to do. Thanks to them all for being a part of the group and for their amazing contributions! We asked all the graduates to give us a few words about what they did here and where they're headed - check it out!
My Ph.D. focused on natural language understanding. Early in the program, I worked on semantic parsing for temporal expressions, before moving on to relation extraction -- I was actively involved in Stanford's Knowledge Base Population efforts -- and textual entailment. My thesis work was on applying natural logic -- a formal logic over the syntax of natural language -- to large-scale open-domain question answering tasks. Now I'm off working on Eloquent Labs with my cofounder Keenon Werling, where we're building dialog AI for customer service.
My research has been on understanding and improving neural network models for encoding and reasoning with sentence meaning. In working on that, I've done a lot with the task of natural language inference (a.k.a. recognizing textual entailment), for which I led the creation of the Stanford NLI challenge dataset last spring. After Stanford, I'll be starting as an assistant professor in the Department of Linguistics and the Center for Data Science at NYU.
At Stanford, I worked on temporal expression resolution (SUTime), entity linking, and text-to-3D-scene generation. At the moment I am working part-time at Tableau Research on NLP for data visualization. This fall, I will be going to Princeton for a postdoc and will continue to work on projects connecting language with 3D scene understanding.
I spent the first half of my PhD wandering around, thinking about dropping out, and working on various research areas such as parsing, psycholinguistics, and word embedding learning. Then I fell in love with deep learning models, specifically neural machine translation, which I wrote my thesis on. I'll be joining the Google Brain team and am excited to contribute my part toward the future of AI.
At Stanford, I got a PhD in Linguistics and worked on dependency syntax for NLP, as a core contributor to the Universal Dependencies project. My focus was on how linguists' theoretical knowledge of syntax interacts with the practical constraints of representing syntax for NLP applications, and how representation choices affect entire NLP pipelines. Next, I'm joining a research team at Apple.
I spent two quarters improving CodaLab and one quarter working on a reading comprehension project. I'm going back to Google, where I'll work on Google Now to improve personalized news recommendations.
At Stanford I did work on lexical semantics and word embedding evaluation, and I'm now a software engineer at Google.
At Stanford, I worked on multilingual representation learning and neural machine translation. I'm spending the 2016-17 year at Google Brain before joining CMU's PhD program in Fall 2017.
At Stanford, I worked on applying deep learning methods to relation extraction and knowledge base population. I am now a research scientist at MetaMind/Salesforce.
I did research on semantic parsing and human-in-the-loop systems while at Stanford, and absolutely loved it. Now I'm off to work on Eloquent Labs with my cofounder Gabor Angeli, where we're building dialog AI for customer service.