Deep Learning in Natural Language Processing


Deep learning has recently shown much promise for NLP applications. Traditionally, most NLP approaches represent documents or sentences with a sparse bag-of-words representation. A growing body of work, including at Stanford, goes beyond this by adopting distributed representations of words, constructing a so-called "neural embedding", or vector-space representation, of each word or document.

Beyond this, Stanford work at the intersection of deep learning and natural language processing has aimed in particular at handling variable-length sentences in a natural way, by capturing the recursive structure of natural language. We explore recursive neural networks for parsing, paraphrase detection of short phrases and longer sentences, sentiment analysis, machine translation, and natural language inference. Our approaches go beyond learning word vectors: they also learn vector representations for multi-word phrases, grammatical relations, and bilingual phrase pairs, all of which are useful for a variety of NLP applications.
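To make the idea concrete, here is a minimal sketch of the core operation in a recursive neural network: merging two child vectors (for words or sub-phrases) into a single parent vector of the same dimensionality. The dimensionality, random initialization, and function names below are illustrative assumptions; in the actual models the parameters are learned from data.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                       # hypothetical embedding dimensionality
W = rng.standard_normal((d, 2 * d)) * 0.1   # composition matrix (learned in practice)
b = np.zeros(d)                             # composition bias (learned in practice)

def compose(left, right):
    """Merge two child vectors into one parent vector: p = tanh(W [c1; c2] + b)."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Toy word vectors for the phrase "(very (good movie))"; real models use
# learned embeddings rather than random ones.
very, good, movie = (rng.standard_normal(d) for _ in range(3))
phrase = compose(very, compose(good, movie))  # a d-dimensional phrase vector
```

Because the parent vector has the same dimensionality as the word vectors, the same composition function can be applied bottom-up over a parse tree of any shape, which is how the approach handles variable-length sentences.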

There's a separate page for our tutorial on Deep Learning for NLP.

Contact Information

For any comments or questions, please feel free to email danqi at cs dot stanford dot edu.