Deep Learning in Natural Language Processing


Deep learning has recently shown much promise for NLP applications. Unlike most approaches, in which documents or sentences are represented by a sparse bag-of-words vector, Stanford work at the intersection of deep learning and natural language processing handles variable-sized sentences in a natural way and captures the recursive nature of natural language. We explore recursive neural networks for parsing, paraphrase detection of short phrases and longer sentences, and sentiment analysis. Our approaches go beyond learning word vectors and instead learn vector representations for multi-word phrases, which are useful for different applications.
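The core idea of composing word vectors into phrase vectors can be sketched with a single recursive composition step. This is a minimal illustration, not the trained models described above: the weights are random, the dimensionality and word vectors are made up, and the composition function shown is the common formulation p = tanh(W[c1; c2] + b).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                       # vector dimensionality (illustrative)
W = rng.standard_normal((d, 2 * d)) * 0.1   # composition matrix (untrained)
b = np.zeros(d)                             # bias term

def compose(c1, c2):
    """Merge two child vectors into a parent phrase vector of the same size."""
    return np.tanh(W @ np.concatenate([c1, c2]) + b)

# Hypothetical word vectors for "very good movie"
very, good, movie = (rng.standard_normal(d) for _ in range(3))

# Apply the composition bottom-up along a parse tree: ((very good) movie)
very_good = compose(very, good)
phrase = compose(very_good, movie)

# The phrase vector lives in the same space as the word vectors,
# so any tree of words yields a single fixed-size representation.
assert phrase.shape == (d,)
```

Because the parent vector has the same dimensionality as its children, the same function can be applied recursively at every node of a parse tree, which is what lets these models handle sentences of any length.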




There is a separate page for our tutorial on Deep Learning for NLP.

Contact Information

For any comments or questions, please feel free to email Richard at Socher . org