Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University and Director of the Stanford Artificial Intelligence Laboratory (SAIL). His research goal is computers that can intelligently process, understand, and generate human language material. Manning is a leader in applying Deep Learning to Natural Language Processing, with well-known research on Tree Recursive Neural Networks, the GloVe model of word vectors, sentiment analysis, neural network dependency parsing, neural machine translation, question answering, and deep language understanding. He also focuses on computational linguistic approaches to parsing, natural language inference, and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schütze, 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and was President of the ACL in 2015. His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards. He holds a B.A. (Hons) from the Australian National University and received his Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is the founder of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.
Mail: Dept of Computer Science, Gates Building 2A, 353 Serra Mall, Stanford CA 94305-9020, USA
Work: +1 (650) 723-7683
Fax: +1 (650) 725-1449
Admin: Suzanne Lessard, Gates 148, +1 (650) 723-6319, firstname.lastname@example.org
Here is my publications list. However, I've become lazy, so for recent stuff, you're more likely to find it on the NLP Group publications page. Or things might be more up-to-date at Google Scholar, Semantic Scholar, or Microsoft Academic.
Introduction to Information Retrieval, with Hinrich Schütze and Prabhakar Raghavan (Cambridge University Press, 2008).
Foundations of Statistical Natural Language Processing, with Hinrich Schütze (MIT Press, 1999).
Complex Predicates and Information Spreading in LFG (1999).
Ergativity: Argument Structure and Grammatical Relations (1996).
Some of my talks are available online.
In 2013, I was program co-chair for the first International Conference on Learning Representations (see: ICLR 2013). The 2013 edition was a really fun workshop-scale event. Since then, ICLR has grown exponentially in size.
In 2013, I helped organize the first CVSC workshop. It was a really lively workshop. I also helped organize a second Workshop on Continuous Vector Space Models and their Compositionality at EACL 2014.
I helped organize a Workshop on Interactive Language Learning, Visualization, and Interfaces to be held at ACL 2014, trying to build an interdisciplinary community interested in the intersection of NLP, HCI, and data visualization.
I made a page listing all my Ph.D. graduates. You can find all my current students on the Stanford NLP Group People page.
The general area of my research is robust but linguistically sophisticated natural language understanding, and opportunities to use it in real-world domains. Particular current topics include deep learning for NLP, question answering, reading comprehension, knowledge and reasoning, Universal Dependencies and dependency parsing, and language learning through interaction.
I am interested in new students who are at Stanford, or have been accepted to Stanford, and want to work in the area of Natural Language Processing. To find out more about what I do, it's best to look at my papers or my group research page.
Online videos! You can find complete videos for four courses on NLP that I have (co-)taught online:
In Fall 2016, I taught Linguistics 278: Programming for linguists (and any other digital humanities or text-oriented social science students who think it might be a good match), mainly using Jupyter notebooks.
I co-taught tutorials on Deep Learning for NLP at ACL 2012 with Yoshua Bengio and Richard Socher, and at NAACL 2013 with Richard Socher. Slides, references, and videos are available.
In 2012, I co-taught a free online course on Natural Language Processing, one of the earliest MOOCs on Coursera, with Dan Jurafsky. We haven't found the time to revise it and teach a second version, but you can watch all the videos (see above).
In June 2011, I taught a tutorial Natural Language Processing Tools for the Digital Humanities at Digital Humanities 2011 at Stanford.
Nearly every year, I teach CS 276: Information Retrieval and Web Search, with Pandu Nayak. Earlier versions of this course included two years of two-quarter sequences, CS276A/B, on information retrieval and text information classification and extraction, broadly construed ("IR++"); see the fall quarter and winter quarter course websites. The course started in 2001, and early versions were co-taught by me, Prabhakar Raghavan, and Hinrich Schütze.
Nearly every year, I teach CS 224N / Ling 237: Natural Language Processing -- Develops an in-depth understanding of both the algorithms available for the processing of linguistic information and the underlying computational properties of natural languages. Morphological, syntactic, and semantic processing from both a linguistic and an algorithmic perspective. Focus on modern quantitative techniques in NLP: using large corpora, statistical models for acquisition, disambiguation, and parsing. Examination and construction of representative systems. Prerequisites: CS 121/221 or Ling 138/238, and programming experience. Recommended: basic familiarity with logic and probability. 3 units. I've taught this course yearly since Spring 2000. Many previous student projects are available online.
In Fall 2007, I taught Ling 289: Quantitative and Probabilistic Explanation in Linguistics, MW 2:15-3:45 in 160-318. I previously taught it in Winter 2002 (née Ling 236) and Winter 2005 (as Ling 235).
In the summer of 2007, I taught at the LSA Linguistic Institute: Statistical Parsing and Computational Linguistics in Industry.
In Fall 1999 and Winter 2001, I taught CS 121 Artificial Intelligence. The textbook was S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach.
I ran the NLP Reading Group from 1999 to 2002; it is now student organized.
LaTeX: When I used to have more time (i.e., when I was a grad student), I used to spend some of it writing (La)TeX macros. [Actually, that's a lie; I still spend some time doing it....]
We've got two sons: Joel [linkedin, github] and Casey [linkedin, github]. Here are my opinions on books for kids.