Christopher Manning is a professor of computer science and linguistics at Stanford University. He received his Ph.D. from Stanford in 1995, and held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. His research goal is computers that can intelligently process, understand, and generate human language material. Manning concentrates on machine learning approaches to computational linguistic problems, including syntactic parsing, computational semantics and pragmatics, textual inference, machine translation, and recursive deep learning for NLP. He is an ACM Fellow, an AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).
Mail: Dept of Computer Science, Gates Building 1A, 353 Serra Mall, Stanford CA 94305-9020, USA
Work: +1 (650) 723-7683
Fax: +1 (650) 725-1449
Office hours: This quarter by appointment; please email.
Most of my papers are available online in my publication list; more up-to-date listings may be found at Google Scholar or Microsoft Academic Search.
Introduction to Information Retrieval, with Hinrich Schütze and Prabhakar Raghavan (Cambridge University Press, 2008).
Foundations of Statistical Natural Language Processing, with Hinrich Schütze (MIT Press, 1999).
Complex Predicates and Information Spreading in LFG (1999).
Ergativity: Argument Structure and Grammatical Relations (1996).
A few of my talks are available online.
In 2013, I was one of the program co-chairs for the new International Conference on Learning Representations (ICLR 2013). It was a really good workshop-scale event. You can now get information on ICLR 2014.
In 2013, I helped organize the first CVSC workshop. It was a really lively workshop. We're organizing a second Workshop on Continuous Vector Space Models and their Compositionality at EACL 2014.
I'm helping organize a Workshop on Interactive Language Learning, Visualization, and Interfaces to be held at ACL 2014. We’re trying to build an interdisciplinary community interested in the intersection of NLP, HCI, and data visualization. Join us!
My current research focuses on robust but linguistically sophisticated probabilistic natural language processing, and opportunities to use it in real-world domains. Particular topics include richer models for probabilistic parsing, computational semantics, machine translation, grammar induction, text categorization and clustering, incorporating probabilistic models into syntactic theories, electronic dictionaries and their usability (particularly for indigenous languages), information extraction and presentation, and linguistic typology.
My research at Stanford is currently supported by an IBM Faculty Partnership Award, IARPA, DARPA, and the NSF.
I am interested in new students wanting to work in the area of Natural Language Processing. To find out more about what I do, it's best to look at my papers or my group research page. To cut down on my email load, please read that material before contacting me.
I co-taught tutorials on Deep Learning for NLP at ACL 2012 with Yoshua Bengio and Richard Socher, and at NAACL 2013 with Richard Socher. Slides, references, and videos are available.
Starting January 2012, I'll be teaching a free online course on Natural Language Processing with Dan Jurafsky.
In June 2011, I taught a tutorial Natural Language Processing Tools for the Digital Humanities at Digital Humanities 2011 at Stanford.
In fall 2008, I will again teach CS 276: Information Retrieval and Web Search, with Prabhakar Raghavan. Earlier versions of this course include two years of two-quarter sequences, CS276A/B, on information retrieval and text information classification and extraction, broadly construed ("IR++"): Fall quarter course website. Winter quarter course website. This course started in 2001. Early versions were also co-taught by Hinrich Schütze.
In spring 2009, I will again teach CS 224N / Ling 237: Natural Language Processing. The course develops an in-depth understanding of both the algorithms available for the processing of linguistic information and the underlying computational properties of natural languages. It covers morphological, syntactic, and semantic processing from both a linguistic and an algorithmic perspective, with a focus on modern quantitative techniques in NLP: using large corpora, and statistical models for acquisition, disambiguation, and parsing. It includes examination and construction of representative systems. Prerequisites: CS 121/221 or Ling 138/238, and programming experience. Recommended: basic familiarity with logic and probability. 3 units. I've taught this course yearly since spring 2000. Many previous student projects are available online.
In fall 2007, I taught Ling 289: Quantitative and Probabilistic Explanation in Linguistics, MW 2:15-3:45 in 160-318. I previously taught it in winter 2002 (née Ling 236) and winter 2005 (as Ling 235).
In the summer of 2007, I taught two courses at the LSA Linguistic Institute: Statistical Parsing, and Computational Linguistics in Industry.
In fall 1999 and winter 2001, I taught CS 121: Artificial Intelligence. The textbook was S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach.
The NLP Reading Group (which I ran 1999-2002) has now been transformed into The Natural Language and Speech Processing Colloquium.
LaTeX: When I used to have more time (i.e., when I was a grad student), I used to spend some of it writing (La)TeX macros. [Actually, that's a lie; I still spend some time doing it....]
We've got two sons: Joel and Casey. Here are my opinions on books for the very young.