Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Linguistics and Computer Science at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). His research goal is computers that can intelligently process, understand, and generate human languages. Manning was an early leader in applying Deep Learning to Natural Language Processing (NLP), with well-known research on the GloVe model of word vectors, attention, machine translation, question answering, self-supervised model pre-training, tree-recursive neural networks, machine reasoning, dependency parsing, sentiment analysis, and summarization. He also focuses on computational linguistic approaches to parsing, natural language inference, and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning has coauthored leading textbooks on statistical approaches to NLP (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze 2008), as well as linguistic monographs on ergativity and complex predicates. His online CS224N Natural Language Processing with Deep Learning videos have been watched by hundreds of thousands of people. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and he served as President of the ACL in 2015. His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards, as well as an ACL Test of Time Award. He holds a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford (1994), and an Honorary Doctorate from the University of Amsterdam (2023), and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is the founder of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP and Stanza software.
Mail: Dept of Computer Science, Gates Building 3A, 353 Jane Stanford Way, Stanford CA 94305-9030, USA
Work: +1 (650) 723-7683
Fax: +1 (650) 725-1449
Office hours: For CS224N: Mon 2:45–5:00pm, book here; otherwise contact Suzanne
Assistant: Suzanne Lessard, Gates 232, +1 (650) 723-6319, firstname.lastname@example.org
Here is my publications list. However, I've become lazy, so you're more likely to find recent stuff on the NLP Group publications page, Google Scholar, or Semantic Scholar.
Manning, Raghavan, and Schütze, Introduction to Information Retrieval (Cambridge University Press, 2008).
Manning and Schütze, Foundations of Statistical Natural Language Processing (MIT Press, 1999).
Andrews and Manning, Complex Predicates and Information Spreading in LFG (1999).
Manning, Ergativity: Argument Structure and Grammatical Relations (1996).
Some of my talks are available online.
In 2013, I was program co-chair for the first International Conference on Learning Representations (see: ICLR 2013). The 2013 edition was a really fun workshop-scale event. Since then, ICLR has grown enormously.
In 2013, I helped organize the first Workshop on Continuous Vector Space Models and their Compositionality (CVSC). It was a really lively workshop, and I helped organize a second edition at EACL 2014. I also helped organize a Workshop on Interactive Language Learning, Visualization, and Interfaces, held at ACL 2014, which tried to build an interdisciplinary community interested in the intersection of NLP, HCI, and data visualization.
I have a page listing all my Ph.D. graduates. You can find all my current students on the Stanford NLP Group People page.
The general area of my research is robust but linguistically sophisticated natural language understanding and generation, and opportunities to use it in real-world domains. Particular current topics include deep learning for NLP, compositionality, question answering, large pre-trained language models, knowledge and reasoning, Universal Dependencies, and low-resource languages. To find out more about what I do, it's best to look at my papers.
Online videos! You can find complete videos online for several NLP courses that I have (co-)taught.
In Autumn 2022, I taught Linguistics 200: Foundations of Linguistic Theory. This is a class for Linguistics Ph.D. students, aimed at giving them a richer, broader appreciation of the development of linguistic thinking.
Nearly every year since 2000, I have taught CS 224N / Ling 284: Natural Language Processing with Deep Learning.
In Fall 2016, I taught Linguistics 278: Programming for linguists (and any other digital humanities or text-oriented social science students who think it might be a good match), mainly using Jupyter notebooks.
From 2003 through 2019, I taught CS 276: Information Retrieval and Web Search, in recent years with Pandu Nayak. Earlier versions of this course include two years of two-quarter sequences CS276A/B on information retrieval and text information classification and extraction, broadly construed ("IR++"): Fall quarter course website, Winter quarter course website. Early versions of this course were co-taught by me, Prabhakar Raghavan, and Hinrich Schütze.
I co-taught tutorials on Deep Learning for NLP at ACL 2012 with Yoshua Bengio and Richard Socher, and at NAACL 2013 with Richard Socher. Slides, references, and videos are available.
In June 2011, I taught a tutorial Natural Language Processing Tools for the Digital Humanities at Digital Humanities 2011 at Stanford.
In fall 2007, I taught Ling 289: Quantitative and Probabilistic Explanation in Linguistics, MW 2:15–3:45 in 160-318. I previously taught it in winter 2002 (née Ling 236) and winter 2005 (as Ling 235).
In the summer of 2007, I taught at the LSA Linguistic Institute: Statistical Parsing and Computational Linguistics in Industry.
In fall 1999 and winter 2001, I taught CS 121: Artificial Intelligence. The textbook was S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach.
I ran the NLP Reading Group from 1999 to 2002; it is now student-organized.
LaTeX: When I used to have more time (i.e., when I was a grad student), I used to spend some of it writing (La)TeX macros. [Actually, that's a lie; I still spend some time doing it....]
We've got two kids: Joel [linkedin, github] and Casey [linkedin, github]. Here are my (aging) opinions on books for kids.