Christopher Manning

Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)
Associate Director, Stanford Institute for Human-Centered Artificial Intelligence (HAI)

Stanford NLP Group, Stanford AI Lab, HAI, Linguistics and Computer Science, Stanford University

What's New?

Bio

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Linguistics and Computer Science at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). From 2010, Manning pioneered Natural Language Understanding and Inference using Deep Learning, with impactful research on sentiment analysis, paraphrase detection, the GloVe model of word vectors, attention, neural machine translation, question answering, self-supervised model pre-training, tree-recursive neural networks, machine reasoning, summarization, and dependency parsing, work for which he has received two ACL Test of Time Awards and the IEEE John von Neumann Medal (2024). He earlier led the development of empirical, probabilistic approaches to NLP, computational linguistics, and language understanding, defining and building theories and systems for natural language inference, syntactic parsing, machine translation, and multilingual language processing, work for which he won ACL, Coling, EMNLP, and CHI Best Paper Awards.

In NLP education, Manning coauthored foundational textbooks on statistical NLP (Manning and Schütze, 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), and his online CS224N Natural Language Processing with Deep Learning course videos have been watched by hundreds of thousands of people. In linguistics, Manning is a principal developer of Stanford Dependencies and Universal Dependencies, and has authored monographs on ergativity and complex predicates. He is the founder of the Stanford NLP Group (@stanfordnlp) and was an early proponent of open source software in NLP with Stanford CoreNLP and Stanza.

Manning is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and was President of the ACL in 2015. He earned a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford in 1994, and an honorary doctorate from the University of Amsterdam in 2023. He held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

Contact

Mail: Dept of Computer Science, Gates Building 3A, 353 Jane Stanford Way, Stanford CA 94305-9030, USA
Email: manning@cs.stanford.edu
Twitter: @chrmanning
Work phone: +1 (650) 723-7683
Fax: +1 (650) 725-1449
Room: Gates 348
Office hours: Contact Suzanne
Assistant: Suzanne Lessard, Gates 232, +1 (650) 723-6319, slessard@stanford.edu

Brief CV

Papers

Here is my old publications list. However, I've become lazy, so you're more likely to find recent stuff on Google Scholar, Semantic Scholar, or the NLP Group publications page.

Books

Introduction to Information Retrieval, with Prabhakar Raghavan and Hinrich Schütze (Cambridge University Press, 2008).
Foundations of Statistical Natural Language Processing, with Hinrich Schütze (MIT Press, 1999).
Complex Predicates and Information Spreading in LFG, with Avery Andrews (CSLI Publications, 1999).
Ergativity: Argument Structure and Grammatical Relations (CSLI Publications, 1996).

Conferences and Talks

Some of my talks are available online.

Students

I have a page listing all my Ph.D. graduates. You can find all my current students on the Stanford NLP Group People page.

Research Projects

The general area of my research is robust but linguistically sophisticated natural language understanding and generation, together with opportunities to use it in real-world domains. Particular current topics include deep learning for NLP, compositionality, question answering, large pre-trained language models, knowledge and reasoning, Universal Dependencies, and low-resource languages. To find out more about what I do, it's best to look at my papers.

Courses

Online videos! You can find complete videos online for several of the NLP courses that I have (co-)taught.

In Autumn 2022 and 2024, I taught Linguistics 200: Foundations of Linguistic Theory. This is a class for Linguistics Ph.D. students, aimed at giving them a richer, broader appreciation of the development of linguistic thinking.

Nearly every year since 2000, I have taught CS 224N / Ling 284: Natural Language Processing with Deep Learning.

In Fall 2016, I taught Linguistics 278: Programming for linguists (and any other digital humanities or text-oriented social science students who think it might be a good match), mainly using Jupyter notebooks.

From 2003 through 2019, I taught CS 276: Information Retrieval and Web Search, in recent years with Pandu Nayak. Earlier versions of this course included two years of a two-quarter sequence, CS 276A/B, on information retrieval and text information classification and extraction, broadly construed ("IR++"): Fall quarter course website, Winter quarter course website. Early versions of this course were co-taught with Prabhakar Raghavan and Hinrich Schütze.

I co-taught tutorials on Deep Learning for NLP at ACL 2012 with Yoshua Bengio and Richard Socher, and at NAACL 2013 with Richard Socher. Slides, references, and videos are available.

In June 2011, I taught a tutorial, Natural Language Processing Tools for the Digital Humanities, at Digital Humanities 2011 at Stanford.

In Fall 2007, I taught Ling 289: Quantitative and Probabilistic Explanation in Linguistics (MW 2:15-3:45 in 160-318). I previously taught it in Winter 2002 (née Ling 236) and Winter 2005 (as Ling 235).

In Summer 2007, I taught Statistical Parsing and Computational Linguistics in Industry at the LSA Linguistic Institute.

In Fall 1999 and Winter 2001, I taught CS 121: Artificial Intelligence. The textbook was S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach.

I ran the NLP Reading Group from 1999 to 2002; it is now student-organized.

Other stuff

LaTeX: When I had more time (i.e., when I was a grad student), I spent some of it writing (La)TeX macros. [Actually, that's a lie; I still spend some time doing it...]

We've got two kids: Joel [LinkedIn, GitHub] and Casey [LinkedIn, GitHub]. Here are my (aging) opinions on books for kids.


http://nlp.stanford.edu/~manning/
Christopher Manning <manning@cs.stanford.edu>. Hand-rolled HTML. Last modified: 2024-11-14.