Bios for Christopher Manning

Stanford Webpage 1999

Chris Manning works on systems and formalisms that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, text understanding and mining, constraint-based theories of grammar (HPSG and LFG), computational lexicography (involving work in XML, XSL, and information visualization), information extraction, and syntactic typology.

Brief Bio

Stanford SoE Faculty and Resource Guide 1999

NAME: Christopher D. Manning
TITLE: Assistant Professor of Computer Science and Linguistics
AREA OF INTEREST: Human Language Technology / Natural Language Processing
BRIEF DESCRIPTION: Manning works on systems that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, and information extraction. Ph.D. Stanford 1994.

Computer Forum, 1999

Christopher Manning, Assistant Professor of Computer Science and Linguistics, works on systems and formalisms that can intelligently process and produce human languages. His research interests range from applying statistical natural language processing techniques to problems of information retrieval, information extraction, text data mining, and computational lexicography, through building probabilistic models of language phenomena, to constraint-based theories of grammar (HPSG and LFG) and their use in explaining grammatical structures and their variation across languages.

For AAAI-2000 tutorial on Statistical NLP

Christopher Manning is assistant professor of computer science and linguistics at Stanford University. Previously, he held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include statistical models of language, information extraction, and computational lexicography. He is co-author of Foundations of Statistical Natural Language Processing (MIT Press, 1999).

CSLI Industrial Affiliate Program 2000

Christopher Manning is Assistant Professor of Computer Science and Linguistics. He works primarily on systems that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, information extraction, and topics in linguistic typology, including argument structure, serial verbs, causatives, and ergativity.

For AI broad area colloq 2000

Christopher Manning is an Assistant Professor of Computer Science and Linguistics at Stanford University. His research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, and syntactic typology. He received his Ph.D. in linguistics from Stanford University in 1994. From 1994-1996, he was on the faculty of the Computational Linguistics Program at Carnegie Mellon University, and from 1996-1999 he was "back home" at the University of Sydney, before returning to Stanford at the start of this academic year. His most recent book is Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

For AI intro, 2000 CS admits weekend

Chris Manning works on systems that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, information extraction and text mining, and topics in syntactic theory.

For 9th Logic, Language, and Computation conference, 2000

Christopher Manning is assistant professor of computer science and linguistics at Stanford University. His research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, information extraction, and topics in linguistic typology.

For Australian Linguistic Institute brochure

Christopher Manning received his BA (Hons) from the Australian National University and then a Ph.D. from Stanford University in 1994. Since then he has held faculty positions in the Computational Linguistics Program (Philosophy Dept) at Carnegie Mellon University and the Linguistics Department at the University of Sydney, and since September 1999 he has been Assistant Professor of Computer Science and Linguistics at Stanford University. His research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography, semi-structured data and XML, information extraction, and topics in syntactic typology including ergativity and argument structure. For the last two years, he has been working with Jane Simpson and other colleagues on projects in computational lexicography and dictionary usability, focussing particularly on Australian languages. As well as various articles and book chapters, he is author or co-author of three books, Ergativity: Argument Structure and Grammatical Relations (CSLI Publications, 1996), Complex Predicates and Information Spreading in LFG (CSLI Publications, 1999, with Avery Andrews), and Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

For Biomedical Informatics brochure, 2000

CHRISTOPHER MANNING, Ph.D., Assistant Professor of Computer Science and Linguistics. Christopher Manning works on systems that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), parsing systems, computational lexicography, information extraction and text mining, and topics in syntactic theory and typology.

For CMU LTI talk, 2000

Christopher Manning is assistant professor of computer science and linguistics at Stanford University. Previously, he held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include statistical models of language, information extraction, and computational lexicography. He is co-author of Foundations of Statistical Natural Language Processing (MIT Press, 1999).

Bio for company, 2000

Christopher Manning is the only faculty member at Stanford University with appointments in both the Computer Science and Linguistics departments. He works on systems and formalisms that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, text understanding and text mining, constraint-based theories of grammar (HPSG and LFG), computational lexicography (involving work in XML, XSL, and information visualization), information extraction, and syntactic typology. Chris received his BA (Hons) from the Australian National University in mathematics, computer science, and linguistics, and his PhD from Stanford in Linguistics. Prior to joining the Stanford faculty, he held faculty positions at Carnegie Mellon University and the University of Sydney. He is the author or coauthor of three books including Foundations of Statistical Natural Language Processing.

Bio for DB Seminar, Sep 2000

Christopher Manning is assistant professor of computer science and linguistics at Stanford University. Previously, he held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar, parsing systems, computational lexicography, information extraction and text mining, and topics in syntactic theory and typology.

Five line bio for talk at company, Oct 2000

Christopher Manning is assistant professor of computer science and linguistics at Stanford University. Previously, he held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include statistical models of language, syntax, information extraction, and computational lexicography.

Bio for CSLI IAP Meeting, Nov 2000

Christopher Manning, Assistant Professor of Computer Science and Linguistics at Stanford University, works primarily on systems that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, constraint-based theories of grammar (HPSG and LFG), computational lexicography (working with XML, XSL, and information visualization), information extraction and text mining, and topics in syntax and cross-linguistic typology. His most recent book is Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schütze).

Bio for NIPS 2001 tutorial, December 2001

Christopher Manning is an Assistant Professor of Computer Science and Linguistics at Stanford University. He received his Ph.D. from Stanford University in 1994, and served on the faculty of the Computational Linguistics Program at Carnegie Mellon University (1994-1996) and the University of Sydney Linguistics Department (1996-1999) before returning to Stanford. His research interests include probabilistic models of language, natural language parsing, constraint-based linguistic theories, syntactic typology, information extraction and text mining, and computational lexicography. He is the author of three books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

Bio for MIT Press book, December 2001

Christopher Manning is an Assistant Professor of Computer Science and Linguistics at Stanford University. He received his Ph.D. from Stanford University in 1994, and served on the faculty of the Computational Linguistics Program at Carnegie Mellon University and the Linguistics Department at the University of Sydney before returning to Stanford. His research interests include probabilistic models of language, statistical natural language processing, constraint-based linguistic theories, syntactic typology, information extraction, text mining, and computational lexicography. He is the author of three books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schütze).

Bio for Berkeley/JHU talks, September 2002

Christopher Manning is an Assistant Professor of Computer Science and Linguistics at Stanford University. He received his Ph.D. from Stanford University in 1994, and served on the faculty of the Computational Linguistics Program at Carnegie Mellon University (1994-1996) and the University of Sydney Linguistics Department (1996-1999) before returning to Stanford. His research interests include probabilistic models of language, natural language parsing, constraint-based linguistic theories, syntactic typology, information extraction and text mining, and computational lexicography. He is the author of three books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

Bio for ACL 2003 tutorial, March 2003

Christopher Manning is an assistant professor of computer science and linguistics at Stanford University. Previously, he held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include probabilistic natural language processing, syntax, information extraction, and computational lexicography. He is the author of three books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

Bio for Computer Forum 2003, April 2003

Christopher Manning is an assistant professor of computer science and linguistics at Stanford University. Previously, he held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include probabilistic natural language processing, syntax, computational lexicography, information extraction, and text mining. He is the author of three books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

Bio for Pattern Recognition Journal 2004

Christopher Manning is an assistant professor of computer science and linguistics at Stanford University. Prior to this, he received his BA (Hons) from the Australian National University, his PhD from Stanford in 1994, and held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include probabilistic natural language processing, syntax, parsing, computational lexicography, information extraction and text mining. He is the author of three books, including the well-known text Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

Bio for KDD grant 2004

Christopher Manning is an Assistant Professor of Computer Science and Linguistics at Stanford University. He received his Ph.D. from Stanford University in 1994, and held faculty positions in the Computational Linguistics Program at Carnegie Mellon University (1994-1996) and in the University of Sydney Linguistics Department (1996-1999) before returning to Stanford. He is a Terman Fellow and recipient of an IBM Faculty Award. His recent work has concentrated on statistical parsing, grammar induction, and probabilistic approaches to problems such as word sense disambiguation, part-of-speech tagging, and named entity recognition, with an emphasis on complementing leading machine learning methods with use of rich linguistic features. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and (with Dan Klein) received the best paper award at the Association for Computational Linguistics 2003 meeting for the paper Accurate Unlexicalized Parsing.

MSFT talk 2005

Christopher Manning is an Assistant Professor of Computer Science and Linguistics at Stanford University. He received his Ph.D. from Stanford University in 1994, and held faculty positions in the Computational Linguistics Program at Carnegie Mellon University (1994-1996) and in the University of Sydney Linguistics Department (1996-1999) before returning to Stanford. His recent work has concentrated on statistical parsing, grammar induction, and probabilistic approaches to problems such as part-of-speech tagging, named entity recognition, and learning semantic relations, with an emphasis on complementing leading machine learning methods with use of rich linguistic features. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and with Dan Klein received the best paper award at the Association for Computational Linguistics 2003 meeting for the paper Accurate Unlexicalized Parsing.

Bio for MIT Talk, October 2005

Christopher Manning is an assistant professor of computer science and linguistics at Stanford University. Previously, he held faculty positions at Carnegie Mellon University and the University of Sydney. His research interests include probabilistic natural language parsing, syntax, information extraction and text mining. He is the author of three books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schuetze).

LSA Institute 2007

Christopher Manning is an Associate Professor of Linguistics and Computer Science at Stanford University. His recent work has concentrated on probabilistic approaches to NLP problems, particularly statistical parsing, robust textual inference, and grammar induction. His work emphasizes considering different languages and complementing leading machine learning methods with use of rich linguistic features. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and has an in press book on Information Retrieval and Web Search (with Raghavan and Schuetze, CUP 2007). Together with Dan Klein, he received the best paper award at the Association for Computational Linguistics 2003 meeting for the paper Accurate Unlexicalized Parsing. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

For TILR 2007 (Toward the Interoperability of Language Resources)

Christopher Manning is an Associate Professor of Linguistics and Computer Science at Stanford University. His recent work has concentrated on probabilistic approaches to NLP problems, particularly statistical parsing, robust textual inference, and grammar induction, but has also involved ongoing work on computational lexicography and dictionary usability, focussing particularly on Australian languages. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and has an in press book on Information Retrieval and Web Search (with Raghavan and Schuetze, CUP 2008). Together with Dan Klein, he received the best paper award at the Association for Computational Linguistics 2003 meeting for the paper Accurate Unlexicalized Parsing. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

For Computer Forum 2007

Professor Manning works on systems and formalisms that can intelligently process and produce human languages. Particular research interests include probabilistic models of language and statistical natural language processing, robust textual understanding and inference, named entity recognition, information extraction, and text mining, statistical parsing of various languages, constraint-based theories of grammar and probabilistic extensions of them, and computational lexicography (involving work in XML, XSL, and information visualization).

For letter 2007

I am an associate professor of computer science and linguistics at Stanford University. Previously, I graduated with a PhD from Stanford Linguistics in 1994, and then held faculty positions at Carnegie Mellon University and the University of Sydney. My research interests include probabilistic natural language parsing, statistical parsing, grammar induction, and probabilistic approaches to information extraction, text mining, and linguistic questions. In general my work emphasizes complementing leading machine learning methods with use of rich linguistic features. I am the author of three published books, including Foundations of Statistical Natural Language Processing (MIT Press, 1999, with Hinrich Schütze), and also of the in-press textbook Introduction to Information Retrieval (Cambridge, 2008, with Prabhakar Raghavan and Hinrich Schütze). Together with Dan Klein, I received the best paper award at the 2003 meeting of the Association for Computational Linguistics.

For NLP retreat 2008

Christopher Manning is an Associate Professor of Linguistics and Computer Science at Stanford University. His recent work has concentrated on probabilistic approaches to NLP problems, particularly statistical parsing, robust textual inference, and grammar induction. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and has an in press book on Information Retrieval and Web Search (with Raghavan and Schuetze, CUP 2008). He is Australian; his Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

For Google Faculty Summit 2008

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. His work concentrates on probabilistic approaches to NLP, particularly statistical parsing, robust textual inference, and grammar induction. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and a new book this summer, Introduction to Information Retrieval (with Raghavan and Schuetze). Australian. Stanford Ph.D. 1994. Previous faculty positions at Carnegie Mellon University and the University of Sydney.

Textual Inference workshop 2009

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. Manning coauthored the leading textbook on statistical approaches to NLP (Manning and Schuetze 1999) and has recently coauthored Introduction to Information Retrieval (with Raghavan and Schuetze). His work concentrates on probabilistic approaches to NLP, including statistical parsing, grammar induction, named entity recognition, and machine translation. For the last 5 years he has been particularly interested in pursuing approaches to text understanding and computational semantics. His group has participated in all of the RTE Challenges, and a recent paper with Bill MacCartney on Natural Language Inference won the Coling 2008 Best Paper Award.

DARPA Machine Reading 2009

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University (PhD, Stanford, 1994). Manning has coauthored leading textbooks on statistical approaches to NLP (Manning and Schuetze 1999) and information retrieval (Manning et al. 2008). His recent work concentrates on statistical parsing, text understanding and computational semantics, machine translation, and large-scale joint inference for NLP. His recent paper with Bill MacCartney on Natural Language Inference won the Coling 2008 Best Paper Award.

Dan Jurafsky is an Associate Professor of Linguistics and Computer Science, by courtesy, at Stanford University (PhD, Berkeley, 1992). His recent work concentrates on semantic role labeling, prosody in speech, and lexical relation acquisition. Jurafsky recently co-authored a second edition of the standard textbook Speech and Language Processing (2008). He received a MacArthur Fellowship in 2003.

Andrew Ng is an Assistant Professor of Computer Science at Stanford University (PhD, Berkeley, 2003). His research interests include machine learning theory, lexical relation acquisition, reinforcement learning for robot control, computer vision, and broad-competence AI. His group has won best paper/best student paper awards at ACL, CEAS, 3DRR and ICML. He is also a recipient of the Alfred P. Sloan Fellowship.

LinkedIN

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schuetze 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work concentrates on probabilistic approaches to NLP problems and computational semantics, particularly including such topics as statistical parsing, robust textual inference, machine translation, grammar induction, and large-scale joint inference for NLP. He has won several best paper awards; most recently his paper with Bill MacCartney on Natural Language Inference won the Coling 2008 Best Paper Award. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

Specialties: Natural Language Processing, Computational Linguistics

For NSF Panel 2009

Christopher Manning is an Associate Professor of Linguistics and Computer Science at Stanford University. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schuetze 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work has concentrated on probabilistic approaches to NLP problems and computational semantics, particularly including such topics as statistical parsing, robust textual inference, machine translation, grammar induction, and large-scale joint inference for NLP. He also maintains interests in probabilistic approaches to linguistics and work on computational lexicography and dictionary usability, focusing particularly on Australian languages. His recent paper with Bill MacCartney on Natural Language Inference won the Coling 2008 Best Paper Award. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

For Google 2009

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schuetze 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work concentrates on probabilistic approaches to NLP problems and computational semantics, particularly including such topics as statistical parsing, robust textual inference, machine translation, grammar induction, and large-scale joint inference for NLP. He has won several best paper awards; most recently his paper with Bill MacCartney on Natural Language Inference won the Coling 2008 Best Paper Award. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

For Learning Workshop 2011

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. Manning has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schuetze 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work has concentrated on probabilistic approaches to NLP problems and computational semantics, particularly including such topics as statistical parsing, part-of-speech tagging, and named entity recognition; robust textual inference; machine translation; grammar induction; and large-scale joint inference for NLP. Recently he has been trying to swap back in memories of Rumelhart and McClelland (1986).

For Computer Forum 2012

Christopher Manning is an Associate Professor of Computer Science and Linguistics at Stanford University. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is a fellow of AAAI and the Association for Computational Linguistics. Manning has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schuetze 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work has concentrated on probabilistic approaches to NLP problems and computational semantics, particularly including such topics as statistical parsing, robust textual inference, machine translation, large-scale joint inference for NLP, computational pragmatics, and hierarchical deep learning for NLP.

For ICLR 2013

Christopher Manning is a Professor of Computer Science and Linguistics at Stanford University. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is a AAAI Fellow and an ACL Fellow, and has coauthored leading textbooks on statistical approaches to natural language processing (Manning and Schuetze 1999) and information retrieval (Manning, Raghavan, and Schuetze, 2008), as well as linguistic monographs on ergativity and complex predicates. His recent work has concentrated on machine learning approaches to various NLP problems, including statistical parsing, named entity recognition, robust textual inference, machine translation, recursive deep learning models for NLP, and large-scale joint inference for NLP.

Web page around 2012

Chris Manning works on systems and formalisms that can intelligently process and produce human languages. His research concentrates on probabilistic models of language and statistical natural language processing, including text understanding, text mining, machine translation, information extraction, named entity recognition, part-of-speech tagging, probabilistic parsing and semantic role labeling, syntactic typology, computational lexicography, and other topics in computational linguistics and machine learning.

For Poetics journal 2013

Christopher Manning is a Professor of Computer Science and Linguistics at Stanford University. He is a AAAI Fellow and an ACL Fellow, and has coauthored leading textbooks on statistical approaches to natural language processing and information retrieval. His recent research concentrates on machine learning approaches to various computational linguistic problems, including parsing, semantic similarity, and textual inference.

For SWANK 2014

Christopher Manning is a Professor of Computer Science and Linguistics at Stanford University. He is a AAAI Fellow and an ACL Fellow, and has coauthored leading textbooks on statistical approaches to natural language processing and information retrieval. His recent research concentrates on machine learning approaches to various computational linguistic problems and computational semantics, including parsing, textual inference, machine translation, and hierarchical deep learning for NLP.

For EngX 2014

Christopher Manning is a professor of computer science and linguistics at Stanford University. His research goal is computers that can intelligently process, understand, and generate human language material. Manning concentrates on machine learning approaches to computational linguistic problems, including syntactic parsing, computational semantics and pragmatics, textual inference, machine translation, and hierarchical deep learning for NLP. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).

For Tencent 2014

Christopher Manning is a professor of computer science and linguistics at Stanford University. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. His research goal is computers that can intelligently process, understand, and generate human language material. Manning concentrates on machine learning approaches to computational linguistic problems, including syntactic parsing, computational semantics and pragmatics, textual inference, machine translation, and recursive deep learning for NLP. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).

For Rework 2016

Christopher Manning is a professor of computer science and linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing, including exploring Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. Manning is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).

CIFAR NCAP

Christopher Manning is a professor of computer science and linguistics at Stanford University. His Ph.D. is from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. His research goal is computers that can intelligently process, understand, and generate human language material. Manning concentrates on machine learning approaches to computational linguistic problems, including syntactic parsing, computational semantics and pragmatics, textual inference, machine translation, and using deep learning for NLP. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).

Neural Machine Translation tutorial, 2016

Christopher Manning, Stanford University, manning@stanford.edu, @chrmanning

Christopher Manning is a professor of computer science and linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing, including exploring Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. Manning is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp).

LinkedIN 2017

Christopher Manning is a Professor of Computer Science and Linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing, with well-known research on Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. He also focuses on computational linguistic approaches to parsing, robust textual inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL. His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards. He has a B.A. (Hons) from The Australian National University and a Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is a member of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.

NSF 2017

Christopher Manning is a professor of computer science and linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing, including exploring Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. He also focuses on computational linguistic approaches to parsing, robust textual inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and a Past President of ACL. He has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.

100 words for Harker Programming Invitational

Christopher Manning is a professor of computer science and linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing and has explored tree recursive neural networks, the GloVe word vectors, neural machine translation, parsing, and multilingual language processing, including developing Stanford Dependencies and Universal Dependencies. Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and a Past President of ACL. He is a member of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.

Tsinghua 2017

Christopher Manning is the Thomas M. Siebel Professor in Machine Learning at Stanford University, in the Departments of Computer Science and Linguistics. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing, with well-known research on Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. His computational linguistics work also covers probabilistic models of language, natural language inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL. Research of his has won ACL, Coling, EMNLP, and CHI Best Paper Awards. He has a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is a member of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.

Website 2019

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University and Director of the Stanford Artificial Intelligence Laboratory (SAIL). His research goal is computers that can intelligently process, understand, and generate human language material. Manning is a leader in applying Deep Learning to Natural Language Processing, with well-known research on Tree Recursive Neural Networks, the GloVe model of word vectors, sentiment analysis, neural network dependency parsing, neural machine translation, question answering, and deep language understanding. He also focuses on computational linguistic approaches to parsing, robust textual inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (NLP) (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and a Past President of the ACL (2015). His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards. He has a B.A. (Hons) from The Australian National University and a Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is the founder of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software.

90 words for Alexa Prize

Christopher Manning is a professor of computer science and linguistics at Stanford University and Director of the Stanford AI Lab. He is a leader in applying deep neural networks to Natural Language Processing, including work on tree recursive models, sentiment analysis, neural machine translation and parsing, and the GloVe word vectors. He founded the Stanford NLP group (@stanfordnlp), developed Stanford Dependencies and Universal Dependencies, and manages development of the Stanford CoreNLP software. Manning is an ACM, AAAI, and ACL Fellow, and a Past President of ACL.

Bengio Turing Prize letter

Christopher Manning is a Professor at Stanford University, who has worked on Natural Language Processing since 1993. Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL (2015). He is the most-cited researcher within the field of NLP, and he has won best paper awards at ACL, Coling, EMNLP, and CHI. He is a leader in applying deep learning to NLP, with well-known work on sentiment analysis, dependency parsing, the GloVe model of word vectors, neural machine translation, question answering, and summarization.

Tenure letter 2019

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University and Director of the Stanford Artificial Intelligence Laboratory (SAIL). Manning is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL (2015). He has worked on Natural Language Processing since 1993 and is the most-cited researcher within the NLP field, with best paper awards at ACL, Coling, EMNLP, and CHI. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He is a leader in applying deep learning to NLP, with well-known work on sentiment analysis, dependency parsing, the GloVe model of word vectors, neural machine translation, question answering, and summarization.

Tenure letter 2020, 100 words

Christopher Manning is a professor of computer science and linguistics at Stanford University, Director of the Stanford Artificial Intelligence Lab, and an Associate Director of the Stanford Institute for Human-Centered AI. He is a leader in applying deep neural networks to Natural Language Processing, including work on tree-recursive models, neural machine translation, parsing, question answering, and the GloVe word vectors. Manning founded the Stanford NLP group (@stanfordnlp), developed Stanford Dependencies and Universal Dependencies, manages development of the Stanford CoreNLP software, is the most-cited researcher in NLP, and is an ACM, AAAI, and ACL Fellow and a Past President of ACL.

With teaching 2020

Christopher Manning is a professor of computer science and linguistics at Stanford University, Director of the Stanford Artificial Intelligence Lab (SAIL), and an Associate Director of the Stanford Institute for Human-Centered AI (HAI). He is a leader in applying deep neural networks to Natural Language Processing (NLP), including work on tree-recursive models, neural machine translation, parsing, question answering, and the GloVe word vectors. Manning founded the Stanford NLP group (@stanfordnlp), teaches and has written textbooks for NLP (CS 224N) and information retrieval (CS 276), co-developed Stanford Dependencies and Universal Dependencies, manages development of the Stanford CoreNLP software, is the most-cited researcher in NLP, and is an ACM, AAAI, and ACL Fellow and a Past President of ACL.

Longer 2021

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director of the Stanford Institute for Human-Centered AI (HAI). He is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL (2015). His research goal is computers that can intelligently process, understand, and generate human language material. Manning has worked on Natural Language Processing (NLP) since 1992 and is the most-cited researcher within NLP, with best paper awards at ACL, Coling, EMNLP, and CHI, and with well-known work on applying deep neural networks to NLP, including on tree-recursive models, neural machine translation, parsing, sentiment analysis, natural language inference, question answering, summarization, and the GloVe word vectors. He is the founder of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP software. Manning has coauthored leading textbooks on statistical approaches to Natural Language Processing (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He has a B.A. (Hons) from The Australian National University and a Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

NAACL 2022

Christopher Manning is a professor of linguistics and computer science at Stanford University, Director of the Stanford Artificial Intelligence Lab (SAIL), and an Associate Director of the Stanford Institute for Human-Centered AI (HAI). He is a leader in applying deep neural networks to natural language processing (NLP), including work on neural machine translation, tree-recursive models, natural language inference, summarization, parsing, question answering, and the GloVe word vectors. Manning founded the Stanford NLP group (@stanfordnlp), teaches and has co-written textbooks for NLP (CS 224N) and information retrieval (CS 276), co-developed Stanford Dependencies and Universal Dependencies, manages development of the Stanford CoreNLP and Stanza software, is the most-cited researcher in NLP, and is an ACM, AAAI, and ACL Fellow and a Past President of ACL.

Longer 2022

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), an Associate Director of the Stanford Institute for Human-Centered AI (HAI), and an Investment Partner at AIX Ventures. He is an ACM Fellow, a AAAI Fellow, an ACL Fellow, and Past President of the ACL (2015). His research goal is computers that can intelligently process, understand, and generate human language. Manning is the most-cited researcher within NLP, with best paper awards at ACL, Coling, EMNLP, and CHI and well-known work on applying deep neural networks to NLP, including neural machine translation, parsing, sentiment analysis, natural language inference, question answering, and summarization. He founded the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP and Stanza software. Manning has coauthored leading textbooks on statistical Natural Language Processing (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs. He has a B.A. (Hons) from The Australian National University and a Ph.D. from Stanford in 1994, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.

Shorter verbal 2022

Christopher Manning is a Professor in the Departments of Computer Science and Linguistics at Stanford University, Director of SAIL, the Stanford Artificial Intelligence Laboratory, and an Associate Director at HAI, the Stanford Institute for Human-Centered AI. His research is on computers that can intelligently process, understand, and generate human language. Chris is the most-cited researcher within NLP, with best paper awards at the ACL, Coling, EMNLP, and CHI conferences and very well-known work on applying deep neural networks to NLP. He founded the Stanford NLP group, has written widely used NLP textbooks, and teaches the popular NLP class CS224N, which is also available online.

Homepage bio mid 2023

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Linguistics and Computer Science at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). His research goal is computers that can intelligently process, understand, and generate human languages. Manning is a leader in applying Deep Learning to Natural Language Processing (NLP), with well-known research on the GloVe model of word vectors, attention, machine translation, question answering, self-supervised model pre-training, tree-recursive neural networks, machine reasoning, dependency parsing, sentiment analysis, and summarization. He also focuses on computational linguistic approaches to parsing, natural language inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning has coauthored leading textbooks on statistical approaches to NLP (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and a Past President of the ACL (2015). His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards. He has a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford in 1994, and an Honorary Doctorate from U. Amsterdam in 2023, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford. He is the founder of the Stanford NLP group (@stanfordnlp) and manages development of the Stanford CoreNLP and Stanza software.

Shorter 2023

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Computer Science and Linguistics at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). His research is on computers that can intelligently process, understand, and generate human language. Chris is the most-cited researcher within NLP, with best paper awards at the ACL, Coling, EMNLP, and CHI conferences and an ACL Test of Time award for his pioneering work on applying neural network or deep learning approaches to human language understanding. He founded the Stanford NLP group, has written widely used NLP textbooks, and teaches the popular NLP class CS224N, which is also available online.

Longer 2023

Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Linguistics and Computer Science at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). His research goal is computers that can intelligently process, understand, and generate human languages. Manning is best known as a leader in applying Deep Learning to Natural Language Processing (NLP), with well-known early research on the GloVe model of word vectors, attention, self-supervised model pre-training, tree-recursive neural networks, and machine reasoning. Earlier on, he worked on probabilistic NLP models for parsing, sequence tagging, and grammar induction, and he also focuses on computational linguistic approaches to natural language inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning manages development of the open-source Stanford CoreNLP and Stanza software, and his software and algorithms are used in the systems of many companies for tasks such as sentiment analysis, machine translation, dependency parsing, summarization, and question answering. Manning has coauthored leading textbooks on statistical approaches to NLP (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), as well as linguistic monographs on ergativity and complex predicates. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and a Past President of the ACL (2015). His research has won ACL, Coling, EMNLP, and CHI Best Paper Awards and an ACL Test of Time award. He has a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford in 1994, and an Honorary Doctorate from U. Amsterdam in 2023, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford, where he founded the Stanford NLP group (@stanfordnlp).

C3 AI 2024

Christopher Manning is the inaugural Thomas M. Siebel Professor of Machine Learning in the Departments of Linguistics and Computer Science at Stanford University, Director of the Stanford Artificial Intelligence Laboratory (SAIL), and an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). His research goal is computers that can intelligently process, understand, and generate human languages. Manning is a leader in applying Deep Learning to Natural Language Processing (NLP), with pioneering research on the GloVe model of word vectors, attention, self-supervised model pre-training, tree-recursive neural networks, machine reasoning, and direct preference optimization. Earlier on, he worked on probabilistic NLP models for parsing, sequence tagging, and grammar induction, and he also focuses on computational linguistic approaches to natural language inference and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning manages development of the open-source Stanford CoreNLP and Stanza software, and his software and algorithms are used in the systems of many companies for tasks such as sentiment analysis, machine translation, dependency parsing, summarization, and question answering. Manning has coauthored leading textbooks on statistical approaches to NLP (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), and teaches the popular NLP class CS224N, which is available online. He is an ACM Fellow, a AAAI Fellow, and an ACL Fellow, and a Past President of the ACL (2015). His research has won the 2024 IEEE John von Neumann Medal; ACL, Coling, EMNLP, and CHI Best Paper Awards; and a 2023 ACL Test of Time award. He has a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford in 1994, and an Honorary Doctorate from U. Amsterdam in 2023, and he held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford, where he founded the Stanford NLP group (@stanfordnlp).

