This talk is part of the NLP Seminar Series.

Teach Language Models to Reason

Denny Zhou, Google Brain
Date: 11:00am - 12:00pm, March 2nd, 2023
Venue: Room 287, Gates Computer Science Building; Zoom (link hidden)

Abstract

LLMs have caused a huge paradigm shift in NLP/AI research. Using Google's newest LLM, PaLM-540B, chain-of-thought prompting (CoT) combined with self-consistency decoding (SC) has demonstrated striking performance on many NLP/AI tasks, surpassing state-of-the-art results in the literature with only 1% or even 0.1% of the annotated examples while remaining fully interpretable. More recently, least-to-most prompting, which decomposes complex problems into simpler subproblems, has effectively solved compositional generalization benchmarks such as SCAN and CFQ. Furthermore, instruction tuning (FLAN2) greatly improves the zero-shot ability of LLMs and plays a pivotal role in building conversational LLMs. In this talk, I'll present these techniques that we have developed.
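For readers unfamiliar with these techniques, below is a minimal sketch of chain-of-thought prompting combined with self-consistency decoding: sample several reasoning paths at nonzero temperature, then majority-vote over the final answers. The sample_completion function, the prompt wording, and the answer-extraction pattern are illustrative assumptions, not the speaker's actual implementation; the exemplar question is the well-known one from the chain-of-thought paper.

import re
from collections import Counter

# One-shot CoT exemplar: a worked reasoning chain precedes the answer.
COT_PROMPT = """Q: Roger has 5 tennis balls. He buys 2 more cans of
tennis balls. Each can has 3 tennis balls. How many tennis balls
does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

def sample_completion(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in for an LLM sampling API; replace with a real call."""
    raise NotImplementedError

def extract_answer(completion: str) -> str | None:
    """Pull the final answer from a 'The answer is X.' pattern (assumed format)."""
    match = re.search(r"The answer is\s+([^.\n]+)", completion)
    return match.group(1).strip() if match else None

def self_consistency(question: str, num_samples: int = 20) -> str | None:
    """Sample several reasoning paths, then majority-vote the extracted answers."""
    prompt = COT_PROMPT.format(question=question)
    answers = [a for a in (extract_answer(sample_completion(prompt))
                           for _ in range(num_samples)) if a is not None]
    return Counter(answers).most_common(1)[0][0] if answers else None

Least-to-most prompting can be sketched with the same assumed API: first prompt the model to decompose the problem, then solve the subproblems in order, feeding earlier answers back into the context.

def least_to_most(question: str) -> str:
    # Stage 1: decomposition (prompt wording is illustrative).
    decompose_prompt = ("Break the following problem into simpler "
                        f"subproblems, one per line:\n{question}")
    subproblems = sample_completion(decompose_prompt, temperature=0.0).splitlines()

    # Stage 2: solve subproblems sequentially, so that later steps
    # can condition on earlier answers.
    context, answer = question, ""
    for sub in (s.strip() for s in subproblems):
        if not sub:
            continue
        solve_prompt = f"{context}\n\nQ: {sub}\nA:"
        answer = sample_completion(solve_prompt, temperature=0.0)
        context = f"{solve_prompt} {answer}"
    return answer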

Bio

Denny Zhou is a Principal Scientist / Research Director at Google Brain, where he founded and leads the Reasoning team with a focus on LLM reasoning. He also led SpreadSheetCoder, which has been integrated into Google Sheets to automatically generate formulas for users, and MobileBERT, which has been adopted in mobile apps. Denny received the Google Research Impact Award and the WSDM 2022 Test of Time Award. He has served as a (senior) area chair for NeurIPS, ICML, and ICLR. For more information, please see his homepage, Google Scholar, and Twitter.