This talk is part of the NLP Seminar Series.

Finetuned Language Models are Zero-Shot Learners

Jason Wei, Google Brain
Date: Jan 20, 2022, 11:00 am - 12:00 pm PT
Venue: Zoom (link hidden)

Abstract

This talk will be about our paper "Finetuned Language Models are Zero-Shot Learners", which explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning—finetuning language models on a collection of tasks described via instructions—substantially boosts zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves on the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 tasks that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that the number of finetuning tasks and model scale are key components to the success of instruction tuning.
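To give a concrete sense of what "verbalized via natural language instruction templates" means, here is a minimal sketch in Python. The template, wording, and example below are illustrative assumptions for this announcement, not FLAN's actual templates or training code: the idea is that each labeled example from a task is rendered as an instruction-style prompt, and the model is finetuned to produce the target answer so that it can follow similar instructions zero-shot on unseen task types.

```python
# Illustrative sketch (not FLAN's actual templates): rendering a labeled
# natural language inference (NLI) example as an instruction-style prompt.

# A hypothetical instruction template for NLI.
NLI_TEMPLATE = (
    "Premise: {premise}\n"
    "Hypothesis: {hypothesis}\n"
    "Does the premise entail the hypothesis? Answer yes, no, or maybe."
)

def verbalize_nli(premise: str, hypothesis: str) -> str:
    """Render one NLI example as a natural-language instruction prompt."""
    return NLI_TEMPLATE.format(premise=premise, hypothesis=hypothesis)

if __name__ == "__main__":
    prompt = verbalize_nli(
        premise="A soccer game with multiple males playing.",
        hypothesis="Some men are playing a sport.",
    )
    print(prompt)
    # During instruction tuning, many tasks are verbalized this way and the
    # model is finetuned to generate the target answer (here: "yes").
    # At evaluation time, held-out task types are prompted with the same
    # kind of instruction, with no task-specific finetuning.
```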

Bio

Jason Wei is a research engineer at Google Brain working on natural language processing. His research interests center on large language models—their emergent capabilities, how to make them useful, and how to accurately characterize their strengths and limitations. Other themes in his research include using computational models of language as a window into the human brain. Before Google, he received his AB in computer science from Dartmouth College.