This talk is part of the NLP Seminar Series.

Seeing Like a Language Model

Ari Holtzman, University of Chicago
Time: 11:00am - 12:00 noon PT, Thursday, April 2nd
Venue: Room 287, Gates Computer Science Building
Zoom: https://stanford.zoom.us/j/93941842999?pwd=vH7x9wB9bfuIaV1HnQthRmqA8BKTGh.1

Abstract

How does a language model perceive its input? What aspects of reality does it find legible, and which elude it? How can we know? Current approaches to studying LLMs, focused as they are on engineering progress, are insufficiently exploratory. I will discuss new approaches we have been incubating; consider, more conceptually, what it means for interpretability to be predictive rather than mechanistic; defend prompting as a form of scientific inquiry; and caution against formalizing concepts too early, before doing the required amount of stamp collecting.

Bio

Ari Holtzman is an Assistant Professor of Computer Science and Data Science at the University of Chicago, where he leads the Conceptualization Lab. His motto is: "I'm doing it with LLMs or I'm not doing it at all."