This talk is part of the NLP Seminar Series.

Language Agents: Past, Present and Future

Karthik Narasimhan, Princeton University
Date: January 11th, 2024, 11:00am - 12:00pm
Venue: Room 287, Gates Computer Science Building

Abstract

While large language models (LLMs) have raised text generation and conversational systems to a new level, they also present exciting new opportunities for building artificial agents with improved decision-making capabilities. In particular, the ability to reason with language allows us to build agents that execute complex action sequences and learn new skills by "reading" in addition to "doing". In this talk, I will discuss our work in the rapidly growing area of language agents, describing their trajectory from past (pre-LLM) efforts to current systems, and presenting potential directions for future improvements.

Bio

Karthik Narasimhan is an assistant professor of Computer Science at Princeton University, where he also serves as co-director of the Princeton NLP group and associate director of Princeton Language & Intelligence (PLI). His research spans natural language processing and reinforcement learning, with the goal of building intelligent agents that learn to operate in the world both through their own experience ("doing things") and by leveraging existing human knowledge ("reading about things"). Karthik received his PhD from MIT in 2017 and spent a year as a visiting research scientist at OpenAI, contributing to the very first GPT language model, before joining Princeton in 2018. His research has been recognized by an NSF CAREER award, a Google Research Scholar Award, an Amazon Research Award (2019), a Bell Labs runner-up prize, and outstanding paper awards at EMNLP (2015, 2016) and NeurIPS (2022).