This talk is part of the NLP Seminar Series. Unfortunately, this week's talk is not open to the public.

Pitfalls of static language modelling

Angeliki Lazaridou, DeepMind
Date: 10:00am - 11:00am PT, Feb 11 2021
Venue: Zoom (link hidden)

Abstract

Our world is open-ended, non-stationary, and constantly evolving; thus what we talk about, and how we talk about it, changes over time. This inherently dynamic nature of language stands in stark contrast to the current static language modelling paradigm, which constructs training and evaluation sets from overlapping time periods. In this talk, I will describe our experiments and results on taking current state-of-the-art models and placing them in the realistic scenario of predicting future utterances from beyond the models' training period. In particular, we find that Transformer models perform worse in this realistic setup, a pattern consistent across three datasets from two domains. We also find that increasing model size alone, a key driver behind recent progress, does not solve the temporal generalization problem, whereas models that continually update their knowledge with new information can indeed slow down the degradation over time. Finally, I will discuss ideas around adaptive language models that can remain up-to-date with respect to our ever-changing and non-stationary world.
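To make the evaluation setup concrete, here is a minimal sketch (not from the talk itself) of a temporal split, in contrast to the standard static setup: the model is trained only on documents time-stamped before a cutoff and evaluated on documents from strictly later periods. The `documents` list, its `timestamp` and `text` fields, and the `perplexity` and `train_language_model` names are hypothetical placeholders.

```python
from datetime import date

# Hypothetical corpus: each item has a timestamp and text (placeholder data).
documents = [
    {"timestamp": date(2018, 6, 1), "text": "..."},
    {"timestamp": date(2019, 3, 1), "text": "..."},
    {"timestamp": date(2020, 1, 1), "text": "..."},
]

CUTOFF = date(2019, 1, 1)

# Static setup: train and eval sets are drawn from overlapping time periods (e.g. a random split).
# Temporal setup sketched here: train strictly before the cutoff, evaluate on the future.
train_docs = [d for d in documents if d["timestamp"] < CUTOFF]
future_docs = [d for d in documents if d["timestamp"] >= CUTOFF]

def perplexity(model, docs):
    """Placeholder for a language-model evaluation (e.g. perplexity on held-out docs)."""
    raise NotImplementedError

# model = train_language_model(train_docs)   # hypothetical training call
# print(perplexity(model, future_docs))      # temporal degradation would show up here
```

The point of the split is that evaluation documents post-date everything the model has seen, so any drop in performance reflects how poorly a static model keeps up with a changing world.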

Bio

Angeliki is a research scientist at DeepMind, where she works on grounded language understanding and communication, especially in multi-agent systems. She received her PhD from the University of Trento and her MSc in Computational Linguistics from Saarland University.