Learning to Learn without Forgetting

Unifying Continual Learning and Meta-Learning
with Meta-Experience Replay


Paper

Learning to Learn without Forgetting

by Maximizing Transfer and Minimizing Interference

by Matt Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, Gerald Tesauro

We have recently developed Meta-Experience Replay (MER), a new framework for integrating meta-learning and experience replay for continual learning. It combines an efficient meta-learning algorithm called Reptile with a widely successful technique for stabilizing reinforcement learning called Experience Replay. Meta-Experience Replay achieves state-of-the-art performance on continual learning benchmarks and is mathematically similar to Gradient Episodic Memory. Our algorithm directly optimizes a new objective for deep continual learning: maximizing transfer between examples while minimizing interference. We hope it will motivate new research on applying meta-learning to continual learning settings.
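As a rough illustration of how these pieces fit together, the core update can be sketched as a Reptile-style interpolation applied to sequential SGD steps over a batch drawn from a reservoir-sampled replay buffer. The toy scalar-regression model, function names, and hyperparameters below are illustrative assumptions for this sketch, not the exact algorithm or interface from the paper:

```python
import numpy as np

def reservoir_update(buffer, item, seen, capacity, rng):
    """Reservoir sampling: keep a uniform random sample of the stream."""
    if len(buffer) < capacity:
        buffer.append(item)
    else:
        j = rng.integers(0, seen + 1)
        if j < capacity:
            buffer[j] = item
    return buffer

def mer_step(theta, current_example, buffer, lr=0.1, gamma=0.3, k=4, rng=None):
    """One MER-style update on a toy linear regression model (illustrative).

    Sample up to k examples from the replay buffer plus the current example,
    take sequential SGD steps on each, then move the weights a fraction gamma
    of the way from the starting point toward the adapted weights (the
    Reptile-style meta-update).
    """
    rng = rng or np.random.default_rng(0)
    theta_before = theta.copy()
    batch = [current_example]
    if buffer:
        idx = rng.integers(0, len(buffer), size=min(k, len(buffer)))
        batch = [buffer[i] for i in idx] + batch
    for x, y in batch:
        grad = 2.0 * (theta @ x - y) * x  # squared-error gradient
        theta = theta - lr * grad
    # Reptile-style meta-update: interpolate toward the adapted weights
    return theta_before + gamma * (theta - theta_before)
```

In a full continual learning loop, each incoming example would both trigger an `mer_step` and be offered to the buffer via `reservoir_update`, so the buffer approximates a uniform sample over everything seen so far.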

Read more in our blog post at ibm.com | Download the ICLR paper

Download the datasets and code

We used several variants of MNIST in our experiments, including Rotations, Permutations, and Many Permutations. Direct links to the datasets are provided below for reference, but it is usually more convenient to use the scripts included with the code, which automatically download and uncompress all three datasets (30 GB of storage needed). The code, linked below, would need only minor extensions to be applied to other interesting continual learning benchmarks such as CIFAR-100 or Omniglot.

The original MNIST database is available here. The terms of use of the MNIST variants below follow the terms of use for the original MNIST distribution as stated here. Like the original MNIST dataset, the variants are therefore made available under the terms of the Creative Commons Attribution-Share Alike 3.0 license.

The interface for generating your own MNIST variants is provided as part of the GEM project. To maximize reproducibility, we have provided an interface for directly downloading the dataset versions used in our experiments.

MNIST Permutations

MNIST Permutations is a variant of MNIST first proposed in Kirkpatrick et al. (2017) where each task is defined by a fixed permutation applied to the MNIST pixels. As a result, the input distributions of the tasks are unrelated to one another.
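The task construction can be sketched in a few lines of NumPy: one random pixel permutation is drawn per task and applied to every flattened digit. The function name and seeding scheme here are illustrative assumptions, not the exact interface provided by the GEM code:

```python
import numpy as np

def make_permuted_task(images, seed):
    """Create one Permuted MNIST task.

    images: array of shape (n_examples, n_pixels), e.g. flattened 28x28 digits.
    A single random permutation (fixed per task via the seed) is applied to
    the pixel dimension of every image, leaving the labels unchanged.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(images.shape[1])  # one fixed permutation per task
    return images[:, perm]

# e.g. 20 tasks, each defined by its own fixed pixel permutation:
# tasks = [make_permuted_task(flat_mnist, seed=t) for t in range(20)]
```

Because the permutation is fixed within a task but differs across tasks, each task is individually learnable while sharing no spatial structure with the others.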

Download MNIST Permutations

MNIST Rotations

MNIST Rotations is another variant of MNIST, proposed in Lopez-Paz & Ranzato (2017), where each task contains digits rotated by a fixed angle between 0 and 180 degrees. We follow the standard benchmark setting from Lopez-Paz & Ranzato (2017) for both MNIST Permutations and MNIST Rotations. In this setting, the training data consists of 1000 sampled examples for each of 20 tasks, where each task is created by a random permutation or rotation of the MNIST digits.
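A rotated task can be sketched analogously, rotating every digit by the same fixed angle. This sketch uses `scipy.ndimage.rotate` for the interpolation; the function name and angle-sampling scheme are illustrative assumptions, and this is not the exact pipeline from the GEM code:

```python
import numpy as np
from scipy.ndimage import rotate

def make_rotated_task(images, angle):
    """Create one Rotated MNIST task.

    images: array of shape (n_examples, height, width), e.g. 28x28 digits.
    Every image is rotated by the same fixed angle (in degrees), with
    bilinear interpolation and the original image shape preserved.
    """
    return np.stack([rotate(img, angle, reshape=False, order=1)
                     for img in images])

# e.g. 20 tasks with angles drawn uniformly from [0, 180):
# angles = np.random.default_rng(0).uniform(0, 180, size=20)
# tasks = [make_rotated_task(mnist_images, a) for a in angles]
```

Unlike the permutation variant, rotated tasks retain the spatial structure of the digits, so there is more potential for transfer between tasks.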

Download MNIST Rotations

MNIST Many Permutations

MNIST Many Permutations is a more challenging variant of MNIST Permutations that we developed for our experiments. It is more non-stationary than the standard MNIST Permutations benchmark: it includes 5 times as many tasks (100) and 5 times fewer examples per task (200), resulting in much more frequent task switches.

Download MNIST Many Permutations

Code

We release our code, which generates these datasets as an extension of the GEM project, to ease reproducibility. We additionally provide the exact data used in these benchmarks to further ensure that our results can be reproduced.

Download code

Contact Us

Matthew Riemer | Ignacio Cases

IBM Research | Stanford University | MIT