This talk is part of the NLP Seminar Series.

Principles of Compositionality Improve Systematic Generalization of Neural Networks

Róbert Csordás, Swiss AI lab IDSIA
Date: 11:00 am - 12:00 pm PT, June 2, 2022
Venue: Zoom (link hidden)

Abstract

Systematic generalization is one of the most important open problems in neural models: a model trained to solve a certain problem will often fail on a test problem whose statistics differ from those of the training data, even when the same algorithm should solve both. This indicates that the neural network relies on superficial statistics and memorization rather than learning the underlying algorithm. In contrast, the workhorse of human problem solving is composition: it allows us to recombine solutions to known subproblems to solve problems we have never seen before. By introducing simple properties that seem essential for compositionality into the model architecture, we show dramatic improvements in the generalization ability of transformers.
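
As a concrete illustration of the failure mode described above, the following minimal Python sketch builds a length-based compositional split in the spirit of benchmarks such as SCAN. It is not code from the talk, and all names in it (PRIMITIVES, interpret, make_split) are hypothetical: the training set contains only short compositions of primitive commands, while the test set contains strictly longer ones, so the test statistics differ from the training statistics even though the same word-by-word rule solves both.

    # Illustrative sketch only, not code from the talk: a length-based
    # compositional split. A model that memorizes short training patterns
    # fails on the longer test compositions; a model that learns the
    # word-by-word rule generalizes to them.
    import itertools
    import random

    PRIMITIVES = {"jump": "JUMP", "walk": "WALK", "look": "LOOK", "run": "RUN"}

    def interpret(command):
        # The same simple rule applies to every word, regardless of length.
        return [PRIMITIVES[w] for w in command]

    def make_split(max_train_len=2, max_test_len=4):
        # Train: compositions of up to max_train_len primitives.
        # Test: strictly longer compositions -- different statistics,
        # same underlying algorithm.
        words = list(PRIMITIVES)
        train, test = [], []
        for n in range(1, max_test_len + 1):
            for cmd in itertools.product(words, repeat=n):
                pair = (list(cmd), interpret(cmd))
                (train if n <= max_train_len else test).append(pair)
        return train, test

    if __name__ == "__main__":
        train, test = make_split()
        random.seed(0)
        print("train example:", random.choice(train))
        print("test example: ", random.choice(test))

On splits like this, standard sequence models often degrade sharply on the held-out longer compositions, which is exactly the gap between pattern matching and algorithm learning that the abstract describes.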

Bio

Róbert is a PhD candidate at the Swiss AI lab IDSIA, working with Prof. Jürgen Schmidhuber. He works on systematic generalization, mainly in the context of algorithmic reasoning. This drives his research interest in network architectures (Transformers, DNC, graph networks) with inductive biases such as information routing (attention, memory) and learned modular structure. His goal is to create a system that learns generally applicable rules rather than relying on pure pattern matching, while requiring minimal hardcoded structure. He considers the lack of systematic generalization the main obstacle to more generally applicable artificial intelligence. Before starting his PhD, he received a master's degree from the Budapest University of Technology and Economics and worked as a research scientist at AImotive on self-driving cars.