Generalisation, like many other concepts, seems to have a very clear meaning until you start to really think about it. In this talk, I discuss what generalisation means for NLP models, what our options are for evaluating it, and what the main challenges are when evaluating generalisation in models trained on realistic data. To illustrate and further explore these challenges, I examine one particular type of generalisation in detail: compositional generalisation. I discuss the concerns that arise when evaluating it and the challenges of considering compositional generalisation in natural language.
Dieuwke Hupkes is a research scientist at Facebook AI Research in Paris. Previously, she was a post-doctoral researcher at the University of Amsterdam. In her research, she studies (neural) models of language processing, into which she tries to incorporate knowledge from linguistics and the philosophy of language. She is particularly excited about what neural models might teach us about language and human language processing.