In this presentation, I will explore language emergence with deep reinforcement learning. First, we will highlight the research goals behind language emergence, give an overview of the classic setting, and provide a quick recap of RL methods. Second, we will point out that the field has largely stuck to small-scale experiments, which may have led to some misleading conclusions. We will thus explore how to scale up the emergent-language framework and detail the many takeaways from this journey: stabilizing RL in large language games, which metrics scale, etc. Finally, we will tackle the hypothesis that larger agent populations may lead to higher-quality language, and thus simulate a large population.
Florian Strub is a research scientist at DeepMind. He did his Ph.D. at the University of Lille, in the Inria SequeL team. He works on interleaving ideas from computer vision, natural language, and reinforcement learning to design new research settings, and has recently studied game-theoretic techniques for language emergence.