Very large pre-trained models with billions (or even trillions) of parameters have made conversational AI models much better at engaging, fluent general conversation. However, conversational agents still behave in problematic ways. The faults I will focus on today are that they confidently assert falsehoods, can say all kinds of toxic or unsafe things, and are generally not especially well-behaved. I will go over work we have done at Facebook AI Research to assess and improve overall chatting ability, make chatbots slightly better-behaved by getting them to own their ignorance, and steer them away from offensive utterances.
Arxiv links for the work covered:
Y-Lan Boureau is a research scientist at Facebook Artificial Intelligence Research, working on dialog systems, grounding, and communication, as well as deep learning more generally. Prior to that, she was a post-doctoral researcher and a Simons junior fellow at New York University. She obtained her PhD in 2012, advised by Yann LeCun and Jean Ponce.