Building assistant technologies requires the ability to answer any question and to scale up NLP model development by training on just a few examples. In this presentation, we will cover recent research in question answering and fine-tuning; specifically, understanding fine-tuning (Outstanding Paper Award at ACL 2021) and bridging the gap between dense and sparse question answering retrievers.
Sonal leads a team of research scientists and machine learning engineers on the Facebook Conversational AI team working on smarter language understanding. She is interested in building machine learning models for conversational AI systems. Previously, she worked on Viv's NLP platform, which was acquired by Samsung and released as Bixby 2.0 and the Bixby Developer Center. She received a PhD in NLP from Stanford University, where she worked on bootstrapping information extraction from very few examples, and holds a Master's degree from UT Austin, where she worked on multi-modal vision+NLP models.