There is currently a gap between how conversational agents are developed in industry and in research. Industry practice emphasizes full control and explainability of the agent's behavior, while researchers favor data-driven approaches that scale more easily but offer weaker guarantees about how the agent behaves. Collecting high-quality datasets for task-oriented dialogue also remains a difficult challenge. I will present an approach that combines automation, in the form of "dialogue self-play" between an agent and a simulated user, with human intelligence, in the form of targeted crowdsourcing, to rapidly bootstrap conversational agents in arbitrary domains. This yields greater control over the agent's behavior while retaining a data-driven method for improving the agent with experience.
Pararth is a Research Engineer at Google AI, working on deep-learning-based methods for building conversational agents. His research focuses on applying reinforcement learning to optimize dialogue agents, using reward signals derived from interactions with users. Previously, he was an early employee at a startup building a more intelligent software development platform. He completed his Master's and Bachelor's degrees in Computer Science at Stanford and IIT-Bombay, respectively.