Communication is an intricate dance, an ensemble of coordinated individual actions. Imagine a future where machines interact with us like humans: waking us up in the morning, navigating us to work, or discussing our daily schedules in a coordinated and natural manner. Current interactive systems developed by Apple, Google, Microsoft, and Amazon attempt to reach this goal by combining large sets of single-task systems. But products like Siri, Google Now, Cortana, and Echo still follow pre-specified agendas: they cannot transition smoothly between tasks or track and adapt naturally to different users. My research draws on recent developments in speech and natural language processing, human-computer interaction, and machine learning to work toward the goal of developing situated intelligent interactive systems that coordinate with users to achieve effective and natural interactions. I have successfully applied the proposed concepts to a variety of tasks, including social conversation, job interview training, and movie promotion. My team's proposal on engaging social conversation systems was selected to receive $100,000 from Amazon to compete in the Amazon Alexa Prize Challenge.
Zhou is an Assistant Professor in the Computer Science Department at UC Davis. She received her PhD from the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University, working with Prof. Alan W Black and Prof. Alexander I. Rudnicky. She interned with Prof. David Suendermann-Oeft at the ETS San Francisco office on cloud-based multimodal dialog systems in the summers of 2015 and 2016, and with Dan Bohus and Eric Horvitz at Microsoft Research on human-robot interaction in fall 2014. Prior to CMU, she received a B.S. in Computer Science and a B.A. in Linguistics from Zhejiang University in 2011, where she worked with Prof. Xiaofei He and Prof. Deng Cai on machine learning and computer vision, and with Prof. Yunhua Qu on machine translation.