The rapid advancement of large language models (LLMs) has unlocked a myriad of possibilities for positive societal impact, ranging from enhancing accessibility and communication to supporting disaster response and public health initiatives. However, the deployment of these technologies also raises critical concerns regarding accountability, fairness, transparency, and ethical use. In this talk, I will discuss our efforts on auditing NLP models, detecting and mitigating biases, and understanding how LLMs make decisions. We hope to open the conversation and foster a community-wide effort towards more accountable and inclusive NLP practices.
Jieyu Zhao is an assistant professor in the Computer Science Department at the University of Southern California, where she leads the LIME lab. Prior to that, she was an NSF Computing Innovation Fellow at the University of Maryland, College Park. Jieyu received her Ph.D. from the Computer Science Department at UCLA. Her research interests lie in the fairness of ML/NLP models. Her research has been covered by news media such as Wired and The Daily Mail. She was invited by UN-WOMEN Beijing to join a panel discussion on gender equality and social responsibility.