Signed languages are complex natural languages that rely on multiple articulators moving simultaneously in 3D space. Despite recent advances in AI and language modeling, developing technology that can truly understand and work with signed languages remains challenging. This talk will introduce the unique linguistic properties of American Sign Language (ASL) that challenge traditional language modeling approaches, and discuss current progress in signed language technology. Then, I will dive deeper into two of our recent works that address challenges in signed language NLP and explore exciting linguistic opportunities: (1) NLP tools that make STEM education more accessible to deaf and hard-of-hearing students, and (2) computational linguistic methods to investigate pressures for communicative efficiency in ASL handshapes. Finally, I will conclude with a brief preview of our ongoing work (hint: cognitive science of sign language models!?) and future directions for signed language NLP research. I hope this talk will showcase how signed language modeling is a rich and exciting research area, especially for NLP academics!
Kayo Yin is a PhD student at UC Berkeley advised by Jacob Steinhardt and Dan Klein. She currently works on LLM interpretability and NLP for signed languages. Before that, she was a Master’s student at Carnegie Mellon University advised by Graham Neubig, and she completed her undergraduate studies at École Polytechnique. Her research has been recognized by the ACL 2023 Best Resource Paper award, the EMNLP 2022 Best Paper Honorable Mention award, the ACL 2021 Best Theme Paper award, a Siebel Scholarship, the Thomas Clarkson medal, and the Vitalik Buterin Fellowship on AI Safety.