Isabel Papadimitriou

(she)


Email: isabelvp at stanford

My Google Scholar page

toizzy on GitHub

isabelpapad on Twitter

Hello!

I'm a sixth-year PhD student (on the job market!) at Stanford in the Natural Language Processing group, advised by Dan Jurafsky.

I work on understanding and defining the capabilities of large language models in relation to the human language system. My research focuses on creating empirical testing grounds for understanding how language models 1) learn and 2) use abstract structure and generalizations.

I am especially interested in pursuing an interdisciplinary research program, combining empirical machine learning methods with theories of structure and meaning in human language. My principal interests include: how language models learn and use generalizable grammatical abstractions, the interaction between structure and meaning representations in high-dimensional vector spaces, and using multilingual settings to test the limits of abstraction in language models.

I am funded by an NSF GRFP fellowship and a Stanford Graduate Fellowship.

I did my undergraduate degree at Berkeley, where I earned BAs in Computer Science and History. My history thesis was based on research at the archives of the League For Democracy in Greece, a London-based solidarity organization that supported the left in the Greek Civil War. It received the Kirk Underhill Prize.

Talks and News

Georgia Tech Modern Languages Colloquium, Nov 2023 [slides]

I was selected for the 2023 Rising Stars in EECS

Stanford CS 224N, "Insights between NLP and Linguistics" [slides]

Brown Computer Science, Carnegie Mellon LTI, July 2023

Decoding Communication in Nonhuman Species Workshop, June 2023 [slides] [recording]

NYU CAP Lab, Apr 2023

Cornell University C.Psyd Group, Dec 2022

SIGTYP 2022 keynote, July 2022 [slides] [recording]

UT Austin Computational Linguistics Group, April 2022

UC Santa Barbara Computational Linguistics Group, October 2020

Papers

Mission: Impossible Language Models - Julie Kallini, Isabel Papadimitriou, Richard Futrell, Kyle Mahowald, and Christopher Potts (preprint)

Injecting structural hints: Using language models to study inductive biases in language learning - Isabel Papadimitriou and Dan Jurafsky, Findings of EMNLP 2023

Multilingual BERT has an accent: Evaluating English influences on fluency in multilingual models - Isabel Papadimitriou*, Kezia Lopez*, and Dan Jurafsky, Findings of EACL 2023, SIGTYP 2023 [slides]

Oolong: Investigating What Makes Crosslingual Transfer Hard with Controlled Studies - Zhengxuan Wu*, Isabel Papadimitriou*, and Alex Tamkin*, EMNLP 2023 [pdf]

The Greek possessive modal eho as a special agentive modality - Isabel Papadimitriou and Cleo Condoravdi, LSA 2023 (poster) [abstract]

When classifying grammatical role, BERT doesn't care about word order... except when it matters - Isabel Papadimitriou, Richard Futrell, and Kyle Mahowald, ACL 2022 (oral presentation) [pdf] [code]

Language, Section 2.1 - Isabel Papadimitriou and Christopher D. Manning

In On the Opportunities and Risks of Foundation Models (full list of co-authors)

Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT - Isabel Papadimitriou, Ethan A. Chi, Richard Futrell, and Kyle Mahowald, EACL 2021 (oral presentation) [pdf] [code]

Learning Music Helps You Read: Using transfer to study linguistic structure in language models - Isabel Papadimitriou and Dan Jurafsky, EMNLP 2020 (oral presentation) [pdf] [code]

Teaching

I am a TA for CS324H, History of Natural Language Processing, taught by Dan Jurafsky and Chris Manning in Winter 2024

I was a TA for CS224N, Natural Language Processing with Deep Learning, taught by Chris Manning in Winter 2023

I was a TA for the Independent Study in Machine Translation seminar taught by Noah Goodman in Winter 2020

At Berkeley, I was a TA for CS70, Discrete Math and Probability, taught by Satish Rao and Jean Walrand in Fall 2015

The template is by Vasilios Mavroudis. Thanks!