I use computational modeling technology to understand how language processing happens in the human mind. Some of the big questions that my research addresses are: What computations does our mind perform when we listen to a sentence? What is universal about the way we process language, regardless of what individual language(s) we speak? And in the age of artificial intelligence, what is unique about the way that people process language?

Currently, I am an ETH Postdoctoral Fellow at ETH Zürich in Switzerland, where I am affiliated with Rycolab and the Language Reasoning and Education Lab, both in the Machine Learning Institute. Before moving to Zürich, I was a PhD student in the Department of Linguistics at Harvard University. While there, I was affiliated with the Computational Psycholinguistics Laboratory at MIT and the Meaning and Modality Laboratory at Harvard. I did my undergraduate work at Stanford University in the Symbolic Systems program, studying Computational Linguistics, as well as in the Slavic Literature department, where I wrote my honors thesis on the history of the Esperanto movement in the USSR.

News and Updates
👉 I was awarded two outstanding paper awards at EMNLP in Singapore, one for "Language Model Quality Correlates with Psychometric Predictive Power in Multiple Languages" and one for "Revisiting the Optimality of Word Lengths."
👉 I am presenting two posters at AMLaP 2023: "Mouse tracking while reading (MoTR): A new incremental processing measurement" and "An information-theoretic explanation of regressions during reading."
👉 I am giving a talk at The Fourth International Conference on Theoretical East Asian Psycholinguistics (ICTEAP-4) on August 18, 2023. The title of my talk is "Language models as cognitive models: The cases of syntactic generalization and real-time language comprehension."
👉 My research is featured in this New York Times article about the BabyLM Challenge! Check it out to learn how we're trying to make AI more accessible and also more humanlike.