I use computational modeling to understand how language processing and language learning happen in the human mind. Some of the big questions that my research addresses are: What computations does our mind perform when we listen to a sentence? What are the features of the human learning algorithm that allow us to learn language so rapidly? What is universal about the way we process language, regardless of which individual language(s) we speak? And in the age of artificial intelligence, what is unique about the way that people process language?

Position Update: In Fall 2024, I will be joining the Linguistics Department at Georgetown University as an Assistant Professor of Computational Linguistics. Stay tuned for more news!

Currently, I am an ETH Postdoctoral Fellow at ETH Zürich, Switzerland. I am affiliated with Rycolab and the Language Reasoning and Education Lab, both in the Machine Learning Institute. Before moving to Zürich, I was a PhD student in the Department of Linguistics at Harvard University. While there, I was affiliated with the Computational Psycholinguistics Laboratory at MIT and the Meaning and Modality Laboratory at Harvard. I did my undergraduate work at Stanford University in the Symbolic Systems program, studying Computational Linguistics, as well as in the Slavic Literature department, where I wrote my honors thesis on the history of the Esperanto movement in the USSR.

News and Updates
👉 I am giving a talk at the ICML Workshop on Large Language Models and Cognition in July 2024! My talk is entitled "Language Modeling in People and Machines."
👉 I am giving a talk at the Workshop on Using Artificial Neural Networks for Studying Human Language Learning and Processing at the ILLC in Amsterdam. The title of my talk is "Using artificial neural networks to study human language processing: Two case studies and a warning."
👉 I have accepted a faculty position at Georgetown University, in the Department of Linguistics! I will be starting there in August 2024.
👉 I was awarded two Outstanding Paper Awards at EMNLP in Singapore, one for "Language Model Quality Correlates with Psychometric Predictive Power in Multiple Languages" and one for "Revisiting the Optimality of Word Lengths."
👉 I am presenting two posters at AMLaP 2023: "Mouse tracking while reading (MoTR): A new incremental processing measurement" and "An information-theoretic explanation of regressions during reading."
👉 I am giving a talk at The Fourth International Conference on Theoretical East Asian Psycholinguistics (ICTEAP-4) on August 18, 2023. The title of my talk is "Language models as cognitive models: The cases of syntactic generalization and real-time language comprehension."
👉 My research is featured in this New York Times article about the BabyLM Challenge! Check it out to learn how we're trying to make AI more accessible and more humanlike.