Computational Syntax

To what extent are syntactic biases innate in the human language faculty? And how are they deployed during language processing and language learning? This research program uses recent developments in language modeling to provide new empirical insight into these questions. By assessing how architecture and training data shape what contemporary neural language models learn, we can uncover the computational properties that guide language use in humans.
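
As a concrete illustration of the method, the sketch below runs a targeted syntactic evaluation of the kind used in this line of work: comparing a model's probability for the grammatical versus the ungrammatical member of a minimal pair. It assumes the Hugging Face transformers package and GPT-2; the subject-verb agreement pair is an illustrative example, not an item from the papers below.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative model choice; any autoregressive LM would do.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def total_log_prob(sentence: str) -> float:
    """Sum of log P(token_t | tokens_<t) over the whole sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    # Logits at position t-1 predict the token at position t.
    token_lps = log_probs[0, :-1].gather(1, ids[0, 1:, None]).squeeze(1)
    return token_lps.sum().item()

# Subject-verb agreement with an intervening distractor noun.
grammatical = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."
print(total_log_prob(grammatical) > total_log_prob(ungrammatical))
# A model that has learned the agreement dependency should print True.
```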


Representative Publications
[•] Ethan Wilcox, Peng Qian, Richard Futrell, Ryosuke Kohita, Roger P. Levy, and Miguel Ballesteros. 2020. Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models. EMNLP 2020
[•] Jennifer Hu, Jon Gauthier, Ethan Wilcox, Peng Qian, and Roger Levy. 2020. A Systematic Assessment of Syntactic Generalization in Neural Language Models. ACL 2020
[•] Ethan Wilcox, Roger Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN Language Models Learn about the Filler-Gap Dependency? Proceedings of the BlackboxNLP Workshop at EMNLP 2018


Experimental Semantics: Presupposition and Exhaustivity

The interpretation of linguistic utterances depends both on the logical form of the utterance (its semantics) and on the context in which it was produced. In this work I use methods from experimental psycholinguistics as well as Bayesian modeling to develop and evaluate theories of presupposition, presupposition accommodation, and exhaustivity. Specifically, I ask how listeners cooperatively adjust their representations of common ground to accommodate assumptions that speakers have made during discourse.
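
The Bayesian side of this work builds on the Rational Speech Act (RSA) framework. The toy sketch below, with an illustrative three-world domain, uniform prior, and rationality parameter (none of which are values from the paper below), shows how the exhaustive "some but not all" reading of "some" falls out of recursive speaker-listener reasoning; the prior over worlds is exactly the ingredient whose role the CogSci paper investigates.

```python
import numpy as np

worlds = ["none", "some-but-not-all", "all"]
utterances = ["none", "some", "all"]

# Literal semantics: truth[u][w] = 1 iff utterance u is true in world w.
# Note that "some" is weak: it is compatible with the "all" world.
truth = np.array([
    [1, 0, 0],   # "none"
    [0, 1, 1],   # "some"
    [0, 0, 1],   # "all"
], dtype=float)

prior = np.array([1/3, 1/3, 1/3])  # illustrative uniform prior over worlds
alpha = 4.0                        # illustrative speaker rationality

def normalize(x):
    return x / x.sum(axis=1, keepdims=True)

# Literal listener: L0(w | u) is proportional to P(w) * [[u]](w)
L0 = normalize(truth * prior)

# Pragmatic speaker: S1(u | w) is proportional to exp(alpha * log L0(w | u))
with np.errstate(divide="ignore"):
    S1 = normalize(np.exp(alpha * np.log(L0)).T)

# Pragmatic listener: L1(w | u) is proportional to P(w) * S1(u | w)
L1 = normalize(S1.T * prior)

for u, row in zip(utterances, L1):
    print(u, dict(zip(worlds, row.round(3))))
# L1 hears "some" and concentrates on "some-but-not-all": the exhaustive
# reading emerges because a speaker in the "all" world would have said "all".
```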

Representative Publications
[•] Ethan Wilcox and Benjamin Spector. 2019. The Role of Prior Beliefs in the Rational Speech Act Model of Pragmatics: Exhaustivity as a Case Study. Proceedings of CogSci 2019


Neural Networks as Models of Human Language Processing

Neural networks are everywhere and have achieved state-of-the-art performance on many NLP tasks. But to what extent do they serve as good models of human language processing? This work benchmarks neural network models by treating them like subjects in a psycholinguistic experiment. By deriving word-by-word predictions and comparing them to classic studies of human word-by-word reading times, we can uncover the ways in which models' learned representations drive behavior that is similar to (and different from) human behavior.
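
Concretely, the model-derived predictor is per-word surprisal, -log2 P(word | preceding context). The sketch below extracts token-by-token surprisal, assuming the Hugging Face transformers package and GPT-2; the garden-path sentence is an illustrative example, not an item from these studies.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# A classic garden-path sentence: surprisal should spike at "fell".
sentence = "The horse raced past the barn fell."
ids = tokenizer(sentence, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # (1, seq_len, vocab)
log_probs = torch.log_softmax(logits, dim=-1)

# Surprisal of token t is -log2 P(token_t | tokens_<t); the logits at
# position t-1 give the model's distribution over the token at position t.
for t in range(1, ids.size(1)):
    token = tokenizer.decode(ids[0, t].item())
    surprisal = -log_probs[0, t - 1, ids[0, t]].item() / math.log(2)
    print(f"{token!r}: {surprisal:.2f} bits")
```

In practice, token surprisals like these are aggregated to the word level and regressed against human measures such as self-paced reading or eye-tracking times.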


Representative Publications
[•] Ethan Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger Levy. 2020. On the Predictive Power of Neural Language Models for Human Real-Time Comprehension Behavior. CogSci 2020
[•] Richard Futrell, Ethan Wilcox, Takashi Morita, Miguel Ballesteros, and Roger Levy. 2019. Neural Language Models as Psycholinguistic Subjects: Representation of Syntactic State. NAACL-HLT 2019