Office: Room 107 (2016 Sheridan Road)
Language is an extraordinarily complex system, yet comprehending one’s native language feels intuitively simple and effortless. In the auditory domain, we routinely achieve full understanding of an interlocutor – and often even finish their sentences – despite noisy environments and sloppy pronunciation. When reading, we comprehend text without difficulty despite fixating each word for only about 200 ms. How do we manage such feats?
My research seeks to understand the remarkable efficiency of language comprehension, using the tools of probability theory and statistical decision theory as explanatory frameworks. My work suggests that we achieve communicative efficiency by utilizing rich, structured probabilistic knowledge of language: we leverage linguistic redundancy to fill in details absent from the perceptual signal, spend less time processing more frequent material, and make predictions about linguistic material not yet encountered.
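One standard way to formalize this kind of probabilistic knowledge is a word's surprisal, −log₂ P(word | context): words that are more predictable in context carry less information and are read faster. As a minimal sketch (the toy corpus and function name are hypothetical, and a bigram estimate stands in for the richer models used in practice):

```python
# Surprisal of a word given its context, estimated from toy bigram counts.
# This is an illustrative sketch, not an implementation from my research.
import math
from collections import defaultdict

def bigram_surprisal(corpus, context, word):
    """Surprisal (in bits) of `word` after `context`, from bigram counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    total = sum(counts[context].values())
    p = counts[context][word] / total  # relative frequency estimate
    return -math.log2(p)

corpus = "the dog ran and the dog slept and the cat slept".split()
# "dog" follows "the" in 2 of 3 cases, so P = 2/3 and surprisal = log2(3/2) bits
print(round(bigram_surprisal(corpus, "the", "dog"), 3))
```

On this toy corpus, the frequent continuation "dog" gets lower surprisal than the rarer "cat", mirroring the empirical link between predictability and processing time.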
In pursuing these questions, I use a diverse set of methodologies. I rely heavily on computational modeling, drawing on techniques from machine learning, computational linguistics, reinforcement learning, and information theory. I also carry out a wide range of empirical work, including both controlled experiments (especially eye tracking) and statistical analyses of large, naturalistic datasets.