I study the neural basis of language and speech processing. When humans listen to speech, the acoustic signal that enters the ears is a complex pattern of air pressure fluctuations. Yet, listeners intuitively and almost instantaneously experience meaning in these sounds. My research focuses on the transformations that happen in the brain to enable this.
The goal of my research is to understand and measure how the brain processes speech. I am particularly interested in how people comprehend speech in realistic settings, including continuous, meaningful speech and speech in noisy backgrounds. For this I primarily work with electrophysiological brain signals (MEG & EEG) and computational models. M/EEG allow us to measure brain activity with millisecond resolution, which is required for capturing brain responses to rapidly evolving speech signals. Computational models of speech recognition help us understand the transformations necessary for recognizing speech, and they also allow us to make quantitative predictions for brain activity.
This work lays the foundations for better understanding how speech perception is affected in different settings and populations. For example, how does speech processing change with age? How is it affected by a hearing impairment? And how can this guide us to better address the challenges faced by these populations?
I use Python to develop tools to make this research possible, and many of those tools are available in the open source libraries MNE-Python and Eelbrain. For an introduction to analyzing M/EEG responses in experiments with continuous designs, such as audiobook listening or movie watching, see our recent eLife paper.
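A common approach in this kind of continuous-speech analysis is to estimate a temporal response function (TRF) that maps a stimulus feature, such as the acoustic envelope, to the recorded brain signal. As a rough illustration of the idea (not the actual MNE-Python or Eelbrain API), the sketch below simulates an envelope-driven response and recovers the TRF with ridge regression over time-lagged copies of the stimulus; all names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stimulus envelope and a brain response that is the
# envelope convolved with a "true" temporal response function (TRF),
# plus measurement noise. All values here are illustrative.
n_times = 5000
envelope = rng.random(n_times)
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0, -0.3, -0.1, 0.0])
response = np.convolve(envelope, true_trf)[:n_times]
response += 0.1 * rng.standard_normal(n_times)

# Build a lagged design matrix: one column per time lag of the stimulus.
n_lags = len(true_trf)
X = np.column_stack(
    [np.concatenate([np.zeros(lag), envelope[:n_times - lag]])
     for lag in range(n_lags)]
)

# Ridge regression estimate of the TRF.
lam = 1.0
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ response)
```

With enough data and modest noise, `trf_hat` closely approximates `true_trf`; toolboxes like Eelbrain and MNE-Python implement more robust versions of this estimation for real multi-sensor M/EEG data.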
PhD in Psychology, Cognition and Perception, 2016
New York University
Licentiate (Master of Science) in Neuropsychology, 2010
University of Zurich
Prediction is thought to play a central role in efficient speech processing. We used MEG responses to continuous speech to distinguish between competing theories of how such predictions are implemented neurally. We found evidence for multiple predictive models that are engaged in parallel and draw on different amounts of context.
Short review of brain responses to continuous speech (“speech tracking”), with a focus on MEG/EEG. Part of a themed issue on the Physiology of Mammalian Hearing.
We isolated brain responses related to the transformation from acoustic to linguistic representations of continuous speech by modeling word recognition with information theory. We then demonstrated that, in the presence of multiple talkers, only attended speech is processed lexically. This is evidence against the hypothesis that words in background speech are processed preconsciously.
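To give a flavor of what "modeling word recognition with information theory" can mean, the sketch below computes two standard quantities over a toy cohort model: the surprisal of each incoming phoneme and the entropy of the remaining word candidates. The mini-lexicon, its frequencies, and the function names are all hypothetical, not taken from the paper:

```python
import math

# Toy lexicon with made-up word frequencies (illustrative values only).
lexicon = {"cat": 40, "cap": 30, "can": 20, "dog": 10}

def cohort(prefix):
    """Words still consistent with the phonemes heard so far."""
    return {w: f for w, f in lexicon.items() if w.startswith(prefix)}

def phoneme_surprisal(prefix, next_phoneme):
    """Surprisal (in bits) of the next phoneme, given the current cohort."""
    before = cohort(prefix)
    after = cohort(prefix + next_phoneme)
    p = sum(after.values()) / sum(before.values())
    return -math.log2(p)

def cohort_entropy(prefix):
    """Uncertainty (in bits) about word identity, given the prefix."""
    words = cohort(prefix)
    total = sum(words.values())
    return -sum((f / total) * math.log2(f / total) for f in words.values())
```

For example, an initial /d/ is surprising here because most lexicon frequency mass begins with /c/, and entropy drops to zero once only one candidate word remains. Time courses of quantities like these can serve as predictor variables for continuous brain responses.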