I study the neural basis of language and speech processing. When humans listen to speech, the acoustic signal that enters the ears is a complex pattern of air pressure fluctuations. Yet listeners intuitively and almost instantaneously experience meaning in these sounds. My research focuses on the transformations in the brain that make this possible.
To study this, I mainly use MEG and EEG combined with reverse correlation. Reverse correlation allows us to model brain responses as a continuous transformation of the speech signal, rather than relying on pre-defined events in the stimuli. It also allows us to disentangle responses associated with different levels of processing, such as the formation of auditory and lexical representations.
I use Python to develop tools that make this research possible; many of these tools are available in the open-source libraries MNE-Python and Eelbrain.
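To make the approach concrete, here is a minimal NumPy sketch of reverse correlation as it is commonly applied to continuous speech: a temporal response function (TRF) is estimated by ridge regression of the brain signal on time-lagged copies of a stimulus feature. All data and parameter values below are synthetic and illustrative; this is not the implementation used in MNE-Python or Eelbrain.

```python
import numpy as np

# Synthetic example: estimate a temporal response function (TRF)
# mapping a continuous stimulus feature (e.g., the speech envelope)
# to a continuous brain response, via ridge regression on lagged
# copies of the stimulus. All values here are illustrative.

rng = np.random.default_rng(0)
sfreq = 100                       # sampling rate in Hz
n_times = 60 * sfreq              # one minute of data
envelope = rng.random(n_times)    # stand-in for the speech envelope

# True TRF used to simulate the brain response (unknown in practice)
lags = np.arange(int(0.4 * sfreq))            # 0-400 ms of lags
true_trf = np.exp(-lags / 10) * np.sin(lags / 3)
response = np.convolve(envelope, true_trf)[:n_times]
response += rng.normal(scale=0.5, size=n_times)   # sensor noise

# Lagged design matrix: column j holds the stimulus shifted by lag j,
# so row t contains envelope[t - lag].
X = np.zeros((n_times, len(lags)))
for j, lag in enumerate(lags):
    X[lag:, j] = envelope[: n_times - lag]

# Ridge regression: trf = (X'X + lambda * I)^-1 X'y
lam = 1e2
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ response)
```

In practice, estimators such as mne.decoding.ReceptiveField or Eelbrain's boosting implementation handle cross-validation, multiple predictors, and multichannel data; the sketch only shows the core regression.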
PhD in Psychology, Cognition and Perception, 2016
New York University
Licentiate (Master of Science) in Neuropsychology, 2010
University of Zurich
Prediction is thought to play a central role in efficient speech processing. We used MEG responses to continuous speech to distinguish between competing theories of how such predictions are implemented neurally. We found evidence for multiple predictive models that are engaged in parallel and draw on different amounts of context.
A short review of brain responses to continuous speech (“speech tracking”), with a focus on MEG/EEG. Part of a themed issue on the Physiology of Mammalian Hearing.
We isolated brain responses related to the transformation from acoustic to linguistic representations of continuous speech by modeling word recognition with information theory. We then demonstrated that, in the presence of multiple talkers, only attended speech is processed lexically. This is evidence against the hypothesis that words in background speech are processed preconsciously.
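To illustrate the information-theoretic modeling of word recognition, here is a hypothetical sketch of a cohort model; the toy lexicon, frequencies, and function names are my own for illustration, not taken from the paper. As phonemes arrive, the set of words consistent with the input shrinks: each incoming phoneme can be assigned a surprisal, and the remaining cohort an entropy.

```python
import numpy as np

# Toy cohort model: quantify incremental word recognition with
# information theory. The lexicon and frequencies are invented;
# real analyses use large pronunciation lexicons and corpus counts.
lexicon = {"cat": 60, "cap": 25, "can": 80, "dog": 100}

def cohort(prefix):
    """Words consistent with the phonemes heard so far, with
    probabilities renormalized over the remaining cohort."""
    members = {w: f for w, f in lexicon.items() if w.startswith(prefix)}
    total = sum(members.values())
    return {w: f / total for w, f in members.items()}

def phoneme_surprisal(prefix, phoneme):
    """-log2 P(next phoneme | cohort): how unexpected the phoneme is."""
    before = cohort(prefix)
    p = sum(pr for w, pr in before.items()
            if w[len(prefix):].startswith(phoneme))
    return -np.log2(p)

def cohort_entropy(prefix):
    """Uncertainty about word identity given the phonemes so far."""
    probs = np.array(list(cohort(prefix).values()))
    return -(probs * np.log2(probs)).sum()

# After hearing "ca", the cohort is {cat, cap, can}:
print(cohort_entropy("ca"))           # remaining uncertainty in bits
print(phoneme_surprisal("ca", "t"))   # surprisal of the final phoneme
```

Measures like these, computed at each phoneme, can then serve as continuous predictors in the reverse correlation framework described above.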