I study the neural basis of language and speech processing. When humans listen to speech, the acoustic signal that enters the ears is a complex pattern of air pressure fluctuations. Yet listeners intuitively and almost instantaneously experience meaning in these sounds. My research focuses on the transformations in the brain that make this possible.
To study this, I mainly use MEG and EEG with reverse correlation. Reverse correlation allows us to model brain responses as a continuous transformation of the speech signal, rather than relying on pre-defined events in the stimuli. It also allows us to disentangle responses related to different levels of processing, such as the formation of auditory and lexical representations.
I use Python to develop tools that make this research possible, and many of those tools are available in the open-source libraries MNE-Python and Eelbrain.
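As a rough illustration of the approach, here is a minimal sketch of a reverse-correlation analysis using MNE-Python's ReceptiveField estimator. The data are simulated and the variable names and parameters are illustrative, not taken from my actual pipelines; the idea is simply that the continuous brain response is modeled as a convolution of a speech feature with an unknown temporal response function (TRF), which regression can recover.

```python
import numpy as np
from mne.decoding import ReceptiveField
from sklearn.linear_model import Ridge

sfreq = 100.0  # sampling rate in Hz (illustrative)
n_times = 10_000

rng = np.random.default_rng(0)
envelope = rng.standard_normal(n_times)  # stand-in for a speech envelope

# Simulate one EEG channel as the envelope convolved with a decaying
# "true" TRF, plus measurement noise.
true_trf = np.exp(-np.arange(30) / 10.0)
eeg = np.convolve(envelope, true_trf)[:n_times]
eeg += rng.standard_normal(n_times)

# Estimate the TRF over lags from 0 to 400 ms with ridge regression.
rf = ReceptiveField(
    tmin=0.0, tmax=0.4, sfreq=sfreq,
    estimator=Ridge(alpha=1.0), scoring="corrcoef",
)
rf.fit(envelope[:, np.newaxis], eeg)  # X: (n_times, n_features), y: (n_times,)
print(rf.coef_.shape)  # (n_outputs, n_features, n_delays): the estimated TRF
```

With multiple stimulus features as columns of X (e.g., acoustic and lexical predictors), the same model yields one TRF per feature, which is what makes it possible to disentangle responses at different processing levels. Eelbrain provides a related estimator based on boosting.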
PhD in Psychology, Cognition and Perception, 2016
New York University
Licentiate (Master of Science) in Neuropsychology, 2010
University of Zurich
We show that predictive processing during speech perception draws on multiple levels of context in parallel. This has implications for psycholinguistic theories of the sequence of representations that are formed during speech recognition.
Short review of brain responses to continuous speech (“speech tracking”), with a focus on MEG/EEG. Part of a themed issue on the Physiology of Mammalian Hearing.
MEG responses to continuous speech show evidence that the acoustic signal is used to identify words within only ~100 ms. In a cocktail-party listening task, the corresponding responses are restricted to the attended speaker, suggesting that lexical processing of speech can be strictly determined by selective attention.