I study the neural basis of language, and speech processing in particular. When humans listen to speech, the acoustic signal that enters the ears is a complex pattern of air pressure fluctuations. Yet, listeners intuitively and almost instantaneously experience meaning in these sounds. My research focuses on the transformations that happen in the brain to enable this.
I mainly use MEG and EEG with reverse correlation. Reverse correlation allows us to analyse brain responses as a continuous transformation of the speech signal, rather than relying on pre-defined events in the stimuli.
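One common way to implement reverse correlation for continuous speech is to estimate a temporal response function (TRF) that maps a continuous stimulus feature, such as the speech envelope, to the ongoing MEG/EEG signal. The sketch below is purely illustrative, using simulated data and ridge-regularized linear regression; all variable names and parameter values are assumptions, not a description of my actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (assumed: 100 Hz sampling rate, one sensor, 60 s recording)
fs = 100
n = fs * 60
envelope = np.abs(rng.standard_normal(n))  # stand-in for a speech envelope

# Ground-truth TRF: a response peaking ~100 ms after the stimulus
lags = np.arange(int(0.4 * fs))  # model lags from 0 to 400 ms
true_trf = np.exp(-((lags / fs - 0.1) ** 2) / (2 * 0.02 ** 2))
meg = np.convolve(envelope, true_trf)[:n] + rng.standard_normal(n)

# Design matrix: each column is the envelope delayed by one lag
X = np.column_stack([np.roll(envelope, lag) for lag in lags])
X[: lags.max()] = 0  # discard samples that wrapped around

# Ridge-regularized reverse correlation (boosting is another common choice)
lam = 1e2
trf_est = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ meg)

# The estimated TRF should peak near the true 100 ms latency
peak_ms = lags[np.argmax(trf_est)] / fs * 1000
print(peak_ms)
```

Because the regression treats the whole recording as one continuous signal, no event markers are needed; the estimated TRF describes how the brain response unfolds over time after any point in the stimulus.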
PhD in Psychology, Cognition and Perception, 2016
New York University
Licentiate (Master of Science) in Neuropsychology, 2010
University of Zurich
Short review of brain responses to continuous speech (“speech tracking”), with a focus on MEG/EEG. Part of a themed issue on the Physiology of Mammalian Hearing.
MEG responses to continuous speech show that the acoustic signal is used to identify words within only ~100 ms. In a cocktail-party listening task, the corresponding responses are restricted to the attended speaker, suggesting that lexical processing of speech can be strictly gated by selective attention.