When humans listen to speech, the acoustic signal that enters the ears is a complex pattern of air pressure fluctuations. Yet listeners intuitively and almost instantaneously experience meaning in these sounds. My research focuses on the transformations in the brain that turn this acoustic signal into meaning.
The goal of my research is to understand and measure how the brain processes speech. I am particularly interested in how people comprehend speech in realistic settings, including continuous, meaningful speech and speech in noisy backgrounds. For this, I primarily work with electrophysiological brain signals (MEG & EEG) and computational models. M/EEG allow us to measure brain activity with millisecond resolution, which is required for capturing responses to rapidly evolving speech signals. Computational models of speech recognition help us understand the transformations necessary for recognizing speech, and they allow us to make quantitative predictions of brain activity.
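To make the idea of quantitative prediction concrete, here is a minimal, illustrative sketch (with simulated rather than real data) of the kind of encoding model this involves: a brain response is predicted from a speech feature, such as the acoustic envelope, through a lagged linear model. The variable names and the ridge-regression estimator are illustrative choices, not a description of any particular study.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100                      # sampling rate (Hz)
n = 60 * fs                   # 60 s of data
envelope = rng.random(n)      # stand-in for the speech envelope

# Simulate a brain response: envelope filtered by an unknown kernel, plus noise
true_kernel = np.hanning(30)
response = np.convolve(envelope, true_kernel, mode='full')[:n] + rng.normal(0, 1, n)

# Design matrix of time-lagged copies of the predictor (lags 0-300 ms)
lags = np.arange(30)
X = np.stack([np.roll(envelope, lag) for lag in lags], axis=1)
X[:30] = 0                    # remove wrap-around samples introduced by np.roll

# Ridge-regression estimate of the temporal response function (TRF)
lam = 1.0
trf = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ response)

# How well the model predicts the measured response quantifies
# how strongly the brain signal tracks the speech feature
predicted = X @ trf
r = np.corrcoef(predicted, response)[0, 1]
print(f"prediction accuracy r = {r:.2f}")
```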
This work lays the foundation for better understanding how speech perception is affected in different settings and populations. For example, how does speech processing change with age? How is it affected by hearing impairment? And how can this knowledge guide us to better address the challenges faced by these populations?
I develop Python tools to make this research possible, and many of them are available in the open-source libraries MNE-Python and Eelbrain. For an introduction to analyzing M/EEG responses in experiments with continuous designs, such as audiobook listening or movie watching, see our recent eLife paper.
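As a small sketch of what such a continuous-design analysis can look like, the snippet below estimates a temporal response function (TRF) with Eelbrain's boosting function; the simulated envelope and EEG signals are placeholders standing in for real recordings.

```python
import numpy as np
from eelbrain import NDVar, UTS, boosting

# Simulated stand-ins for real data: 60 s sampled at 100 Hz
time = UTS(0, 0.01, 6000)
envelope = NDVar(np.random.rand(6000), (time,), name='envelope')  # speech-envelope predictor
eeg = NDVar(np.random.randn(6000), (time,), name='eeg')           # one response channel

# Estimate a TRF for lags of 0-500 ms
res = boosting(eeg, envelope, 0, 0.500)
res.h  # the estimated TRF (an NDVar over lag time)
res.r  # correlation between predicted and measured response
```

In a real analysis, the envelope would be derived from the stimulus audio and the response would be a preprocessed M/EEG recording; see the eLife paper mentioned above for the complete workflow.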