Short review of brain responses to continuous speech ("speech tracking"), with a focus on MEG/EEG. Part of a themed issue on the [Physiology of Mammalian Hearing](https://www.sciencedirect.com/journal/current-opinion-in-physiology/vol/18/suppl/C).
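As context for the "speech tracking" framework this review covers, here is a minimal, illustrative sketch (not taken from any of the papers listed) of how envelope tracking is commonly quantified: a temporal response function (TRF) mapping the speech envelope to a single simulated MEG/EEG channel is estimated with time-lagged ridge regression. The sampling rate, lag window, and regularization strength are arbitrary assumptions for the toy example.

```python
import numpy as np

# Illustrative sketch of an envelope-tracking ("speech tracking") analysis:
# estimate a temporal response function (TRF) mapping the speech envelope to
# one simulated MEG/EEG channel via time-lagged ridge regression.
# All parameters (sampling rate, lag window, regularization) are assumptions.

rng = np.random.default_rng(0)
fs = 100                                   # sampling rate (Hz), assumed
n = fs * 60                                # one minute of data

# Stand-in for a speech envelope (in practice: e.g. Hilbert envelope of audio)
envelope = np.convolve(np.abs(rng.standard_normal(n)), np.hanning(20), mode="same")

# Ground-truth TRF peaking ~100 ms after the stimulus
lags = np.arange(int(0.4 * fs))            # lags from 0 to 400 ms
true_trf = np.exp(-((lags / fs - 0.1) ** 2) / (2 * 0.03 ** 2))

# Time-lagged design matrix: X[t, k] = envelope[t - lag_k]
X = np.zeros((n, len(lags)))
for k, lag in enumerate(lags):
    X[lag:, k] = envelope[: n - lag]

# Simulated brain response = envelope convolved with the TRF, plus noise
y = X @ true_trf + rng.standard_normal(n)

# Ridge-regression estimate of the TRF
lam = 1.0
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ y)

# "Tracking" is quantified as the correlation between predicted and observed response
r = np.corrcoef(X @ trf_hat, y)[0, 1]
print(f"Envelope tracking: r = {r:.2f}, TRF peak at {lags[trf_hat.argmax()] / fs * 1000:.0f} ms")
```

In real data the same logic applies, with the envelope derived from the stimulus audio and the regression fit per sensor or source; toolkits used in this literature implement exactly this kind of lagged-regressor model.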
Humans are remarkably skilled at listening to one speaker out of an acoustic mixture of several speech sources. Two speakers are easily segregated, even without binaural cues, but the neural mechanisms underlying this ability are not well understood. …
MEG responses to continuous speech show evidence that the acoustic signal is used for identifying words within only ~100 ms. In a cocktail-party listening task, the corresponding responses are restricted to the attended speaker, suggesting that …
Previous research has found that, paradoxically, while older adults have more difficulty comprehending speech in challenging circumstances than younger adults, their brain responses track the envelope of the acoustic signal more robustly. Here we …
Human experience often involves continuous sensory information that unfolds over time. This is particularly true of speech comprehension, where a continuous acoustic signal is processed over seconds or even minutes. We show that brain responses to …
A critical component of comprehending language in context is identifying the entities that individual linguistic expressions refer to. While previous research has shown that language comprehenders resolve reference quickly and incrementally, little …
Successful language comprehension critically depends on our ability to link linguistic expressions to the entities they refer to. Without reference resolution, newly encountered language cannot be related to previously acquired knowledge. The human …