Consider how effortlessly you distinguish between “ice cream” and “I scream” in natural speech, a disambiguation that requires your brain to integrate acoustic evidence with lexical and contextual knowledge within milliseconds. How the brain accomplishes this is a fundamental question in cognitive neuroscience. I’m a final-year Software Engineering student investigating these neural mechanisms through computational modeling and brain signal analysis.
I’m currently implementing and extending the Shortlist B model (Norris & McQueen, 2008) to understand how the human brain processes continuous speech in real-world environments. Shortlist B treats spoken-word recognition as Bayesian inference: as the speech signal unfolds, posterior probabilities over candidate words are updated by combining acoustic likelihoods with prior knowledge such as word frequency and context. My research focuses on building implementations of such Bayesian speech recognition models for neuroscience applications. These implementations turn the model’s inference into time-resolved cognitive measures (for example, moment-by-moment word probabilities) that can be correlated with EEG/MEG data using Temporal Response Function (TRF) analysis.
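To make the Bayesian step concrete, here is a minimal sketch of the kind of incremental update involved. This is a toy illustration under my own assumptions (a two-item candidate set and made-up likelihood values), not the published Shortlist B implementation, which operates over a full lexicon with phoneme likelihoods estimated from perceptual data.

```python
def update_posteriors(priors: dict[str, float],
                      likelihoods: dict[str, float]) -> dict[str, float]:
    """One incremental Bayesian update: posterior ∝ prior × likelihood.

    priors:      current probability of each candidate word/parse
    likelihoods: P(latest acoustic evidence | candidate), e.g. derived
                 from phoneme confusion probabilities
    """
    unnorm = {w: priors[w] * likelihoods.get(w, 1e-12) for w in priors}
    z = sum(unnorm.values())
    return {w: v / z for w, v in unnorm.items()}

# Toy example: frequency/context gives "ice cream" a higher prior, while a
# hypothetical durational cue slightly favours the "I scream" segmentation.
posteriors = {"ice cream": 0.7, "I scream": 0.3}          # assumed priors
for evidence in [{"ice cream": 0.5, "I scream": 0.6},     # made-up likelihoods
                 {"ice cream": 0.4, "I scream": 0.7}]:
    posteriors = update_posteriors(posteriors, evidence)
print(posteriors)  # evidence gradually shifts belief between the two parses
```

The trace of these posteriors over time is exactly the kind of quantity that can serve as a stimulus feature in the analysis step described below.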
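On the analysis side, a TRF is a regularised linear mapping from a stimulus feature (here, a model-derived measure such as word probability) to the recorded brain signal, estimated across a window of time lags. The sketch below shows the standard ridge-regression formulation in plain NumPy; the function name and default parameters are my own choices, and in practice one would typically use an established implementation such as MNE-Python’s ReceptiveField or the Eelbrain toolbox.

```python
import numpy as np

def estimate_trf(stimulus, eeg, fs, tmin=-0.1, tmax=0.4, lam=1e3):
    """Estimate a Temporal Response Function by ridge regression.

    stimulus: 1-D feature time series (e.g. model-derived word probability,
              zero except at word onsets), sampled at fs Hz
    eeg:      1-D array, one EEG/MEG channel at the same sampling rate
    Returns (lags in seconds, TRF weights), one weight per lag.
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    n = len(stimulus)
    # Design matrix: one column per lag, each a shifted copy of the stimulus.
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    # Closed-form ridge solution: w = (X'X + lam*I)^-1 X'y.
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w
```

Positive lags index brain activity following the stimulus feature; the ridge parameter `lam` trades goodness of fit against overfitting and would normally be chosen by cross-validation.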