What is it about?
Attention helps us pick out important speech from surrounding noise. It has traditionally been assumed that attention achieves this by tuning neural processing to relevant sounds, much as you tune a radio to a particular frequency. By combining EEG data (good temporal resolution) with fMRI data (good spatial resolution) in a novel way, we show that selective attention in natural scenes is more complex than assumed: it is shaped by expectations, prior experience, and listening conditions.
Why is it important?
Our findings highlight two key insights into the dynamics of auditory attention. First, attention rapidly facilitates perception by integrating and updating sensory expectations (i.e., it enhances predictive coding). Second, attention also operates on slower timescales, potentially reflecting plastic changes in the auditory regions of the brain. We suggest that both mechanisms act to optimise the brain's differentiation between relevant and irrelevant sounds. These insights not only enhance our understanding of human cognition but also have practical implications, such as improving AI transcription and sensory aids in challenging, noisy speech conditions.
Read the Original
This page is a summary of: Attention to audiovisual speech shapes neural processing through feedback-feedforward loops between different nodes of the speech network, PLoS Biology, March 2024, PLOS, DOI: 10.1371/journal.pbio.3002534.