What is it about?
The present study not only supports a hierarchical model of voice recognition, the idea that distinct voice-processing functions reside in distinct anatomical locations, but, critically, also characterizes the neural mechanisms underlying these processing stages. Our results provide evidence that both long-term acoustic and identity processing rely on mean-based neural coding, and that these long-term codes are maintained in voice-selective regions of the superior temporal sulcus (STS) and the inferior frontal cortex (IFC).
Why is it important?
We propose that the right middle STS processes incoming voice stimuli with respect to their distance from the representation of a supra-individual "mean voice" category, that is, the average across talkers of the listener's recent voice-acoustic history. This representation does not appear to be biased by voice-identity information; rather, it collapses across individual voices. The right IFC, in contrast, processes voice stimuli with respect to their distance from representations of "individual mean voices," each the average of the listener's recent memories of a specific individual's voice. On this view, the IFC maintains multiple "individual mean voice" representations, one for each remembered voice. This study presents the first evidence for such multilevel, long-term mean-based coding in voice-selective cortical regions.
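The two coding levels described above can be illustrated with a small numerical sketch. This is not the authors' analysis; it is a hypothetical toy model, with invented feature values, showing the core idea that a response scales with a stimulus's distance from a stored mean, either a single supra-individual mean (STS-style) or one mean per remembered talker (IFC-style):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented acoustic features (e.g., pitch in Hz, formant dispersion in Hz)
# for 50 recently heard voice samples, 10 samples each from 5 talkers.
recent_voices = rng.normal(loc=[200.0, 1000.0], scale=[30.0, 120.0], size=(50, 2))

# STS-style code: one supra-individual "mean voice", averaged across all talkers.
mean_voice = recent_voices.mean(axis=0)

# IFC-style code: one "individual mean voice" per remembered talker
# (here, consecutive blocks of 10 samples stand in for each talker).
individual_means = {
    f"talker_{i}": recent_voices[i * 10:(i + 1) * 10].mean(axis=0)
    for i in range(5)
}

def mean_based_response(stimulus, prototype):
    """Mean-based coding: response grows with distance from the stored mean."""
    return float(np.linalg.norm(np.asarray(stimulus) - prototype))

# A perfectly average voice evokes a minimal response (distance 0),
# while a distinctive voice, far from the mean, evokes a larger one.
typical_response = mean_based_response(mean_voice, mean_voice)
distinctive_response = mean_based_response([300.0, 1400.0], mean_voice)
print(typical_response, distinctive_response)
```

The same distance computation applies at both levels; what differs is the reference: the STS-style code compares against one pooled prototype, while the IFC-style code compares against the stored mean of each individual talker.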
Read the Original
This page is a summary of: Mean-based neural coding of voices, NeuroImage, October 2013, Elsevier, DOI: 10.1016/j.neuroimage.2013.05.002.