What is it about?

Action observation typically recruits visual areas and dorsal and ventral sectors of the parietal and premotor cortex. This network has been collectively termed the extended action observation network (eAON). Within this network, the elaboration of kinematic aspects of biological motion is crucial. Previous studies investigated these aspects by presenting subjects with point-light display (PLD) videos of whole-body movements, showing the recruitment of some of the eAON areas. However, studies focused on cortical activation during the observation of PLD grasping actions are lacking. In the present functional magnetic resonance imaging (fMRI) study, we assessed the activation of the eAON in healthy participants during the observation of both PLD and fully visible hand grasping actions, excluding confounding effects due to low-level visual features, motion, and context. Results showed that the observation of PLD grasping stimuli elicited a bilateral activation of the eAON. Region-of-interest analyses performed on visual and sensorimotor areas showed no significant differences in signal intensity between the PLD and fully visible conditions, indicating that both conditions evoked a similar motor resonance mechanism. Multivoxel pattern analysis (MVPA) revealed significant decoding of the PLD and fully visible grasping observation conditions in occipital, parietal, and premotor areas belonging to the eAON. These data show that the kinematic features conveyed by PLD stimuli are sufficient to elicit a complete action representation, suggesting that these features can be disentangled within the eAON from the features that usually characterize fully visible actions. PLD stimuli could therefore be useful for assessing which areas are recruited for action recognition, imitation, and motor learning when only kinematic cues are available.
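
The summary does not include the study's analysis code, so the following is only a minimal sketch of how ROI-based MVPA decoding of the two observation conditions (PLD vs. fully visible grasping) might be set up, assuming trial-wise activation patterns have already been extracted from a region of interest. The synthetic data, variable names, linear support vector classifier, and leave-one-run-out cross-validation scheme are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical trial-wise activation patterns from one eAON region of interest:
# 48 trials x 200 voxels, with run labels for leave-one-run-out cross-validation.
rng = np.random.default_rng(0)
X = rng.standard_normal((48, 200))     # trial x voxel patterns (synthetic)
y = np.repeat([0, 1], 24)              # 0 = PLD grasping, 1 = fully visible grasping
runs = np.tile(np.arange(6), 8)        # fMRI run index for each trial

# Linear classifier on standardized voxel patterns, cross-validated across runs.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

In a scheme like this, cross-validated accuracy above chance in a given region would indicate that PLD and fully visible trials evoke distinguishable activity patterns there.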


Why is it important?

The activation of the eAON evoked by PLD stimuli, particularly in parietal and premotor areas, demonstrates that motion features alone are sufficient to support goal encoding, without any confounding effect from the observation of contextual information. In addition, the use of machine learning methods allowed us to assess which areas of the eAON play a key role in disentangling PLD from fully visible stimuli and, together with data from the literature, whether they encode specific features of the observed grasping action. Building on the present data, it would be interesting to investigate whether the kinematic information provided by PLD stimuli is exploited during motor learning tasks to improve aspects of action execution such as precise hand/finger configurations.
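
The summary does not specify how the machine learning analysis identified discriminative areas. As a purely hypothetical illustration, one common approach is to test each region's cross-validated decoding accuracy against a permutation-based null distribution; the data, classifier, and cross-validation scheme below are assumptions for illustration, not the study's actual analysis.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, permutation_test_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for trial-wise patterns from a single eAON region.
rng = np.random.default_rng(1)
X = rng.standard_normal((48, 200))     # 48 trials x 200 voxels
y = np.repeat([0, 1], 24)              # 0 = PLD, 1 = fully visible
runs = np.tile(np.arange(6), 8)        # run labels

# Compare the observed decoding accuracy with accuracies obtained after
# shuffling condition labels, yielding a permutation p-value for the region.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy, _, p_value = permutation_test_score(
    clf, X, y, groups=runs, cv=LeaveOneGroupOut(), n_permutations=1000
)
print(f"Decoding accuracy = {accuracy:.2f}, permutation p = {p_value:.3f}")
```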

Read the Original

This page is a summary of: Decoding point‐light displays and fully visible hand grasping actions within the action observation network, Human Brain Mapping, May 2022, Wiley, DOI: 10.1002/hbm.25954.
