What is it about?
This paper introduces a method for recognizing human actions in depth videos. We first generate Motion History Images (MHIs) and Static History Images (SHIs) for an action video using the 3D Motion Trail Model (3DMTM). We then extract Gradient Local Auto-Correlations (GLAC) features from both the MHIs and the SHIs to characterize the video. Next, we concatenate the MHI-based GLAC features with the SHI-based GLAC features to obtain a single action representation vector. Finally, the feature vectors of all action samples are passed to the l2-regularized Collaborative Representation Classifier (l2-CRC) to recognize multiple human actions effectively. Experimental evaluations on three action datasets, MSR-Action3D, DHA and UTD-MHAD, show that the proposed recognition system considerably outperforms state-of-the-art approaches. In addition, a computational-efficiency test indicates that the system is suitable for real-time use.
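The final classification step can be illustrated with a minimal sketch of a generic l2-regularized collaborative representation classifier: code a test feature vector over the whole training dictionary in closed form, then assign the class whose training columns give the smallest reconstruction residual. This is a standard formulation of l2-CRC, not the paper's exact implementation; the regularization weight `lam` and the toy dimensions below are illustrative assumptions.

```python
import numpy as np

def l2_crc(X, labels, y, lam=1e-3):
    """Generic l2-CRC sketch (not the paper's tuned setup).

    X: (d, n) matrix whose columns are training feature vectors
       (e.g. concatenated MHI/SHI GLAC descriptors).
    labels: length-n array of class labels for the columns of X.
    y: (d,) test feature vector.
    lam: l2 regularization weight (assumed value for illustration).
    """
    n = X.shape[1]
    # Closed-form collaborative coding: alpha = (X^T X + lam*I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    # Decide by the smallest class-specific reconstruction residual
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - X[:, labels == c] @ alpha[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

Because the coding step has a closed-form solution, classification reduces to one linear solve plus a few norms per class, which is consistent with the real-time claim above.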
Read the Original
This page is a summary of: Human action recognition using MHI and SHI based GLAC features and Collaborative Representation Classifier, Journal of Intelligent & Fuzzy Systems, April 2019, IOS Press,
DOI: 10.3233/jifs-181136.