What is it about?
Wearable sensor-based Human Action Recognition (HAR) has achieved remarkable success recently. However, the accuracy of wearable sensor-based HAR still lags far behind that of systems based on visual modalities (i.e., RGB video, skeleton, and depth). In this study, we apply knowledge distillation (KD) to improve the accuracy of wearable sensor-based HAR systems.
Why is it important?
1. We propose a new multi-teacher approach that constructs multiple teacher models using both skeleton (teacher) and accelerometer (student) data modalities. In this way, the teacher models also learn the characteristics of the student modality, so they produce outputs that are easier for the student model to mimic.
2. We design an effective progressive learning (PL) scheme to close the performance gap between the teacher and student models.
3. To the best of our knowledge, this is the first study to perform cross-modal KD from the skeleton data domain to the wearable sensor data domain.
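To make the multi-teacher idea concrete, here is a minimal sketch of a standard knowledge-distillation objective extended to several teachers. This is an illustration, not the paper's exact loss: the function names, the temperature value, and the choice of simply averaging the per-teacher KL divergences are all assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_teacher_kd_loss(student_logits, teacher_logits_list, temperature=4.0):
    """Average KL(teacher || student) over the softened class
    distributions of several teachers -- an illustrative stand-in
    for a multi-teacher distillation term (hypothetical details)."""
    p_student = softmax(student_logits, temperature)
    losses = []
    for t_logits in teacher_logits_list:
        p_teacher = softmax(t_logits, temperature)
        kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
        losses.append(kl.mean())
    return float(np.mean(losses))
```

When the student already matches a teacher's softened distribution the KL term vanishes, so the loss rewards the accelerometer-based student for mimicking the skeleton-informed teachers.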
Read the Original
This page is a summary of: Progressive Cross-modal Knowledge Distillation for Human Action Recognition, October 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3503161.3548238.