What is it about?
Purpose
We present a preliminary solution to the problem of estimating human pose and trajectory from an aerial robot with a monocular camera in near real time.

Design/methodology/approach
The distinguishing feature of our solution is a dynamic classifier selection architecture. Each video frame is corrected for perspective using a projective transformation. A silhouette is then extracted and described as a Histogram of Oriented Gradients (HOG). The HOG is classified using a dynamic classifier. A class is defined as a pose-viewpoint pair, and a total of 64 classes are defined to represent a forward walking and turning gait sequence. The dynamic classifier consists of (i) a Support Vector Machine (SVM) classifier C64 that recognizes all 64 classes, and (ii) 64 SVM classifiers that recognize 4 classes each; these 4 classes are chosen based on the temporal relationship between them, dictated by the gait sequence.

Findings
Our solution provides three main advantages: (i) Classification is efficient due to dynamic selection (4-class vs. 64-class classification). (ii) Classification errors are confined to neighbors of the true viewpoint: a wrongly estimated viewpoint is at most an adjacent viewpoint of the true one, enabling fast recovery from incorrect estimations. (iii) The robust temporal relationship between poses is used to resolve the left-right ambiguities of human silhouettes.

Originality/value
Experiments conducted on both fronto-parallel videos and aerial videos confirm that our solution achieves accurate pose and trajectory estimation for these different kinds of videos. For example, on our "walking on an 8-shaped path" dataset (1652 frames), it achieves estimation accuracies of 85% for viewpoints and 98.14% for poses.
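The dynamic classifier selection idea can be illustrated with a minimal sketch. This is not the paper's implementation: the features are synthetic stand-ins for real HOG descriptors, and the 4-class candidate sets are modeled as a simple cyclic neighborhood (an assumption; the paper derives them from the gait sequence's temporal structure). The sketch shows the mechanism: a 64-class SVM initializes the estimate, after which the 4-class SVM keyed by the previous estimate handles each new frame.

```python
# Hedged sketch of dynamic classifier selection (not the authors' code).
import numpy as np
from sklearn.svm import SVC

N_CLASSES = 64  # pose-viewpoint pairs
DIM = 36        # stand-in for the HOG descriptor dimension (assumption)
rng = np.random.default_rng(0)

# Synthetic features: one well-separated cluster per class, standing in
# for HOG descriptors extracted from perspective-corrected silhouettes.
centers = rng.normal(size=(N_CLASSES, DIM))
X = np.vstack([c + 0.05 * rng.normal(size=(30, DIM)) for c in centers])
y = np.repeat(np.arange(N_CLASSES), 30)

# (i) The global 64-class SVM, used for initialization and recovery.
global_clf = SVC(kernel="linear").fit(X, y)

def successors(c):
    """Candidate classes reachable from class c. Here a cyclic window is
    assumed purely for illustration; the paper uses the gait sequence."""
    return [(c + d) % N_CLASSES for d in range(4)]

# (ii) One 4-class SVM per class, trained only on that class's candidates.
local_clfs = {}
for c in range(N_CLASSES):
    mask = np.isin(y, successors(c))
    local_clfs[c] = SVC(kernel="linear").fit(X[mask], y[mask])

def classify(hog, prev_class=None):
    """Dynamic selection: a cheap 4-class decision when the previous
    estimate is known, the full 64-class SVM otherwise."""
    if prev_class is None:
        return int(global_clf.predict(hog[None])[0])
    return int(local_clfs[prev_class].predict(hog[None])[0])

# Usage: initialize globally, then track with the local classifiers.
frame0 = centers[10] + 0.05 * rng.normal(size=DIM)
c0 = classify(frame0)                  # 64-class decision
frame1 = centers[11] + 0.05 * rng.normal(size=DIM)
c1 = classify(frame1, prev_class=c0)   # 4-class decision
```

Because each local classifier only contains the temporal neighbors of the previous estimate, a misclassification can land at most on an adjacent class, which is the error-confinement property described above.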
This page is a summary of: Human motion analysis from UAV video, International Journal of Intelligent Unmanned Systems, May 2018, Emerald. DOI: 10.1108/ijius-10-2017-0012.