3D human action analysis and recognition through GLAC descriptor on 2D motion and static posture images

Mohammad Farhad Bulbul, Saiful Islam, Hazrat Ali

arXiv.org Machine Learning 

Farhad Bulbul is with the Department of Mathematics, Jessore University of Science and Technology, Bangladesh (email: farhad@just.edu.bd). Saiful Islam is with the Department of Mathematics, Bangabandhu Sheikh Mujibur Rahman Science & Technology University, Bangladesh. Dr. Hazrat Ali is with the Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Pakistan (email: hazratali@cuiatd.edu.pk).

Abstract-- In this paper, we present an approach for the identification of actions within depth action videos. First, we process each video with the 3D Motion Trail Model (3DMTM) to obtain the motion history images (MHIs) and static history images (SHIs) corresponding to the action. We then characterize the action video by extracting Gradient Local Auto-Correlations (GLAC) features from the SHIs and the MHIs. The two sets of features, i.e., GLAC features from MHIs and GLAC features from SHIs, are concatenated to obtain a representation vector for the action. Finally, we classify the action samples with the l2-regularized Collaborative Representation Classifier (l2-CRC) to recognize different human actions effectively. We evaluate the proposed method on three action datasets: MSR-Action3D, DHA and UTD-MHAD. Experimental results show that the proposed method outperforms other approaches.

I. INTRODUCTION

Research in human action recognition (HAR) is considered one of the most interesting domains of computer vision. Action recognition systems are extensively applied in human security systems, medical science, social awareness, and entertainment [1], [2], [3], [4]. Indeed, to develop an applicable action recognition system, researchers still have to overcome the challenges posed by diversity in human body sizes, appearances, postures, motions, clothing, camera motions, viewing angles, and illumination. In the early stage, human action recognition systems were developed based on RGB data [5], [6], [7], [8].
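To make the final classification step of the pipeline concrete, the sketch below gives a minimal NumPy implementation of the standard l2-regularized Collaborative Representation Classifier decision rule (closed-form ridge coding followed by the minimum class-wise reconstruction residual). The function name, the regularization value lam, and the assumption that each sample is the concatenation of its GLAC-of-MHI and GLAC-of-SHI descriptors are illustrative choices for this sketch, not values or code reported in the paper.

import numpy as np

def l2_crc_predict(X_train, y_train, x_test, lam=0.001):
    """Classify one query vector with an l2-regularized Collaborative
    Representation Classifier (l2-CRC).

    X_train : (n_samples, n_features) array of training feature vectors,
              e.g. concatenated GLAC-of-MHI and GLAC-of-SHI descriptors
    y_train : (n_samples,) integer class labels (NumPy array)
    x_test  : (n_features,) query feature vector
    lam     : ridge regularization weight (assumed value)
    """
    D = X_train.T                      # dictionary: columns are training samples
    n = D.shape[1]
    # Closed-form ridge coding: alpha = (D^T D + lam*I)^{-1} D^T x
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ x_test)
    # Assign the class whose training samples reconstruct x_test with the
    # smallest residual, using only that class's coding coefficients.
    best_class, best_residual = None, np.inf
    for c in np.unique(y_train):
        idx = np.where(y_train == c)[0]
        residual = np.linalg.norm(x_test - D[:, idx] @ alpha[idx])
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

# Illustrative usage (feature names are hypothetical): each action sample is
# represented by concatenating its two GLAC descriptors before classification.
#   x = np.concatenate([glac_from_mhi, glac_from_shi])
#   label = l2_crc_predict(X_train, y_train, x)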
