http://dx.doi.org/10.4218/etrij.2018-0577

Human activity recognition with analysis of angles between skeletal joints using an RGB-depth sensor

Ince, Omer Faruk (Center for Intelligent and Interactive Robotics, Korea Institute of Science and Technology)
Ince, Ibrahim Furkan (Department of Electronics Engineering, Kyungsung University)
Yildirim, Mustafa Eren (Department of Electronics Engineering, Kyungsung University)
Park, Jang Sik (Department of Electronics Engineering, Kyungsung University)
Song, Jong Kwan (Department of Electronics Engineering, Kyungsung University)
Yoon, Byung Woo (Department of Electronics Engineering, Kyungsung University)
Publication Information
ETRI Journal / v.42, no.1, 2020, pp. 78-89
Abstract
Human activity recognition (HAR) has become an effective computer vision tool for video surveillance systems. In this paper, a novel biometric system that can detect human activities in 3D space is proposed. To implement HAR, joint angles obtained with an RGB-depth sensor are used as features. Because HAR operates in the time domain, the angle information is stored using a sliding-kernel method. The Haar wavelet transform (HWT) is applied to preserve the information in the features before the data dimension is reduced. Dimension reduction using an averaging algorithm is then applied to decrease the computational cost, providing faster performance while maintaining high accuracy. Before classification, a proposed thresholding method combined with the inverse HWT is applied to extract the final feature set. Finally, the k-nearest neighbor (k-NN) algorithm is used to recognize the activity from the given data. The method compares favorably with results obtained using other machine-learning algorithms.
Keywords
activity recognition; dimension reduction; Haar wavelet transform; k-nearest neighbor; RGB-D sensor
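
The pipeline described in the abstract (sliding-kernel windowing of joint angles, Haar wavelet transform, averaging-based dimension reduction, thresholding, inverse HWT, and k-NN classification) can be sketched in a few lines of Python. The sketch below is illustrative only, not the paper's implementation: the window length, the number of averaging blocks (n_blocks), the threshold value, and k are hypothetical placeholders, and applying the block averaging directly to the wavelet coefficients is one plausible reading of the dimension-reduction step.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def haar_dwt(x):
    # One-level Haar wavelet transform of a 1-D signal (even length).
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    # Inverse of haar_dwt: reconstructs the signal from its coefficients.
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

def extract_features(angle_window, n_blocks=8, thresh=0.1):
    # angle_window: (window_len, n_joints) joint angles from one
    # sliding-kernel window; window_len must be even and window_len/2
    # divisible by n_blocks (e.g., 64 and 8). All values hypothetical.
    feats = []
    for joint_series in angle_window.T:            # one series per angle
        approx, detail = haar_dwt(joint_series)    # HWT step
        # Averaging-based dimension reduction over fixed blocks.
        approx = approx.reshape(n_blocks, -1).mean(axis=1)
        detail = detail.reshape(n_blocks, -1).mean(axis=1)
        # Thresholding: zero out small, noise-like detail coefficients.
        detail[np.abs(detail) < thresh] = 0.0
        # Inverse HWT yields the final low-dimensional feature segment.
        feats.append(haar_idwt(approx, detail))
    return np.concatenate(feats)

# Hypothetical usage: windows of shape (n_samples, 64, n_joints) and
# integer activity labels y.
# X = np.stack([extract_features(w) for w in windows])
# model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

With hypothetical training windows and labels as in the commented usage, classification reduces to fitting scikit-learn's KNeighborsClassifier on the stacked feature vectors, mirroring the k-NN stage named in the abstract.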