References
- T. Singh and D. K. Vishwakarma, Video benchmarks of human action datasets: A review, Artif. Intell. Rev. 52 (2018), 1107-1154. https://doi.org/10.1007/s10462-018-9651-1
- C. Dhiman and D. K. Vishwakarma, A review of state-of-the-art techniques for abnormal human activity recognition, Eng. Appl. Artif. Intel. 77 (2018), 21-45. https://doi.org/10.1016/j.engappai.2018.08.014
- D. K. Vishwakarma, A two-fold transformation model for human action recognition using decisive pose, Cognitive Syst. Res. 61 (2020), 1-13. https://doi.org/10.1016/j.cogsys.2019.12.004
- J. K. Aggarwal and L. Xia, Human activity recognition from 3D data: A review, Pattern Recognit. Lett. 48 (2014), 70-80. https://doi.org/10.1016/j.patrec.2014.04.011
- C. Dhiman and D. K. Vishwakarma, View-invariant deep architecture for human action recognition using two-stream motion and shape temporal dynamics, IEEE Trans. Image Process. 29 (2020), 3835-3844. https://doi.org/10.1109/TIP.2020.2965299
- D. Vaufreydaz, W. Johal, and C. Combe, Starting engagement detection towards a companion robot using multimodal features, Rob. Auton. Syst. 75 (2016), 4-16. https://doi.org/10.1016/j.robot.2015.01.004
- D. K. Vishwakarma and C. Dhiman, A unified model for human activity recognition using spatial distribution of gradients and difference of Gaussian kernel, Visual Comput. 35 (2019), 1595-1613. https://doi.org/10.1007/s00371-018-1560-4
- J. Shotton et al., Real-time human pose recognition in parts from single depth images, in Proc. Conf. Comput. Vis. Pattern Recognit. (Colorado Springs, CO, USA), June 2011, pp. 1297-1304.
- S. Susan et al., New shape descriptor in the context of edge continuity, CAAI Trans. Intell. Technol. 4 (2019), no. 2, 101-109. https://doi.org/10.1049/trit.2019.0002
- T. Wiens, Engine speed reduction for hydraulic machinery using predictive algorithms, Int. J. Hydromechatronics 2 (2019), no. 1, 16-31. https://doi.org/10.1504/ijhm.2019.098949
- Y. Tingting et al., Three-stage network for age estimation, CAAI Trans. Intell. Technol. 4 (2019), no. 2, 122-126. https://doi.org/10.1049/trit.2019.0017
- C. Zhu and D. Miao, Influence of kernel clustering on an RBFN, CAAI Trans. Intell. Technol. 4 (2019), no. 4, 255-260. https://doi.org/10.1049/trit.2019.0036
- S. Osterland and J. Weber, Analytical analysis of single-stage pressure relief valves, Int. J. Hydromechatronics 2 (2019), no. 1, 32-53. https://doi.org/10.1504/ijhm.2019.098951
- M. Shokri and K. Tavakoli, A review on the artificial neural network approach to analysis and prediction of seismic damage in infrastructure, Int. J. Hydromechatronics 2 (2019), no. 4, 178-196. https://doi.org/10.1504/ijhm.2019.104386
- M. Mahmood, A. Jalal, and K. Kim, WHITE STAG model: Wise human interaction tracking and estimation (WHITE) using spatio-temporal and angular-geometric (STAG) descriptors, Multimed. Tools Appl. 79 (2020), 6919-6950. https://doi.org/10.1007/s11042-019-08527-8
- K. Kim, A. Jalal, and M. Mahmood, Vision-based human activity recognition system using depth silhouettes: A smart home system for monitoring the residents, J. Electr. Eng. Technol. 14 (2019), 2567-2573. https://doi.org/10.1007/s42835-019-00278-8
- A. Jalal and S. Kamal, Real-time life logging via a depth silhouette-based human activity recognition system for smart home services, in Proc. IEEE Int. Conf. Adv. Video Signal Based Surveillance (AVSS), (Seoul, South Korea), Aug. 2014, pp. 74-80.
- A. Nadeem, A. Jalal, and K. Kim, Human actions tracking and recognition based on body parts detection via artificial neural network, in Proc. Int. Conf. Adv. Comput. Sci. (Lahore, Pakistan), Feb. 2020.
- A. Jalal et al., Human activity recognition via recognized body parts of human depth silhouettes for residents monitoring services at smart home, Indoor Built Environ. 22 (2013), no. 1, 271-279. https://doi.org/10.1177/1420326X12469714
- M. A. Quaid and A. Jalal, Wearable sensors based human behavioral pattern recognition using statistical features and reweighted genetic algorithm, Multimed. Tools Appl. 79 (2020), 6061-6083. https://doi.org/10.1007/s11042-019-08463-7
- A. Jalal, S. Kamal, and D. Kim, Shape and motion features approach for activity tracking and recognition from kinect video camera, in Proc. Int. Conf. Adv. Inf. Netw. Appl. Workshops (Gwangju, South Korea), Mar. 2015.
- A. Jalal, S. Kamal, and D. Kim, A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments, Sensors 14 (2014), no. 7, 11735-11759. https://doi.org/10.3390/s140711735
- D. K. Vishwakarma and K. Singh, Human activity recognition based on spatial distribution of gradients at sublevels of average energy silhouette images, IEEE Trans. Cogn. Develop. Syst. 9 (2017), no. 4, 316-327. https://doi.org/10.1109/TCDS.2016.2577044
- W. Li, Z. Zhang, and Z. Liu, Action recognition based on a bag of 3D points, in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops (San Francisco, CA, USA), June 2010.
- F. Ofli et al., Berkeley MHAD: A comprehensive multimodal human action database, in Proc. IEEE Workshop Appl. Comput. Vis. (WACV), (Clearwater Beach, FL, USA), Jan. 2013.
- S. Gasparrini et al., Proposal and experimental evaluation of fall detection solution based on wearable and depth data fusion, in ICT Innovations 2015, vol. 399, Springer, Cham, Switzerland, 2016, pp. 99-108.
- A. Shahroudy et al., NTU RGB+D: A large scale dataset for 3D human activity analysis, in Proc. Conf. Comput. Vis. Pattern Recognit. (Las Vegas, NV, USA), June 2016, pp. 1010-1019.
- C. Chen, R. Jafari, and N. Kehtarnavaz, Action recognition from depth sequences using depth motion maps-based local binary patterns, in Proc. IEEE Winter Conf. Appl. Comput. Vis. (Waikoloa, HI, USA), Jan. 2015, pp. 1092-1099.
- H. Rahmani et al., Real time human action recognition using histograms of depth gradients and random decision forests, in Proc. IEEE Winter Conf. Appl. Comput. Vis. (Steamboat Springs, CO, USA), Mar. 2014.
- G. Chen et al., Action recognition using ensemble weighted multi-instance learning, in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), (Hong Kong, China), May 2014, pp. 4520-4525.
- C. Zhuang et al., Markov blanket based sequential data feature selection for human motion recognition, in Proc. IEEE Int. Conf. Robot. Biomim. (ROBIO), (Zhuhai, China), Dec. 2015, pp. 2059-2064.
- C. Chen, R. Jafari, and N. Kehtarnavaz, Improving human action recognition using fusion of depth camera and inertial sensors, IEEE Trans. Hum. Mach. Syst. 45 (2015), no. 1, 51-61. https://doi.org/10.1109/THMS.2014.2362520
- M. Li and H. Leung, Multiview skeletal interaction recognition using active joint interaction graph, IEEE Trans. Multimedia 18 (2016), no. 11, 2293-2302. https://doi.org/10.1109/TMM.2016.2614228
- T. Xu and Y. Zhou, Fall prediction based on biomechanics equilibrium using Kinect, Int. J. Distrib. Sens. Netw. 13 (2017), no. 4, 1-9.
- X. Yang and Y. Tian, EigenJoints-based action recognition using naive Bayes nearest neighbor, in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops (Providence, RI, USA), June 2012, pp. 14-19.
- J. Wang et al., Mining actionlet ensemble for action recognition with depth cameras, in Proc. Conf. Comput. Vis. Pattern Recognit. (Providence, RI, USA), June 2012, pp. 1290-1297.
- A. Jalal, S. Kamal, and D. Kim, Depth map-based human activity tracking and recognition using body joints features and Self-Organized Map, in Proc. Int. Conf. Comput., Commun. Netw. Technol. (ICCCNT), (Hefei, China), July 2014, article no. 33044.
- A. Jalal and Y. Kim, Dense depth maps-based human pose tracking and recognition in dynamic scenes using ridge data, in Proc. IEEE Int. Conf. Adv. Video Signal Based Surveillance (AVSS), (Seoul, South Korea), Aug. 2014, pp. 119-124.
- A. Ahmed, A. Jalal, and K. Kim, RGB-D images for object segmentation, localization and recognition in indoor scenes using feature descriptor and Hough voting, in Proc. Int. Bhurban Conf. Appl. Sci. Technol. (IBCAST), (Islamabad, Pakistan), Jan. 2020.
- A. Jalal, S. Kamal, and D. Kim, Depth silhouettes context: A new robust feature for human tracking and activity recognition based on embedded HMMs, in Proc. Int. Conf. Ubiquitous Robot. Ambient Intell. (URAI), (Goyang, South Korea), Oct. 2015.
- A. Jalal et al., Robust human activity recognition from depth video using spatiotemporal multi-fused features, Pattern Recognit. 61 (2017), 295-308. https://doi.org/10.1016/j.patcog.2016.08.003
- S. Badar, A. Jalal, and M. Batool, Wearable sensors for activity analysis using smo-based random forest over smart home and sports datasets, in Proc. Int. Conf. Adv. Comput. Sci. (ICACS), (Lahore, Pakistan), Feb. 2020.
- S. Kamal, A. Jalal, and D. Kim, Depth images-based human detection, tracking and activity recognition using spatiotemporal features and modified HMM, J. Electr. Eng. Technol. 11 (2016), no. 6, 1857-1862. https://doi.org/10.5370/JEET.2016.11.6.1857
- A. Farooq, A. Jalal, and S. Kamal, Dense RGB-D map-based human tracking and activity recognition using skin joints features and self-organizing map, KSII Trans. Internet Inf. Syst. 9 (2015), no. 5, 1856-1869. https://doi.org/10.3837/tiis.2015.05.017
- S. Kamal and A. Jalal, A hybrid feature extraction approach for human detection, tracking and activity recognition using depth sensors, Arab. J. Sci. Eng. 41 (2016), 1043-1051. https://doi.org/10.1007/s13369-015-1955-8
- A. Jalal, M. A. Khan, and K. Kim, A wrist worn acceleration based human motion analysis and classification for ambient smart home system, J. Electr. Eng. Technol. 14 (2019), 1733-1739. https://doi.org/10.1007/s42835-019-00187-w
- MATLAB, R2019b (Version 9.7), The MathWorks Inc., Natick, MA, USA, 2019.
- X. Cai et al., Effective active skeleton representation for low latency human action recognition, IEEE Trans. Multimedia 18 (2016), no. 2, 141-154. https://doi.org/10.1109/TMM.2015.2505089
- H. Xu, Y. Lee, and C. Lee, Activity recognition using Eigen-joints based on HMM, in Proc. Int. Conf. Ubiquitous Robot. Ambient Intell. (URAI), (Goyang, South Korea), Oct. 2015, pp. 300-305.
- A. Jalal, S. Kamal, and D. Kim, Human depth sensors-based activity recognition using spatiotemporal features and hidden Markov model for smart environments, J. Comput. Netw. Commun. 2016 (2016), 1-11.
- P. Foggia et al., Recognizing human actions by a bag of visual words, in Proc. IEEE Int. Conf. Syst., Man, Cybern. (Manchester, UK), Oct. 2013, pp. 2910-2915.
- Y. Guo et al., Multiview cauchy estimator feature embedding for depth and inertial sensor-based human action recognition, IEEE Trans. Syst., Man, Cybern.: Syst. 47 (2017), no. 4, 617-627. https://doi.org/10.1109/TSMC.2016.2617465
- A. Manzi, P. Dario, and F. Cavallo, A human activity recognition system based on dynamic clustering of skeleton data, Sensors 17 (2017), no. 5, 1-14. https://doi.org/10.3390/s17051100
- S. Hwang et al., Maximizing accuracy of fall detection and alert systems based on 3D convolutional neural network: Poster abstract, in Proc. IEEE/ACM Int. Conf. Internet-of-Things Des. Implementation (IoTDI), (Pittsburgh, PA, USA), Apr. 2017, pp. 343-344.
- J. Liu et al., Spatio-temporal LSTM with trust gates for 3D human action recognition, in Computer Vision-ECCV 2016, vol. 9907, Springer, Cham, Switzerland, 2016.
- J. Liu et al., Skeleton-based human action recognition with global context-aware attention LSTM networks, IEEE Trans. Image Process. 27 (2018), no. 4, 1586-1599. https://doi.org/10.1109/TIP.2017.2785279
- Y. Tang et al., Deep progressive reinforcement learning for skeleton-based action recognition, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (Salt Lake City, UT, USA), June 2018, pp. 5323-5332.
- C. Li et al., Joint distance maps based action recognition with convolutional neural networks, IEEE Signal Process. Lett. 24 (2017), no. 5, 624-628. https://doi.org/10.1109/LSP.2017.2678539
- L. Shi et al., Two-stream adaptive graph convolutional networks for skeleton-based action recognition, in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (Long Beach, CA, USA), June 2019, pp. 12026-12035.
- W. Peng et al., Learning graph convolutional network for skeleton-based human action recognition by neural searching, Proc. AAAI Conf. Artif. Intell. 34 (2020), no. 3, 2669-2676. https://doi.org/10.1609/aaai.v34i03.5652