Auto-Detection Algorithm of Gait's Joints According to Gait's Type

Automatic Extraction Algorithm of a Pedestrian's Joint Points According to Pedestrian Type

  • Kwak, Nae-Joung (Dept. of Information & Communication Eng., Chungbuk National University) ;
  • Song, Teuk-Seob (Div. of Convergence Computer and Media, Mokwon University)
  • Received : 2017.01.04
  • Accepted : 2017.11.09
  • Published : 2018.03.31

Abstract

In this paper, we propose an algorithm that automatically detects the joint points of a walking person. The proposed method classifies the gait type as either a front gait or a flank (side) gait and then applies a joint-extraction algorithm suited to the classified type to the input images. First, the foreground is separated from the input image using the difference images of the hue and gray-scale channels between the input and background images, and the pedestrian object is extracted. Next, the gait type is classified as front or flank using the ratio of the face width to the torso width. According to the classified type, 10 joint points are detected for a front gait and 7 to 8 for a flank gait. The proposed method was applied to camera input, and the results show that it automatically extracts the joint points.
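
The pipeline described in the abstract (foreground extraction from hue and gray-scale difference images, followed by gait-type classification from the face-to-torso width ratio) could be sketched roughly as below. This is a minimal illustration assuming OpenCV 4 and NumPy; the function names, thresholds, and the decision direction of the ratio test are assumptions for illustration, not details taken from the paper.

```python
import cv2
import numpy as np


def extract_pedestrian(frame_bgr, background_bgr, hue_thresh=15, gray_thresh=25):
    # Foreground extraction: combine the hue and gray-scale difference
    # images between the input frame and a static background image.
    # The thresholds are illustrative placeholders, not values from the paper.
    frame_hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_hsv = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV)
    hue_diff = cv2.absdiff(frame_hsv[:, :, 0], back_hsv[:, :, 0])

    frame_gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    back_gray = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    gray_diff = cv2.absdiff(frame_gray, back_gray)

    # A pixel is treated as foreground if either channel differs noticeably.
    mask = ((hue_diff > hue_thresh) | (gray_diff > gray_thresh)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Keep the largest connected region as the pedestrian object.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None


def classify_gait_type(face_width, torso_width, ratio_thresh=0.5):
    # Gait-type classification from the face-to-torso width ratio.
    # Assumption: the shoulders make the torso look wider head-on, so the
    # ratio is smaller for a front gait than for a flank (side) gait.
    # The threshold and the decision direction are placeholders.
    ratio = face_width / float(torso_width)
    return "front" if ratio < ratio_thresh else "flank"
```

The per-type joint-detection step (10 joint points for a front gait, 7 to 8 for a flank gait) is omitted here, since the abstract does not describe how the individual joints are located.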

Keywords
