• Title/Summary/Keyword: human skeleton tracking


Feature Extraction Based on Hybrid Skeleton for Human-Robot Interaction (휴먼-로봇 인터액션을 위한 하이브리드 스켈레톤 특징점 추출)

  • Joo, Young-Hoon; So, Jea-Yun
    • Journal of Institute of Control, Robotics and Systems / v.14 no.2 / pp.178-183 / 2008
  • Human motion analysis has been studied as a new approach to human-robot interaction (HRI) because it involves key HRI techniques such as motion tracking and pose recognition. To analyze human motion, extracting features of the human body from sequential images plays an important role. After the silhouette of the human body has been found in sequential images captured by a CCD color camera, a skeleton model is frequently used to represent the motion. In this paper, we propose a hybrid-skeleton-based feature extraction method that detects human motion from the body silhouette. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
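As a rough illustration of the silhouette-to-skeleton step described above, the sketch below computes a morphological skeleton from a binary body mask with OpenCV. It is not the paper's hybrid-skeleton extraction; the function name and mask conventions are assumptions.

```python
# Minimal sketch: morphological skeleton of a binary human silhouette (Lantuejoul's method).
# Illustrative only; the paper's "hybrid skeleton" features are not reproduced here.
import cv2
import numpy as np

def silhouette_skeleton(silhouette: np.ndarray) -> np.ndarray:
    """silhouette: uint8 binary mask (0 = background, 255 = body)."""
    skeleton = np.zeros_like(silhouette)
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
    img = silhouette.copy()
    while cv2.countNonZero(img) > 0:
        eroded = cv2.erode(img, kernel)
        opened = cv2.dilate(eroded, kernel)                  # opening of img
        skeleton = cv2.bitwise_or(skeleton, cv2.subtract(img, opened))
        img = eroded
    return skeleton
```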

User classification and location tracking algorithm using deep learning (딥러닝을 이용한 사용자 구분 및 위치추적 알고리즘)

  • Park, Jung-tak; Lee, Sol; Park, Byung-Seo; Seo, Young-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.78-79 / 2022
  • In this paper, we propose a technique for classifying users and tracking each user's location through body-proportion analysis of the normalized skeletons of multiple users obtained with RGB-D cameras. To this end, each user's 3D skeleton is extracted from the 3D point cloud and the corresponding body-proportion information is stored. The stored body proportions are then compared with the body-proportion data computed for every frame, yielding a user classification and location tracking algorithm over the entire image.
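A minimal sketch of the body-proportion matching idea follows: scale-invariant limb-length ratios are computed from each skeleton and compared against stored user profiles. The joint names, chosen ratios, and distance threshold are illustrative assumptions, not the paper's exact features.

```python
# Sketch: identify a user by comparing per-frame limb-length ratios with stored profiles.
import numpy as np

def proportions(joints: dict) -> np.ndarray:
    """joints: name -> (x, y, z). Returns scale-invariant limb-length ratios."""
    def d(a, b):
        return np.linalg.norm(np.asarray(joints[a]) - np.asarray(joints[b]))
    torso = d("neck", "pelvis")
    return np.array([d("shoulder_l", "elbow_l") / torso,
                     d("elbow_l", "wrist_l") / torso,
                     d("hip_l", "knee_l") / torso,
                     d("knee_l", "ankle_l") / torso])

def identify(frame_joints: dict, profiles: dict, max_dist: float = 0.15):
    """profiles: user_id -> stored proportion vector; returns the closest user or None."""
    vec = proportions(frame_joints)
    user, dist = min(((uid, np.linalg.norm(vec - ref)) for uid, ref in profiles.items()),
                     key=lambda t: t[1])
    return user if dist < max_dist else None
```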


Human Gender and Motion Analysis with Ellipsoid and Logistic Regression Method

  • Ansari, Md Israfil; Shim, Jaechang
    • Journal of Multimedia Information System / v.3 no.2 / pp.9-12 / 2016
  • This paper is concerned with the effective and efficient identification of human gender and motion. Tracking this nonverbal behavior is useful for providing clues about the interaction of different types of people and their exact motion. The system can also be useful for security in various places, for monitoring patients in hospitals, and in many other applications. Here we describe a novel method of determining identity using machine learning with Microsoft Kinect; the method minimizes the fitting (overlapping) error of an ellipsoid-based skeleton.
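A minimal sketch of the logistic-regression stage is given below, classifying gender from skeleton-derived body measurements with scikit-learn; the feature set and toy data are placeholders, and the ellipsoid-fitting step is omitted.

```python
# Sketch: logistic regression over body measurements (e.g., shoulder width, hip width, height).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: one row of measurements per person; labels 0 = female, 1 = male.
X = np.array([[38.0, 36.0, 71.0], [42.5, 34.0, 76.0],
              [37.0, 37.5, 69.0], [44.0, 35.0, 78.0]])
y = np.array([0, 1, 0, 1])

clf = LogisticRegression().fit(X, y)
print(clf.predict([[41.0, 35.0, 75.0]]))          # predicted class
print(clf.predict_proba([[41.0, 35.0, 75.0]]))    # class probabilities
```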

Realtime Human Object Segmentation Using Image and Skeleton Characteristics (영상 특성과 스켈레톤 분석을 이용한 실시간 인간 객체 추출)

  • Kim, Minjoon; Lee, Zucheul; Kim, Wonha
    • Journal of Broadcast Engineering / v.21 no.5 / pp.782-791 / 2016
  • Segmenting objects from the background can serve object recognition, tracking, and many other applications. This paper proposes a segmentation method that refers to several initial frames and processes video in real time from a fixed camera. First, we present a probability model that separates object and background, and we improve the algorithm's performance by analyzing the color consistency and the camera's focus characteristics over the initial frames. We then refine the segmentation result by using human skeleton characteristics of the extracted objects. Finally, the proposed method is applicable to various mobile applications because its computational complexity is minimized for real-time video processing.
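The paper's probability model is not public, so the sketch below only illustrates the generic starting point it describes: building a background model over the initial frames of a fixed camera and producing per-frame foreground masks with OpenCV. The skeleton-based refinement is not reproduced, and the file name is a placeholder.

```python
# Sketch: foreground extraction with a fixed camera via background modeling (MOG2).
import cv2

cap = cv2.VideoCapture("fixed_camera.mp4")           # placeholder input
subtractor = cv2.createBackgroundSubtractorMOG2(history=120, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                    # 255 = foreground, 127 = shadow
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:                          # Esc to quit
        break
cap.release()
```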

Livestock Theft Detection System Using Skeleton Feature and Color Similarity (골격 특징 및 색상 유사도를 이용한 가축 도난 감지 시스템)

  • Kim, Jun Hyoung; Joo, Yung Hoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.4 / pp.586-594 / 2018
  • In this paper, we propose a livestock theft detection system based on moving object classification and tracking. First, we extract moving objects using a GMM (Gaussian Mixture Model) and an RGB background modeling method. Second, morphological operations remove shadows and noise, and moving objects are identified through labeling. Third, the detected objects are classified as human or livestock using skeletal features and color-similarity judgment. Fourth, the classified objects are tracked with CAMShift (Continuously Adaptive Mean Shift) and a Kalman filter, overlaps are judged, and a notification is generated when a risk is detected. Finally, several experiments demonstrate the feasibility and applicability of the proposed method.
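The abstract names a CAMShift/Kalman tracking stage explicitly; the sketch below shows only the Kalman part, a constant-velocity filter smoothing a labeled blob's centroid with OpenCV. The noise covariances are textbook defaults, and the classification and CAMShift steps are omitted.

```python
# Sketch: constant-velocity Kalman filter for a moving blob's centroid.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                       # state (x, y, vx, vy), measurement (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track(centroid_xy):
    """Feed one measured blob centroid per frame; returns the predicted position."""
    prediction = kf.predict()
    kf.correct(np.array(centroid_xy, np.float32).reshape(2, 1))
    return prediction[:2].ravel()
```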

Development of exercise posture training system using deep learning for human posture recognition (인체 자세 인식 딥러닝을 이용한 운동 자세 훈련 시스템 개발)

  • Jang, Jae-Ho; Jee, Jun-Hwan; Kim, Du-Hwan; Choi, Min-Gi; Yun, Tae-Jin
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.289-290 / 2020
  • In this paper, the open-source OpenPose skeleton tracking technology is combined with image processing and deep learning to recognize the human posture in specific exercise motions, judge the situation, and derive a recognition result for the motion. The input video is first passed to a deep learning recognition system; the recognition results are extracted, compared, and analyzed, and the name of the matching pre-registered exercise motion is then displayed on screen, which can be used to guide the user toward the correct movement. The technique can also be applied broadly, from action recognition to face recognition and hand gesture recognition.
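As a hedged illustration of how OpenPose keypoints might be matched against pre-registered exercise motions, the sketch below compares joint angles against stored targets within a tolerance; the keypoint layout, registered poses, and tolerance are assumptions, and OpenPose itself is not invoked here.

```python
# Sketch: label a tracked skeleton with the name of a registered exercise pose by joint angles.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify_pose(keypoints, registered, tol=15.0):
    """keypoints: name -> (x, y); registered: pose name -> {(a, b, c) joint triple: angle}."""
    for pose_name, targets in registered.items():
        if all(abs(joint_angle(keypoints[a], keypoints[b], keypoints[c]) - ang) < tol
               for (a, b, c), ang in targets.items()):
            return pose_name          # name shown on screen to guide the user
    return "unknown"
```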


An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi; Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.3136-3150 / 2015
  • Vision-based 3D tracking of the articulated human hand is one of the major issues in human-computer interaction and in understanding the control of robot hands. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea of the proposed method is to solve an optimization problem that minimizes the discrepancy in 3D shape between an actual hand observed by Kinect and a hypothesized 3D hand model. Since the 3D hand pose has 23 degrees of freedom, hand articulation tracking carries an excessive computational burden when minimizing this shape discrepancy. To address this, we first created a 3D hand model that represents the hand with 17 different parts. Second, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model and was then used for Haar-like feature-based classification rather than per-pixel classification. The classification results were used to estimate the joint positions of the hand skeleton. Experiments show that the proposed method improves hand part recognition and runs at 20-30 fps. The results confirm its practical use in classifying the hand area and in tracking and recovering the 3D hand pose in real time.
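The abstract describes a Random Forest trained on Haar-like features from synthetic depth images; the sketch below shows that stage in a simplified form, using depth-difference features at fixed pixel offsets. The offsets, sampling scheme, and part labels are assumptions, and sample points are taken to lie away from the image border.

```python
# Sketch: train a Random Forest on Haar-like depth-difference features for part labeling.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

OFFSETS = [((-6, 0), (6, 0)), ((0, -6), (0, 6)), ((-4, -4), (4, 4))]   # pixel offset pairs

def haar_features(depth, points):
    """Depth-difference features at each sample point (u, v) of a depth image."""
    feats = []
    for u, v in points:
        row = [float(depth[v + dv1, u + du1]) - float(depth[v + dv2, u + du2])
               for (du1, dv1), (du2, dv2) in OFFSETS]
        feats.append(row)
    return np.asarray(feats, dtype=np.float32)

def train_part_classifier(depth_images, samples):
    """samples: list of (image_index, (u, v), part_label) drawn from synthetic renders."""
    X = np.vstack([haar_features(depth_images[i], [pt]) for i, pt, _ in samples])
    y = np.array([label for _, _, label in samples])
    return RandomForestClassifier(n_estimators=50).fit(X, y)
```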

Motion Capture of the Human Body Using Multiple Depth Sensors

  • Kim, Yejin; Baek, Seongmin; Bae, Byung-Chull
    • ETRI Journal / v.39 no.2 / pp.181-190 / 2017
  • The movements of the human body are difficult to capture owing to the complexity of the three-dimensional skeleton model and occlusion problems. In this paper, we propose a motion capture system that tracks dynamic human motions in real time. Without using external markers, the proposed system adopts multiple depth sensors (Microsoft Kinect) to overcome the occlusion and body rotation problems. To combine the joint data retrieved from the multiple sensors, our calibration process samples a point cloud from depth images and unifies the coordinate systems in point clouds into a single coordinate system via the iterative closest point method. Using noisy skeletal data from sensors, a posture reconstruction method is introduced to estimate the optimal joint positions for consistent motion generation. Based on the high tracking accuracy of the proposed system, we demonstrate that our system is applicable to various motion-based training programs in dance and Taekwondo.
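A minimal sketch of the ICP-based calibration idea follows, using Open3D to align one sensor's point cloud to a reference sensor and reusing the resulting transform for its joint data; the file names and correspondence threshold are placeholders, and Open3D is an assumed substitute for the paper's own implementation.

```python
# Sketch: unify two Kinect coordinate systems via ICP, then map joints into the reference frame.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("kinect_b.ply")       # cloud to be aligned (placeholder file)
target = o3d.io.read_point_cloud("kinect_a.ply")       # reference sensor's cloud

result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.05, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
T = result.transformation                               # 4x4 rigid transform: B -> A

def to_reference(joint_xyz):
    """Map one joint position from Kinect B's frame into Kinect A's frame."""
    p = np.append(np.asarray(joint_xyz, dtype=float), 1.0)
    return (T @ p)[:3]
```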

Interactive lens through smartphones for supporting level-of-detailed views in a public display

  • Kim, Minseok; Lee, Jae Yeol
    • Journal of Computational Design and Engineering / v.2 no.2 / pp.73-78 / 2015
  • In this paper, we propose a new approach that provides an interactive and collaborative lens among multiple users for level-of-detail views using smartphones on a public display. To provide this smartphone-based lens capability, the locations of the smartphones are detected and tracked using Kinect, which provides RGB and depth (RGB-D) data. In particular, human skeleton information is extracted from the Kinect 3D depth data to calculate each smartphone's location relative to the public display more efficiently and accurately, and to support head tracking for easy target selection and adaptive view generation. The suggested lens not only explores local regions of the shared display but also supports various activities such as LOD viewing and collaborative interaction. Implementation results show the advantages and effectiveness of the proposed approach.
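Purely as an illustration of using the tracked skeleton to relate a hand-held phone to the display, the sketch below casts a ray from the head joint through the hand joint and intersects it with an assumed display plane; the plane parameters and joint choice are assumptions, not the paper's calibration procedure.

```python
# Sketch: locate the lens target on the display by head-through-hand ray casting.
import numpy as np

# Display plane in Kinect coordinates: points p with PLANE_N . p = PLANE_D (placeholder values).
PLANE_N = np.array([0.0, 0.0, 1.0])
PLANE_D = 2.5                                   # display assumed 2.5 m in front of the sensor

def lens_center_on_display(head_xyz, hand_xyz):
    """Intersect the head->hand ray with the display plane; returns a 3D point on the plane."""
    head, hand = np.asarray(head_xyz, float), np.asarray(hand_xyz, float)
    direction = hand - head
    t = (PLANE_D - PLANE_N @ head) / (PLANE_N @ direction)
    return head + t * direction
```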

Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • Rim, Beanbonyka; Sung, Nak-Jun; Ma, Jun; Choi, Yoo-Joo; Hong, Min
    • Journal of Internet Computing and Services / v.21 no.3 / pp.113-121 / 2020
  • Human Pose Estimation (HPE), which localizes the human body joints, has high potential for high-level applications in computer vision. The main challenges of real-time HPE are occlusion, illumination change, and the diversity of pose appearance. Feeding a single RGB image into an HPE framework reduces computation cost because it only requires a depth-independent device such as a common camera, webcam, or phone camera. However, single-RGB HPE cannot overcome the above challenges because of the inherent characteristics of color and texture. On the other hand, depth information, which lets an HPE framework detect body parts in 3D coordinates, can be used to address these challenges. However, depth-based HPE requires a depth sensor, which imposes space constraints and is costly. In particular, its results are less reliable because pose initialization is required and frame tracking is less stable. Therefore, this paper proposes a new HPE method that is robust to self-occlusion. Many body parts can be occluded by other body parts, but this paper focuses only on head self-occlusion. The new method combines an RGB image-based HPE framework with a depth-based HPE framework. We evaluated the proposed method with the COCO Object Keypoint Similarity library. By taking advantage of both RGB-based and depth-based HPE, our RGB-D method achieved an mAP of 0.903 and an mAR of 0.938, showing that it outperforms both the RGB-based and the depth-based HPE.
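The evaluation uses the COCO Object Keypoint Similarity (OKS) metric; the sketch below computes OKS for a single pose using the standard COCO per-keypoint sigmas. The prediction and ground-truth arrays are placeholders supplied by the caller.

```python
# Sketch: Object Keypoint Similarity (OKS) for one predicted pose against ground truth.
import numpy as np

# Standard COCO per-keypoint sigmas (17 keypoints).
COCO_SIGMAS = np.array([.26, .25, .25, .35, .35, .79, .79, .72, .72, .62, .62,
                        1.07, 1.07, .87, .87, .89, .89]) / 10.0

def oks(pred, gt, visible, area):
    """pred, gt: (17, 2) keypoint arrays; visible: (17,) bool mask; area: GT object area."""
    k = 2 * COCO_SIGMAS                                   # per-keypoint falloff constants
    d2 = np.sum((np.asarray(pred) - np.asarray(gt)) ** 2, axis=1)
    e = d2 / (2 * area * k ** 2 + np.spacing(1))
    return float(np.sum(np.exp(-e) * visible) / max(np.sum(visible), 1))
```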