• Title/Summary/Keyword: Hand Model

Search Results: 3,106

Development of a Hand-Posture Recognition System Using 3D Hand Model (3차원 손 모델을 이용한 비전 기반 손 모양 인식기의 개발)

  • Jang, Hyo-Young;Bien, Zeung-Nam
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.219-221
    • /
    • 2007
  • The recent shift toward ubiquitous computing requires more natural human-computer interaction (HCI) interfaces that provide high information accessibility. Hand gesture, i.e., gestures performed by one or two hands, is emerging as a viable technology to complement or replace conventional HCI technology. This paper deals with hand-posture recognition. Hand-posture database construction is important in hand-posture recognition. The human hand is composed of 27 bones, and the movement of its joints is modeled with 23 degrees of freedom. Even for the same hand-posture, captured images may differ depending on the user's characteristics and the relative position between the hand and the cameras. To resolve the difficulty in defining hand-postures and to construct a database of manageable size, we present a method using a 3D hand model. Hand joint angles for each hand-posture, together with the corresponding silhouette images obtained by projecting the model into image planes from many viewpoints, are used to construct the database. The proposed method does not require additional equations to define the movement constraints of each joint. The method also makes it easy to obtain images of one hand-posture from many viewpoints and distances, so the database can be constructed more precisely and concretely. The validity of the method is evaluated by applying it to a hand-posture recognition system.

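The multi-viewpoint projection scheme this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual model: the orbit-and-project functions, the orthographic projection, and the three-joint toy finger are all assumptions for demonstration.

```python
import math

def rotate_y(point, angle_rad):
    """Rotate a 3D joint position about the y-axis (camera orbiting the hand)."""
    x, y, z = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x + s * z, y, -s * x + c * z)

def project_orthographic(point):
    """Drop the depth coordinate to obtain an image-plane position."""
    x, y, _ = point
    return (x, y)

def silhouette_views(joints, num_views=8):
    """Project the same hand posture from num_views viewpoints spaced
    evenly around the model, as the paper's database construction does."""
    views = []
    for k in range(num_views):
        angle = 2 * math.pi * k / num_views
        views.append([project_orthographic(rotate_y(j, angle)) for j in joints])
    return views

# Toy 'hand': three joint positions of one finger in one posture.
finger = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 1.8, 0.3)]
print(len(silhouette_views(finger)))  # 8 projected views of a single posture
```

Because each posture is defined once by its joint angles and then rendered from many viewpoints, no extra per-joint constraint equations are needed to populate the database.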

An Improved Approach for 3D Hand Pose Estimation Based on a Single Depth Image and Haar Random Forest

  • Kim, Wonggi;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.8
    • /
    • pp.3136-3150
    • /
    • 2015
  • Vision-based 3D tracking of an articulated human hand is one of the major issues in human-computer interaction applications and in understanding the control of robot hands. This paper presents an improved approach for tracking and recovering the 3D position and orientation of a human hand using the Kinect sensor. The basic idea of the proposed method is to solve an optimization problem that minimizes the discrepancy in 3D shape between an actual hand observed by Kinect and a hypothesized 3D hand model. Since each 3D hand pose has 23 degrees of freedom, tracking the hand articulation imposes an excessive computational burden when minimizing this shape discrepancy. For this, we first created a 3D hand model which represents the hand with 17 different parts. Secondly, a Random Forest classifier was trained on synthetic depth images generated by animating the developed 3D hand model, and was then used for Haar-like feature-based classification rather than per-pixel classification. Classification results were used to estimate the joint positions of the hand skeleton. Through experiments, we showed that the proposed method improved hand-part recognition rates and achieved a performance of 20-30 fps. The results confirm its practical use in classifying the hand area, and the method successfully tracked and recovered the 3D hand pose in real time.
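The Haar-like features the classifier above relies on can be computed cheaply with an integral image (summed-area table). The sketch below shows only that feature computation on a tiny hand-made depth patch; the trained Random Forest itself is omitted, and the patch values are illustrative.

```python
def integral_image(img):
    """Summed-area table so any rectangle sum is O(1)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of img values inside the rectangle (x, y, w, h)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like response: left half minus right half.
    On a depth image this contrasts near vs. far regions of the patch."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

depth = [[1, 1, 5, 5],
         [1, 1, 5, 5]]  # toy depth patch: near region (1) beside far region (5)
ii = integral_image(depth)
print(haar_two_rect(ii, 0, 0, 4, 2))  # (1+1+1+1) - (5+5+5+5) = -16
```

Evaluating many such rectangle responses per window, instead of classifying every pixel, is what gives the paper's approach its speed advantage.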

Hierarchical Hand Pose Model for Hand Expression Recognition (손 표현 인식을 위한 계층적 손 자세 모델)

  • Heo, Gyeongyong;Song, Bok Deuk;Kim, Ji-Hong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.10
    • /
    • pp.1323-1329
    • /
    • 2021
  • For hand expression recognition, hand pose recognition based on the static shape of the hand and hand gesture recognition based on dynamic hand movement are used together. In this paper, we propose a hierarchical hand pose model based on finger position and shape for hand expression recognition. For hand pose recognition, a finger model representing the finger state and a hand pose model using the finger states are hierarchically constructed on top of the open-source MediaPipe framework. The finger model is itself hierarchical, built from the bending of a single finger and the touching of two fingers. The proposed model can be used in various applications that transmit information through the hands, and its usefulness was verified by applying it to number recognition in sign language. Beyond sign language recognition, the proposed model is expected to find various applications in computer user interfaces.
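The two-level hierarchy above (finger states below, hand pose on top) can be sketched over MediaPipe-style 2D landmarks. Everything here is an illustrative assumption: the bent/straight heuristic, the touch threshold, and the pose labels are not the paper's actual rules.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def finger_state(wrist, pip, tip):
    """Lowest level of the hierarchy: one finger is 'bent' when its tip
    lies closer to the wrist than its middle (PIP) joint, else 'straight'."""
    return "bent" if dist(wrist, tip) < dist(wrist, pip) else "straight"

def fingers_touch(tip_a, tip_b, thresh=0.05):
    """Second finger-level cue from the abstract: two fingertips touching."""
    return dist(tip_a, tip_b) < thresh

def hand_pose(states):
    """Top level: map the five finger states to a coarse pose label."""
    if all(s == "straight" for s in states):
        return "open_hand"
    if all(s == "bent" for s in states):
        return "fist"
    return "other"

wrist = (0.0, 0.0)
print(finger_state(wrist, (0.0, 0.4), (0.0, 0.8)))  # straight
print(finger_state(wrist, (0.0, 0.4), (0.0, 0.2)))  # bent
```

Keeping the finger level separate from the pose level means new hand expressions (e.g. sign-language digits) only need a new mapping at the top, not new landmark processing.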

Revised Computational-GOMS Model for Drag Activity

  • Lee, Yong-Ho;Jeon, Young-Joo;Myung, Ro-Hae
    • Journal of the Ergonomics Society of Korea
    • /
    • v.30 no.2
    • /
    • pp.365-373
    • /
    • 2011
  • The existing GOMS model overestimates the performance time of mouse activities because it describes them as a serial sequence. In practice, however, parallel movements of eye and hand (eye-hand coordination) dominate mouse activities, and this eye-hand coordination is the main cause of the overestimation. In this study, therefore, a revised CGOMSL model was developed that incorporates eye-hand coordination into mouse activity, overcoming one limitation of the GOMS model: its lack of support for parallel processing. The revised CGOMSL model for the drag activity, taken here as an example mouse activity, begins visual search processing before the hand movement but ends the visual search together with the hand movement. The results show that the revised CGOMSL model predicted human performance more accurately than the existing GOMS model. In other words, the GOMS model's inability to represent parallel processing can be overcome with the revised CGOMSL model, so that performance time is predicted more accurately.
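The serial-vs-parallel difference the abstract describes reduces to a simple timing rule, sketched below. The operator durations are illustrative values, not figures from the paper.

```python
def serial_time(visual_search, hand_movement):
    """Existing GOMS: operators execute one after another,
    so predicted time is the sum of the two operators."""
    return visual_search + hand_movement

def parallel_time(visual_search, hand_movement):
    """Revised CGOMSL drag model: visual search begins before the hand
    movement and ends together with it, so the operators overlap and
    the slower one determines the predicted time."""
    return max(visual_search, hand_movement)

# Illustrative operator durations in seconds (not values from the paper).
vs, hm = 0.5, 1.2
overestimate = serial_time(vs, hm) - parallel_time(vs, hm)
print(overestimate)  # the portion of visual search hidden by the overlap
```

Whenever visual search fits entirely inside the hand movement, the serial model overestimates by the full visual-search duration, which matches the abstract's diagnosis.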

Feature Point Extraction of Hand Region Using Vision (비젼을 이용한 손 영역 특징 점 추출)

  • Jeong, Hyun-Suk;Joo, Young-Hoon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.58 no.10
    • /
    • pp.2041-2046
    • /
    • 2009
  • In this paper, we propose a method for extracting feature points of the hand region using vision. To do this, we first construct the HCbCr color model by combining the HSI and YCbCr color models. Second, we extract the hand region using the HCbCr color model and a fuzzy color filter. Third, we refine the hand region by applying a labeling algorithm to the extracted region. Fourth, after finding the center of gravity of the extracted hand region, we obtain initial feature points using Canny edge detection, chain codes, and the DP method. We then obtain the final feature points of the hand region by applying the convex hull method to these initial feature points. Finally, we demonstrate the effectiveness and feasibility of the proposed method through experiments.
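Two early steps of this pipeline, skin-color thresholding and the center of gravity of the resulting mask, can be sketched in a few lines. The Cb/Cr threshold ranges are common skin-tone bounds used here as an assumption; the paper's fuzzy color filter and HCbCr model are more elaborate.

```python
def skin_mask(cb_cr_img, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold the Cb/Cr channels into a binary hand mask.
    The ranges are generic skin-tone bounds, not the paper's fuzzy filter."""
    return [[1 if cb_range[0] <= cb <= cb_range[1] and
                  cr_range[0] <= cr <= cr_range[1] else 0
             for (cb, cr) in row] for row in cb_cr_img]

def center_of_gravity(mask):
    """Mean pixel coordinate of the mask: the hand-region center used
    as the reference point for the later feature-point steps."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

# Toy 2x2 image of (Cb, Cr) pairs; one pixel falls outside the skin range.
img = [[(100, 150), (200, 150)],
       [(100, 150), (100, 150)]]
mask = skin_mask(img)
print(center_of_gravity(mask))  # centroid of the three skin pixels
```

The centroid anchors the contour-based steps (Canny edges, chain codes, convex hull) that follow in the paper's pipeline.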

The Structural Model of Hand Hygiene Behavior for the Prevention of Healthcare-associated Infection in Hospital Nurses (병원간호사의 의료관련감염 예방을 위한 손위생에 관한 구조모형)

  • Jeong, Sun-Young;Kim, Ok-Soo
    • Korean Journal of Adult Nursing
    • /
    • v.24 no.2
    • /
    • pp.119-129
    • /
    • 2012
  • Purpose: The purpose of this study was to test a hand hygiene behavior model for hospital nurses based on the theory of planned behavior. Methods: Data were collected from 253 nurses at four university hospitals between December 2010 and January 2011, and analyzed using SAS (ver. 9.1). Fitness of the study model was assessed with SAS PROC CALIS. Results: The overall fit was $\chi^2$=57.81 (df=13, $p<.001$), GFI=.99, AGFI=.99, CFI=.95, NFI=.93. The predictor variables explained 11.0% of the variance in actual hand hygiene behavior and 53.5% of the variance in intention. The variable with a direct effect on hand hygiene behavior was intention; perceived behavioral control and attitude affected hand hygiene behavior indirectly. Control beliefs had a direct effect on perceived behavioral control and indirect effects on intention and behavior. Behavioral beliefs had a direct effect on attitude and indirect effects on intention and behavior. Conclusion: The study provides basic information for understanding nurses' hand hygiene behavior. Further testing of the model will indicate which variables can contribute to improved hand hygiene.

HSFE Network and Fusion Model based Dynamic Hand Gesture Recognition

  • Tai, Do Nhu;Na, In Seop;Kim, Soo Hyung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.9
    • /
    • pp.3924-3940
    • /
    • 2020
  • Dynamic hand gesture recognition (d-HGR) plays an important role in human-computer interaction (HCI) systems. With the growth of hand-pose estimation and 3D depth sensors, depth and hand-skeleton datasets have been proposed, prompting much research on depth-based and 3D hand-skeleton approaches. The problem remains challenging, however, due to low resolution, high complexity, and self-occlusion. In this paper, we propose a hand-shape feature extraction (HSFE) network to produce robust hand-shape features. We build LSTM-based models over the hand shape and the hand skeleton to exploit the temporal information in hand-shape and motion changes. Fusing the two models yields the best accuracy on the dynamic hand gesture (DHG) dataset.
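The fusion step between the two streams can be sketched as late fusion of per-class scores. The class scores and equal weighting below are illustrative assumptions; the paper's trained HSFE and skeleton LSTM networks are omitted.

```python
def late_fusion(shape_scores, skeleton_scores, w=0.5):
    """Combine the per-gesture scores of the hand-shape stream and the
    hand-skeleton stream by weighted averaging (w is a tunable weight)."""
    return [w * a + (1 - w) * b for a, b in zip(shape_scores, skeleton_scores)]

def predict(scores):
    """Index of the highest-scoring gesture class."""
    return max(range(len(scores)), key=scores.__getitem__)

shape = [0.2, 0.7, 0.1]     # illustrative per-gesture probabilities
skeleton = [0.1, 0.3, 0.6]  # the two streams disagree on the gesture
fused = late_fusion(shape, skeleton)
print(predict(shape), predict(skeleton), predict(fused))  # 1 2 1
```

When the streams disagree, the fused score lets the more confident stream dominate, which is the usual motivation for fusing complementary shape and motion models.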

Hand Gesture Recognition Using HMM (Hidden Markov Model) (HMM(Hidden Markov Model)을 이용한 핸드 제스처인식)

  • Ha, Jeong-Yo;Lee, Min-Ho;Choi, Hyung-Il
    • Journal of Digital Contents Society
    • /
    • v.10 no.2
    • /
    • pp.291-298
    • /
    • 2009
  • In this paper, we propose a vision-based real-time hand gesture recognition method. To extract skin color, we convert the RGB color space to the YCbCr color space and use the CbCr components for the final extraction. To find the center of the extracted hand region, we apply a practical center-point extraction algorithm. We use a Kalman filter to track the hand region and an HMM (Hidden Markov Model) algorithm, trained on six types of hand gesture images, to recognize gestures. We demonstrate the effectiveness of our algorithm through experiments.

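The Kalman-filter tracking step above can be sketched for one coordinate of the hand center (run one filter per axis). This is a generic constant-velocity Kalman filter, not the paper's exact formulation; the noise settings and measurements are illustrative.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of the tracked
    hand center. Noise settings q (process) and r (measurement) are
    illustrative and would be tuned for a real tracker."""
    def __init__(self, x0=0.0, q=0.01, r=1.0):
        self.x, self.v = x0, 0.0                   # position and velocity
        self.p = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
        self.q, self.r = q, r

    def step(self, z, dt=1.0):
        """Predict one frame ahead, then correct with measurement z."""
        (p00, p01), (p10, p11) = self.p
        x_pred = self.x + self.v * dt
        # Covariance prediction for the constant-velocity model.
        pp00 = p00 + dt * (p01 + p10) + dt * dt * p11 + self.q
        pp01 = p01 + dt * p11
        pp10 = p10 + dt * p11
        pp11 = p11 + self.q
        s = pp00 + self.r                          # innovation covariance
        k0, k1 = pp00 / s, pp10 / s                # Kalman gain
        y = z - x_pred                             # innovation
        self.x = x_pred + k0 * y
        self.v += k1 * y
        self.p = [[(1 - k0) * pp00, (1 - k0) * pp01],
                  [pp10 - k1 * pp00, pp11 - k1 * pp01]]
        return self.x

kf = Kalman1D(x0=10.0)
for z in [11.0, 12.1, 13.0, 14.2]:   # noisy hand-center x measurements
    est = kf.step(z)
print(round(est, 1))  # smoothed estimate trailing the last measurement
```

The filter's predicted position also gives the HMM stage a stable trajectory even when skin-color detection momentarily fails.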

3-D Hand Motion Recognition Using Data Glove (데이터 글로브를 이용한 3차원 손동작 인식)

  • Kim, Ji-Hwan;Park, Jin-Woo;Thang, Nguyen Duc;Kim, Tae-Seong
• Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집)
    • /
    • 2009.02a
    • /
    • pp.324-329
    • /
    • 2009
  • Hand motion modeling and recognition (HMR) is a fundamental technology in the field of proactive computing for designing human-computer interaction systems. In this paper, we present a 3D HMR system comprising a data glove based on 3-axis accelerometer sensors and a 3D hand model. The data glove transmits motion signals to a PC over wireless communication. We implemented the 3D hand model using kinematic chain theory. Finally, we used a rule-based algorithm with the 3D hand model to recognize the hand gestures scissors, rock, and paper.

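The kinematic-chain idea behind the 3D hand model can be sketched as planar forward kinematics for one finger: each joint angle is measured relative to the previous link, and positions accumulate along the chain. The link lengths and angles below are illustrative, not glove measurements.

```python
import math

def finger_chain(base, lengths, joint_angles):
    """Forward kinematics of one finger as a planar kinematic chain.
    Each joint rotates relative to the previous link; returns every
    joint position from the base to the fingertip. Angles in radians."""
    points = [base]
    x, y = base
    theta = 0.0
    for length, angle in zip(lengths, joint_angles):
        theta += angle                 # accumulate rotation along the chain
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points

links = [1.0, 0.7, 0.5]  # illustrative proximal/middle/distal link lengths

# A straight finger (all joint angles 0) extends along the x-axis.
print(finger_chain((0.0, 0.0), links, [0.0, 0.0, 0.0])[-1])

# A fully flexed finger (as in 'rock') curls the tip back toward the palm.
print(finger_chain((0.0, 0.0), links, [math.pi / 2] * 3)[-1])
```

A rule-based recognizer like the paper's can then label rock/paper/scissors by checking which fingertips end up extended versus curled.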

RGB Camera-based Real-time 21 DoF Hand Pose Tracking (RGB 카메라 기반 실시간 21 DoF 손 추적)

  • Choi, Junyeong;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.19 no.6
    • /
    • pp.942-956
    • /
    • 2014
  • This paper proposes a real-time hand pose tracking method using a monocular RGB camera. Hand tracking is highly ambiguous because the hand has many degrees of freedom. To reduce this ambiguity, the proposed method adopts a step-by-step estimation scheme: a palm pose estimation, a finger yaw motion estimation, and a finger pitch motion estimation, performed in consecutive order. Assuming the hand to be a plane, the proposed method uses a planar hand model, which facilitates hand model regeneration. The regeneration step modifies the hand model to fit the current user's hand, improving the robustness and accuracy of the tracking results. The proposed method works in real time and does not require GPU-based processing, so it can be applied to various platforms, including mobile devices such as Google Glass. The effectiveness and performance of the proposed method are verified through various experiments.
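The step-by-step scheme above can be sketched as a cascade of one-dimensional searches, each stage run with the earlier stages already fixed. The quadratic losses below are placeholder stand-ins for the paper's image-matching costs, and the angle values are illustrative.

```python
def grid_search(loss, lo, hi, steps):
    """Exhaustive 1-D search used for one stage of the cascade."""
    best, best_v = lo, loss(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        v = loss(x)
        if v < best_v:
            best, best_v = x, v
    return best

def staged_estimation(obs):
    """Palm pose first, then finger yaw, then finger pitch, in
    consecutive order as in the abstract. Each quadratic loss stands in
    for a real image-matching cost against the observation."""
    palm = grid_search(lambda a: (a - obs["palm"]) ** 2, -90.0, 90.0, 180)
    yaw = grid_search(lambda a: (a - obs["yaw"]) ** 2, -30.0, 30.0, 60)
    pitch = grid_search(lambda a: (a - obs["pitch"]) ** 2, 0.0, 90.0, 90)
    return palm, yaw, pitch

obs = {"palm": 20.0, "yaw": -10.0, "pitch": 45.0}  # illustrative angles (degrees)
print(staged_estimation(obs))  # (20.0, -10.0, 45.0)
```

Searching three stages of roughly a hundred candidates each costs a few hundred evaluations, versus about a million for the equivalent joint 3-D grid, which is why decomposing the estimation keeps the method real-time without a GPU.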