• Title/Summary/Keyword: Hand gesture recognition

Robust 3D Hand Tracking based on a Coupled Particle Filter (결합된 파티클 필터에 기반한 강인한 3차원 손 추적)

  • Ahn, Woo-Seok;Suk, Heung-Il;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.37 no.1 / pp.80-84 / 2010
  • Hand tracking is an essential technique for hand gesture recognition, which is an efficient modality in Human-Computer Interaction (HCI). Recently, many researchers have focused on hand tracking with 3D hand models, which yields more robust results than tracking with 2D hand models. In this paper, we propose a novel 3D hand tracking method based on a coupled particle filter. It achieves robust and fast tracking by estimating the global hand pose and the local finger motions separately and then using each estimate as a prior for the other. To further improve robustness, we apply a multi-cue method that integrates color-based area matching and edge-based distance matching. In our experiments, the proposed method tracked complex hand motions robustly against a cluttered background.
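
The paper's coupled formulation is not reproduced here, but the bootstrap particle filter it builds on can be sketched in a few lines. The 1-D state, random-walk motion model, and Gaussian observation model below are illustrative assumptions, not the authors' design:

```python
import numpy as np

def pf_step(particles, weights, observation, motion_std=0.05, obs_std=0.1, rng=None):
    """One predict/update/resample step of a bootstrap particle filter.
    `particles` holds hypothesized states (here a 1-D hand coordinate)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Predict: diffuse each hypothesis with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: re-weight by a Gaussian observation likelihood.
    weights = weights * np.exp(-0.5 * ((particles - observation) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

rng = np.random.default_rng(42)
particles = rng.uniform(-1.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
for obs in (0.20, 0.25, 0.30, 0.35):   # noisy hand positions over four frames
    particles, weights = pf_step(particles, weights, obs, rng=rng)
estimate = float(np.sum(particles * weights))   # posterior-mean hand position
```

In the coupled variant described by the abstract, one such filter would track the global hand pose and another the finger motions, each conditioning the other's prediction.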

Region-growing based Hand Segmentation Algorithm using Skin Color and Depth Information (피부색 및 깊이정보를 이용한 영역채움 기반 손 분리 기법)

  • Seo, Jonghoon;Chae, Seungho;Shim, Jinwook;Kim, Hayoung;Han, Tack-Don
    • Journal of Korea Multimedia Society / v.16 no.9 / pp.1031-1043 / 2013
  • Extracting the hand region from an image is the first step in recognizing hand postures and gestures, so a good segmentation method is important: it determines the overall performance of a hand recognition system. Conventional hand segmentation approaches are prone to failure under changing illumination or are limited in their ability to handle multiple people. In this paper, we propose a robust hand segmentation technique based on the fusion of skin-color data and depth information. The proposed algorithm uses skin-color data to localize an accurate seed for region growing against a complicated background. Based on the seed location, the algorithm adjusts each detected blob to fill hole regions. A region-growing algorithm is then applied to the adjusted blob boundary in the depth image, yielding a hand region that is robust to illumination effects. The resulting hand region is also used to adapt our skin model online, which further reduces the effect of changing illumination. Experiments comparing our results with conventional techniques validate the robustness of the proposed algorithm, and we show that our method works well even in backlit conditions.
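
The depth-based growing step can be sketched as a flood fill that accepts a neighbor only while its depth stays close to an already-accepted pixel; the toy depth map, seed, and tolerance below are illustrative, not the paper's parameters:

```python
from collections import deque
import numpy as np

def grow_region(depth, seed, tol=15):
    """Grow a region from `seed` over 4-connected pixels whose depth stays
    within `tol` of the neighbour that reached them."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(depth[ny, nx]) - int(depth[y, x])) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Toy depth map: a 3x3 "hand" at depth ~100 on a background at depth 200.
depth = np.full((6, 6), 200, dtype=np.uint8)
depth[1:4, 1:4] = 100
hand = grow_region(depth, seed=(2, 2))   # seed chosen from the skin-color blob
```

In the paper's pipeline, the seed would come from the skin-color localization stage rather than being hard-coded.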

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services / v.16 no.6 / pp.11-21 / 2015
  • As interest in Human-Computer Interaction (HCI) grows, research on HCI has been actively conducted, including work on Natural User Interface/Natural User eXperience (NUI/NUX) driven by the user's gestures and voice. NUI/NUX requires recognition algorithms such as gesture or voice recognition, but these algorithms are complex to implement and need a lot of training time, since they go through preprocessing, normalization, and feature-extraction steps. Recently, Microsoft released Kinect as an NUI/NUX development tool, attracting much attention, and many studies have used it. In a previous study, the authors implemented a highly intuitive hand-mouse interface using the user's physical features; however, it suffered from unnatural mouse movement and low accuracy of the mouse functions. In this study, we designed and implemented a hand-mouse interface built on a new concept called the 'virtual monitor', constructed from the user's physical features extracted through Kinect in real time. The virtual monitor is a virtual space controlled by the hand mouse, and coordinates on the virtual monitor can be mapped accurately onto coordinates on the real monitor. A hand-mouse interface based on the virtual monitor concept keeps the strong intuitiveness of the previous study while improving the accuracy of the mouse functions. We further increased accuracy by detecting the user's unintended actions with a concentration indicator derived from electroencephalogram (EEG) data. To evaluate intuitiveness and accuracy, we tested the interface on 50 people ranging in age from their 10s to their 50s. In the intuitiveness experiment, 84% of the subjects learned how to use it within one minute. In the accuracy experiment, the mouse functions achieved accuracies of 80.4% (drag), 80% (click), and 76.7% (double-click). Having verified the intuitiveness and accuracy of the proposed hand-mouse interface experimentally, we expect it to be a good example of hand-controlled interfaces in the future.
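
The core of the virtual-monitor idea is a mapping from a hand position inside a user-calibrated rectangle to real-monitor pixels. A minimal sketch, with made-up rectangle dimensions and a plain linear mapping (the paper's actual calibration is not public):

```python
def virtual_to_screen(hand_xy, virt_top_left, virt_bottom_right, screen=(1920, 1080)):
    """Map a tracked hand point inside the virtual rectangle to pixel coordinates."""
    (vx0, vy0), (vx1, vy1) = virt_top_left, virt_bottom_right
    u = (hand_xy[0] - vx0) / (vx1 - vx0)      # normalized position, 0..1
    v = (hand_xy[1] - vy0) / (vy1 - vy0)
    u = min(max(u, 0.0), 1.0)                 # clamp so the cursor stays on-screen
    v = min(max(v, 0.0), 1.0)
    return int(u * (screen[0] - 1)), int(v * (screen[1] - 1))

# A hand at the center of a 0.6 m x 0.4 m virtual monitor lands near screen center.
pos = virtual_to_screen((0.0, 0.0), (-0.3, -0.2), (0.3, 0.2))
```

In the study, the rectangle itself would be sized from the user's physical features measured by Kinect rather than fixed constants.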

Automatic Coarticulation Detection for Continuous Sign Language Recognition (연속된 수화 인식을 위한 자동화된 Coarticulation 검출)

  • Yang, Hee-Deok;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.36 no.1 / pp.82-91 / 2009
  • Sign language spotting is the task of detecting and recognizing signs in a signed utterance. Its difficulty lies in the fact that occurrences of signs vary in both motion and shape. Moreover, signs appear within a continuous gesture stream, interspersed with transitional movements between vocabulary signs and with non-sign patterns (which include out-of-vocabulary signs, epentheses, and other movements that do not correspond to signs). In this paper, a novel method for designing a threshold model within a conditional random field (CRF) model is proposed. The threshold model provides an adaptive threshold for distinguishing between vocabulary signs and non-sign patterns. A hand-appearance-based sign verification method, a short-sign detector, and a subsign reasoning method are included to further improve spotting accuracy. Experimental results show that the proposed method detects signs in continuous data with an 88% spotting rate and recognizes signs in isolated data with a 94% recognition rate, versus 74% and 90%, respectively, for CRFs without the threshold model, short-sign detector, subsign reasoning, and hand-appearance-based verification.
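
The decision rule behind a threshold model can be illustrated without the CRF machinery: a vocabulary label is reported only when its score beats a dedicated non-sign score that adapts per frame. This is a heavily simplified sketch of the idea, not the paper's model; the labels and scores are invented:

```python
def spot_signs(frame_scores, margin=0.0):
    """frame_scores: per-frame dicts mapping label -> score, where the
    'non-sign' entry plays the role of the adaptive threshold."""
    spotted = []
    for t, scores in enumerate(frame_scores):
        best = max(scores, key=scores.get)
        # A vocabulary sign is reported only when it beats the non-sign score.
        if best != "non-sign" and scores[best] >= scores["non-sign"] + margin:
            spotted.append((t, best))
    return spotted

stream = [
    {"HELLO": 0.2, "THANKS": 0.1, "non-sign": 0.6},   # transitional movement
    {"HELLO": 0.7, "THANKS": 0.1, "non-sign": 0.2},   # sign occurrence
    {"HELLO": 0.3, "THANKS": 0.3, "non-sign": 0.4},   # ambiguous, rejected
]
hits = spot_signs(stream)
```

In the actual CRF, the non-sign score is produced by a label whose feature weights are derived from the vocabulary labels, which is what makes the threshold adaptive.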

Finger Counting Algorithm in the Hand with Stuck Fingers (붙어 있는 손가락을 가진 손에서 손가락 개수 알고리즘)

  • Oh, Jeong-su
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.10 / pp.1892-1897 / 2017
  • This paper proposes a finger-counting algorithm for a hand whose fingers are stuck together. The algorithm exploits the fact that straight-line shadows are inevitably generated between fingers. It divides the hand region into a thumb region and a four-finger region for effective shadow detection and generates an edge image in each region. Projection curves are generated by applying line detection and a projection technique to each edge image, and the peaks of the curves are detected as candidate finger shadows. Peaks caused by actual finger shadows are then extracted and counted. In finger-counting experiments on hand images showing various shapes with stuck fingers, the counting success rate ranged from 83.3% to 100% depending on the number of fingers, and was 93.1% overall. The results also show that if the hand images are captured under controlled conditions, the failure cases can largely be eliminated.
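
The final counting step reduces to finding peaks in a projection curve; each surviving peak marks one between-finger shadow, so the finger count is peaks plus one. A minimal sketch with an invented toy curve and threshold:

```python
def count_shadow_peaks(curve, min_height):
    """Count local maxima in a projection curve that rise above min_height;
    each surviving peak is treated as one between-finger shadow line."""
    peaks = 0
    for i in range(1, len(curve) - 1):
        if curve[i] >= min_height and curve[i] > curve[i - 1] and curve[i] >= curve[i + 1]:
            peaks += 1
    return peaks

# Toy projection of shadow edges: four peaks -> four gaps -> five fingers.
curve = [0, 5, 1, 6, 0, 7, 1, 5, 0]
fingers = count_shadow_peaks(curve, min_height=3) + 1
```

The paper's actual curves come from line detection on the thumb and four-finger edge images; the peak-filtering criteria there are more involved than a single height threshold.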

Developing Interactive Game Contents using 3D Human Pose Recognition (3차원 인체 포즈 인식을 이용한 상호작용 게임 콘텐츠 개발)

  • Choi, Yoon-Ji;Park, Jae-Wan;Song, Dae-Hyeon;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.619-628 / 2011
  • Vision-based 3D human pose recognition is commonly used to convey human gestures in HCI (Human-Computer Interaction). Recognition based on a 2D pose model handles only simple 2D poses in particular environments. In contrast, a 3D pose model, which describes the 3D skeletal structure of the human body, can recognize more complex poses because it can use joint angles and the shape information of body parts. In this paper, we describe the development of interactive game content using a pose recognition interface based on 3D body joint information. The system is designed so that users can control the game with body motion, without any additional equipment. Poses are recognized by comparing the current input pose with predefined pose templates, each consisting of the 3D information of 14 body joints. We implemented the game content with our pose recognition system and confirmed its effectiveness. In the future, we will improve the system so that poses can be recognized robustly in various environments.
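
Template comparison of this kind can be sketched as a nearest-template search over joint positions; the two-joint templates and distance threshold below are hypothetical stand-ins for the paper's 14-joint templates:

```python
import math

# Hypothetical two-joint templates; the paper uses 14 joints per pose.
TEMPLATES = {
    "arms_up":   [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)],
    "arms_down": [(0.0, -1.0, 0.0), (1.0, -1.0, 0.0)],
}

def pose_distance(pose, template):
    """Mean Euclidean distance between corresponding 3-D joints."""
    return sum(math.dist(p, t) for p, t in zip(pose, template)) / len(pose)

def recognize(pose, templates=TEMPLATES, max_dist=0.25):
    """Return the closest template, or None if every template is too far."""
    best = min(templates, key=lambda name: pose_distance(pose, templates[name]))
    return best if pose_distance(pose, templates[best]) <= max_dist else None

label = recognize([(0.05, 0.95, 0.0), (1.0, 1.1, 0.0)])
```

A real system would first normalize the skeleton (e.g. for body size and orientation) before measuring joint distances, or compare joint angles as the abstract suggests.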

Segmentation of Pointed Objects for Service Robots (서비스 로봇을 위한 지시 물체 분할 방법)

  • Kim, Hyung-O;Kim, Soo-Hwan;Kim, Dong-Hwan;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.4 no.2 / pp.139-146 / 2009
  • This paper describes how a person indicates an unknown object to a robot with a pointing gesture. Using a stereo vision sensor, our proposed method consists of three stages: detecting the operator's face, estimating the pointing direction, and extracting the pointed object. The operator's face is detected using Haar-like features, the 3D pointing direction is estimated from the shoulder-to-hand line, and an unknown object is then segmented from the 3D point cloud within the estimated region of interest. Based on this method, we implemented an object registration system on our mobile robot and obtained reliable experimental results.
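
The shoulder-to-hand ray can be extended until it meets a supporting surface to localize the pointed region; the horizontal-plane assumption and the sample coordinates below are illustrative, not from the paper:

```python
import numpy as np

def pointing_target(shoulder, hand, plane_z=0.0):
    """Extend the shoulder->hand ray until it meets the horizontal plane z=plane_z."""
    shoulder = np.asarray(shoulder, dtype=float)
    hand = np.asarray(hand, dtype=float)
    d = hand - shoulder                  # pointing direction
    if d[2] == 0.0:
        return None                      # ray parallel to the plane
    t = (plane_z - shoulder[2]) / d[2]
    if t <= 0.0:
        return None                      # plane lies behind the pointing direction
    return shoulder + t * d

# Shoulder at 1.4 m, hand at 1.0 m, pointing down toward a tabletop at z = 0.
target = pointing_target(shoulder=(0.0, 0.0, 1.4), hand=(0.3, 0.4, 1.0))
```

The point cloud around `target` would then be clustered to segment the indicated object.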

Automatic Photography Shooting using Hand Gesture Recognition (손동작 인식 기능을 이용한 자동 사진 촬영)

  • Han, Min-Su;Kim, Kwang-Baek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.05a / pp.173-175 / 2012
  • In this paper, we propose a method that uses a smartphone camera to recognize the hand gestures commonly used when taking photographs and to trigger the shot automatically. The proposed method extracts skin regions from the image acquired by the smartphone camera based on skin-color values in the YCbCr color space, where skin-color features are well represented. A labeling technique is applied to the extracted skin regions, their contour information is analyzed, and skin objects are extracted. The hand region is then extracted from the skin objects using finger position information, the hand gesture is recognized, and the camera automatically takes a picture when the gesture is recognized. Experiments confirmed that the proposed method recognizes hand gestures quickly even in low-specification environments and can be used more efficiently than the timer function of existing smartphone cameras.
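
YCbCr skin segmentation of the kind described is usually a per-pixel range test on the chrominance channels; the Cb/Cr bounds below are widely cited textbook values, not the thresholds used in this paper:

```python
import numpy as np

def skin_mask(ycbcr):
    """ycbcr: (H, W, 3) uint8 image in YCbCr channel order.
    The Cb/Cr ranges are common textbook values, not the paper's."""
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

# Two toy pixels: one inside the skin chrominance range, one outside it.
img = np.array([[[120, 100, 150], [120, 50, 150]]], dtype=np.uint8)
mask = skin_mask(img)
```

Thresholding on Cb/Cr rather than RGB is what gives the method some tolerance to brightness changes, since luma sits in the separate Y channel.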

Dynamic Training Algorithm for Hand Gesture Recognition System (손동작 인식 시스템을 위한 동적 학습 알고리즘)

  • Shim Jae-Rok;Park Ho-Sik;Kim Tae-Woo;Ra Sang-Dong;Bae Cheol-Soo
    • Proceedings of the Korea Information Processing Society Conference / 2006.05a / pp.701-704 / 2006
  • This paper proposes a new algorithm for vision-based hand gesture recognition in a camera-projector system. The proposed method uses the Fourier transform to classify static hand gestures, and hand segmentation uses an improved background subtraction method. Most recognition methods are trained and tested on the same subjects and require a training stage before interaction, but gesture recognition is also needed in diverse situations for which no training has occurred. Therefore, in this paper, incomplete gestures detected during recognition are corrected and incorporated. As a result, gestures are recognized in a user-independent manner, allowing rapid online adaptation to new users.
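
Fourier-based static-gesture classification typically means computing Fourier descriptors of the hand contour; the sketch below shows the standard construction (this is the generic technique, not necessarily this paper's exact variant):

```python
import numpy as np

def fourier_descriptor(contour, n=4):
    """Shape signature from a closed boundary: drop the DC term for
    translation invariance, normalize by the first harmonic for scale
    invariance, and keep magnitudes so the start point does not matter."""
    z = contour[:, 0] + 1j * contour[:, 1]      # boundary as complex samples
    f = np.fft.fft(z)
    f[0] = 0.0
    mag = np.abs(f)
    return mag[1:n + 1] / mag[1]

# Eight points sampled around a square; a scaled + shifted copy matches.
square = np.array([[0, 0], [1, 0], [2, 0], [2, 1],
                   [2, 2], [1, 2], [0, 2], [0, 1]], dtype=float)
d1 = fourier_descriptor(square)
d2 = fourier_descriptor(square * 3.0 + 5.0)
```

Gesture classes would then be compared by the distance between descriptors, which is what makes the representation usable across users of different hand sizes.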

Hand Gesture Tracking and Recognition for Video Editing (비디오 편집을 위한 손동작 추적 및 인식)

  • Park Ho-Sik;Cha Seung-Joo;Jung Ha-Young;Ra Sang-Dong;Bae Cheol-Soo
    • Proceedings of the Korea Information Processing Society Conference / 2006.05a / pp.697-700 / 2006
  • This paper proposes a new gesture-based video editing method. The contents of electronic slides are automatically detected in a lecture video and synchronized with the video. Gestures over each synchronized slide heading are continuously tracked and recognized, and the region where a gesture occurs is identified by finding the transitions between the registered frame and the slide. By extracting slide information at the recognized gestures and registered points, the quality of the video can be improved, for example by partially magnifying the slide region or automatically editing the original video. Experiments on two videos yielded gesture recognition rates of 95.5% and 96.4%, respectively.