• Title/Summary/Keyword: Hand Gesture Recognition


A Robust Fingertip Extraction and Extended CAMSHIFT based Hand Gesture Recognition for Natural Human-like Human-Robot Interaction (강인한 손가락 끝 추출과 확장된 CAMSHIFT 알고리즘을 이용한 자연스러운 Human-Robot Interaction을 위한 손동작 인식)

  • Lee, Lae-Kyoung;An, Su-Yong;Oh, Se-Young
    • Journal of Institute of Control, Robotics and Systems / v.18 no.4 / pp.328-336 / 2012
  • In this paper, we propose robust fingertip extraction and extended Continuously Adaptive Mean Shift (CAMSHIFT) based hand gesture recognition for natural, human-like HRI (Human-Robot Interaction). First, for efficient and rapid hand detection, hand candidate regions are segmented by combining a robust $YC_bC_r$ skin color model with Haar-like-feature-based AdaBoost. Using the extracted hand candidate regions, we estimate the palm region and fingertip position from distance-transform-based voting and the geometrical features of the hand. From the hand orientation and palm center position, we find the optimal fingertip position and its orientation. Then, using extended CAMSHIFT, we reliably track the 2D hand gesture trajectory with the extracted fingertip. Finally, we apply conditional density propagation (CONDENSATION) to recognize pre-defined temporal motion trajectories. Experimental results show that the proposed algorithm not only rapidly extracts the hand region with an accurately extracted fingertip and its angle, but also robustly tracks the hand under different illumination, size, and rotation conditions. Using these results, we successfully recognize multiple hand gestures.
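
The $YC_bC_r$ skin-color segmentation step described above can be sketched as follows. This is a minimal illustration with assumed chrominance thresholds, not the paper's trained model:

```python
# Minimal sketch of YCbCr skin-color pixel classification.
# The RGB->YCbCr conversion uses the standard BT.601 formulas; the
# Cb/Cr bounds below are illustrative assumptions, not the paper's model.

def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to YCbCr (BT.601, full range)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Skin test on the chrominance plane only; luminance Y is ignored,
    which makes the rule largely illumination-invariant."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

print(is_skin(220, 170, 140))  # a typical skin tone passes -> True
print(is_skin(0, 0, 255))      # a saturated blue is rejected -> False
```

In a full pipeline this per-pixel test would produce a binary mask over which the Haar/AdaBoost detector and distance-transform voting operate.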

A Hierarchical Bayesian Network for Real-Time Continuous Hand Gesture Recognition (연속적인 손 제스처의 실시간 인식을 위한 계층적 베이지안 네트워크)

  • Huh, Sung-Ju;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.36 no.12 / pp.1028-1033 / 2009
  • This paper presents a real-time hand gesture recognition approach for controlling a computer. We define hand gestures as continuous hand postures and their movements for easy expression of various gestures, and propose a Two-layered Bayesian Network (TBN) to recognize them. The proposed method can compensate for an incorrectly recognized hand posture and its location using the preceding and following information. To verify the usefulness of the proposed method, we implemented a Virtual Mouse interface, a gesture-based counterpart of a physical mouse device. In experiments, the proposed method showed recognition rates of 94.8% and 88.1% for a simple and a cluttered background, respectively. This outperforms the previous HMM-based method, which achieved 92.4% and 83.3%, respectively, under the same conditions.
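
The idea of correcting a misrecognized posture from its preceding and following frames can be illustrated with a standard Viterbi decode over a toy two-posture HMM. This is a hedged sketch with assumed probabilities, not the paper's two-layered Bayesian network:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden posture sequence given noisy per-frame labels
    (log-domain Viterbi dynamic programming)."""
    delta = {s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}
    paths = {s: [s] for s in states}
    for o in obs[1:]:
        new_delta, new_paths = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: delta[p] + math.log(trans_p[p][s]))
            new_delta[s] = delta[best_prev] + math.log(trans_p[best_prev][s] * emit_p[s][o])
            new_paths[s] = paths[best_prev] + [s]
        delta, paths = new_delta, new_paths
    return paths[max(states, key=lambda s: delta[s])]

states = ["point", "fist"]
start = {"point": 0.5, "fist": 0.5}
# Postures tend to persist, so a lone outlier frame is implausible.
trans = {"point": {"point": 0.9, "fist": 0.1},
         "fist":  {"point": 0.1, "fist": 0.9}}
emit = {"point": {"point": 0.8, "fist": 0.2},
        "fist":  {"point": 0.2, "fist": 0.8}}

# One spurious 'fist' frame inside a run of 'point' gets corrected
# by the surrounding context.
noisy = ["point", "point", "fist", "point", "point"]
print(viterbi(noisy, states, start, trans, emit))
```

The strong self-transition probabilities play the role of the temporal context that the TBN exploits: a single contradictory observation is outvoted by its neighbors.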

CNN-Based Hand Gesture Recognition for Wearable Applications (웨어러블 응용을 위한 CNN 기반 손 제스처 인식)

  • Moon, Hyeon-Chul;Yang, Anna;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.23 no.2 / pp.246-252 / 2018
  • Hand gestures are attracting attention as a NUI (Natural User Interface) for wearable devices such as smart glasses. Recently, to support efficient media consumption in IoT (Internet of Things) and wearable environments, the standardization of IoMT (Internet of Media Things) is in progress in MPEG. IoMT assumes that hand gesture detection and recognition are performed on separate devices, and thus provides an interoperable interface between these modules. Meanwhile, deep-learning-based hand gesture recognition techniques have recently been actively studied to improve recognition performance. In this paper, we propose a CNN (Convolutional Neural Network) based hand gesture recognition method for applications such as media consumption on wearable devices, one of the use cases of IoMT. The proposed method detects the hand contour from stereo images acquired by smart glasses using depth and color information, constructs data sets to train the CNN, and then recognizes gestures from the input hand contour images. Experimental results show that the proposed method achieves an average hand gesture recognition rate of 95%.
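
The basic operation a CNN applies to a hand-contour image can be sketched with a single convolution layer in pure Python. This is a didactic illustration of the building block, not the paper's network architecture:

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most
    deep-learning frameworks) on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def relu(fmap):
    """Element-wise rectified linear activation."""
    return [[max(0, v) for v in row] for row in fmap]

# A vertical-edge kernel responds along the left boundary of a filled
# region -- the kind of low-level feature early CNN layers learn from
# binary hand-contour images.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(relu(conv2d(img, edge)))  # -> [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

Stacking such layers with pooling and a final classifier is what turns contour images into gesture class scores.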

A Structure and Framework for Sign Language Interaction

  • Kim, Soyoung;Pan, Younghwan
    • Journal of the Ergonomics Society of Korea / v.34 no.5 / pp.411-426 / 2015
  • Objective: The goal of this thesis is to design an interaction structure and framework for a sign language recognition system. Background: In sign language, meaningful individual gestures are combined to construct a sentence, so it is difficult for a system to interpret and recognize the meaning of a hand gesture within a sequence of continuous gestures. To interpret each individual gesture correctly, an interaction structure and framework are needed that can segment the individual gestures. Method: We analyzed 700 sign language words to structure sign language gesture interaction. First, we analyzed the transformational patterns of the hand gestures. Second, we analyzed the movement of those transformational patterns. Third, we analyzed the types of gestures other than hand gestures. Based on this, we designed a framework for sign language interaction. Results: We elicited 8 patterns of hand gesture based on whether the gesture changes from its starting point to its ending point. We then analyzed hand movement based on 3 elements: pattern of movement, direction, and whether the movement repeats. Moreover, we defined 11 movements of gestures other than hand gestures and classified 8 types of interaction. The framework designed on this basis applies to more than 700 individual sign language gestures and can isolate an individual gesture even within a sequence of continuous gestures. Conclusion: This study structured sign language interaction in 3 aspects: the transformational patterns of hand shape from starting point to ending point, hand movement, and gestures other than hand gestures. Based on this, we designed a framework that can recognize individual gestures and interpret their meaning more accurately when a meaningful individual gesture appears within a sequence of continuous gestures. Application: The interaction framework can be applied when developing a sign language recognition system. The structured gestures can be used for building sign language databases, developing automatic recognition systems, and studying action gestures in other areas.
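
The core segmentation problem described above, isolating individual gestures from a continuous stream, can be sketched as a simple delimiter-based splitter over per-frame pose labels. This is an assumed simplification (a neutral "rest" pose as delimiter), not the paper's full framework:

```python
def segment_gestures(frames, rest="rest", min_len=2):
    """Split a continuous stream of per-frame hand-pose labels into
    individual gesture segments, using the rest pose as a delimiter.
    Segments shorter than min_len frames are discarded as noise."""
    segments, current = [], []
    for pose in frames:
        if pose == rest:
            if len(current) >= min_len:
                segments.append(current)
            current = []
        else:
            current.append(pose)
    if len(current) >= min_len:  # flush a trailing segment
        segments.append(current)
    return segments

stream = ["rest", "A", "A", "B", "rest", "rest", "C", "C", "rest"]
print(segment_gestures(stream))  # -> [['A', 'A', 'B'], ['C', 'C']]
```

A real system would replace the single rest label with the paper's structured start/end conditions on hand shape and movement, but the stream-to-segments flow is the same.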

Hand Gesture Recognition Using Shape Similarity Based On Feature Points Of Contour (윤곽선 특징점 기반 형태 유사도를 이용한 손동작 인식)

  • Yi, Hong-Ryoul;Choi, Chang;Kim, Pan-Koo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.05a / pp.585-588 / 2008
  • This paper proposes hand gesture recognition using a shape similarity method. This requires two steps: acquisition of the hand area and similarity evaluation. The first step extracts the hand area using the YCbCr color space, then eliminates noise through filtering and histogram analysis. After obtaining the contour, we measure the similarity of hand gestures by applying TSR. Finally, we use shape similarity to recognize the hand gesture.
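
One simple, scale-invariant way to compare contour feature points, given in place of the paper's TSR measure (whose details are not spelled out in the abstract), is a normalized centroid-distance signature. All names and thresholds here are illustrative assumptions; both shapes must contribute the same number of feature points:

```python
import math

def centroid_distance_signature(points):
    """Distances from contour feature points to their centroid,
    divided by the maximum so the signature is scale-invariant."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    d = [math.hypot(x - cx, y - cy) for x, y in points]
    m = max(d)
    return [v / m for v in d]

def shape_similarity(a, b):
    """1.0 for identical signatures, decreasing as they diverge."""
    sa = centroid_distance_signature(a)
    sb = centroid_distance_signature(b)
    diff = sum(abs(x - y) for x, y in zip(sa, sb)) / len(sa)
    return 1.0 - diff

square = [(0, 0), (0, 2), (2, 2), (2, 0)]
big_square = [(0, 0), (0, 10), (10, 10), (10, 0)]
line = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(shape_similarity(square, big_square))  # scale-invariant -> 1.0
print(shape_similarity(square, line) < 1.0)  # different shape -> True
```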


Deep Learning Based 3D Gesture Recognition Using Spatio-Temporal Normalization (시 공간 정규화를 통한 딥 러닝 기반의 3D 제스처 인식)

  • Chae, Ji Hun;Gang, Su Myung;Kim, Hae Sung;Lee, Joon Jae
    • Journal of Korea Multimedia Society / v.21 no.5 / pp.626-637 / 2018
  • Humans exchange information not only through words, but also through body and hand gestures, which can be used to build effective interfaces in mobile, virtual reality, and augmented reality applications. Past 2D gesture recognition research suffered information loss caused by projecting 3D information into 2D. While recognizing gestures in 3D offers a wider recognition range than 2D space, the complexity of gesture recognition increases. In this paper, we propose a real-time gesture recognition deep learning model and application in 3D space. First, to recognize gestures in 3D space, data collection is performed using the Unity game engine to construct and acquire the data set. Second, input vectors are normalized for training the deep-learning-based 3D gesture recognition model. Third, the SELU (Scaled Exponential Linear Unit) function is applied as the neural network's activation function for faster learning and better recognition performance. The proposed system is expected to be applicable to various fields such as rehabilitation care, game applications, and virtual reality.
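
The two ingredients named in the abstract, spatio-temporal normalization of the input and the SELU activation, can be sketched as follows. The normalization recipe (center on the centroid, scale to unit radius, resample to a fixed length) is a common assumption, not necessarily the paper's exact procedure; the SELU constants are the standard published values:

```python
import math

def normalize_trajectory(points, target_len=8):
    """Spatial normalization: translate a 3D trajectory to its centroid
    and scale to a unit bounding radius. Temporal normalization:
    resample to a fixed number of samples by linear interpolation."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    centered = [(x - cx, y - cy, z - cz) for x, y, z in points]
    r = max(math.sqrt(x * x + y * y + z * z) for x, y, z in centered) or 1.0
    scaled = [(x / r, y / r, z / r) for x, y, z in centered]
    out = []
    for i in range(target_len):
        t = i * (n - 1) / (target_len - 1)
        j, f = int(t), t - int(t)
        a, b = scaled[j], scaled[min(j + 1, n - 1)]
        out.append(tuple(a[k] + f * (b[k] - a[k]) for k in range(3)))
    return out

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    """Self-normalizing SELU activation (standard constants)."""
    return scale * (x if x > 0 else alpha * (math.exp(x) - 1))
```

Fixing the sample count makes trajectories of any duration comparable as network inputs, and centering/scaling removes dependence on where and how large the gesture was performed.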

Vision-based hand gesture recognition system for object manipulation in virtual space (가상 공간에서의 객체 조작을 위한 비전 기반의 손동작 인식 시스템)

  • Park, Ho-Sik;Jung, Ha-Young;Ra, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the IEEK Conference / 2005.11a / pp.553-556 / 2005
  • We present a vision-based hand gesture recognition system for object manipulation in virtual space. Most conventional hand gesture recognition systems rely on simple hand detection methods such as background subtraction under assumed static observation conditions, which are not robust against camera motion, illumination changes, and so on. Therefore, we propose a statistical method to detect and recognize hand regions in images using geometrical structures. Our hand tracking system also employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. Experimental results show the effectiveness of our method.


A Dynamic Hand Gesture Recognition System Incorporating Orientation-based Linear Extrapolation Predictor and Velocity-assisted Longest Common Subsequence Algorithm

  • Yuan, Min;Yao, Heng;Qin, Chuan;Tian, Ying
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4491-4509 / 2017
  • The present paper proposes a novel dynamic system for hand gesture recognition. The approach comprises three main steps: detection, tracking, and recognition. First, the gesture contour captured by a 2D camera is detected by combining the three-frame difference method with a skin-color elliptic boundary model. Then, the trajectory of the hand gesture is extracted via a gesture-tracking algorithm based on an occlusion-direction-oriented linear extrapolation predictor, where the gesture coordinate in the next frame is predicted from the current occlusion direction. Finally, to overcome the interference of insignificant trajectory segments, the longest common subsequence (LCS) is employed with the aid of velocity information. Furthermore, to tackle the subgesture problem (some gestures may also be part of others), the most probable gesture category is identified by comparing the relative LCS length of each gesture, i.e., the proportion between the LCS length and the total length of each template, rather than the raw LCS length. The gesture dataset for the system performance test contains the digits 0 to 9, and experimental results demonstrate the robustness and effectiveness of the proposed approach.
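
The LCS matching and relative-length scoring can be sketched directly. Trajectories are encoded here as strings of direction codes (an assumed encoding), and the score normalizes by the longer of the two lengths, an illustrative variant of the paper's template-length normalization chosen so the example also breaks ties:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence, by the standard
    O(len(a) * len(b)) dynamic program."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def classify(trajectory, templates):
    """Pick the template with the highest *relative* LCS score, which
    keeps a short subgesture template from always losing to a longer
    template that contains it (the subgesture problem)."""
    return max(templates, key=lambda name: lcs_length(trajectory, templates[name])
               / max(len(trajectory), len(templates[name])))

# Hypothetical direction-coded templates: '1' is drawn down-down,
# '7' is drawn right then down-down, so '1' is a subgesture of '7'.
templates = {"one": "DD", "seven": "RDD"}
print(classify("DD", templates))   # -> one  (raw LCS would tie at 2)
print(classify("RDD", templates))  # -> seven
```

With raw LCS lengths, "DD" scores 2 against both templates; the relative score resolves the ambiguity in favor of the template the input actually matches in full.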

TextNAS Application to Multivariate Time Series Data and Hand Gesture Recognition (textNAS의 다변수 시계열 데이터로의 적용 및 손동작 인식)

  • Kim, Gi-duk;Kim, Mi-sook;Lee, Hack-man
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.518-520 / 2021
  • In this paper, we propose a hand gesture recognition method that modifies TextNAS, originally used for text classification, so that it can be applied to multivariate time series data. It can be applied to various fields such as behavior recognition, emotion recognition, and hand gesture recognition through multivariate time series classification. In addition, it automatically finds a deep learning model suitable for classification through training, thereby reducing the burden on users and achieving high classification accuracy. Applying the proposed method to the DHG-14/28 and SHREC'17 hand gesture recognition datasets yielded higher classification accuracy than existing models: 98.72% and 98.16% for the 14-class and 28-class settings of DHG-14/28, and 97.82% and 98.39% for the 14-class and 28-class settings of SHREC'17.


Hand Expression Recognition for Virtual Blackboard (가상 칠판을 위한 손 표현 인식)

  • Heo, Gyeongyong;Kim, Myungja;Song, Bok Deuk;Shin, Bumjoo
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1770-1776 / 2021
  • For hand expression recognition, hand pose recognition based on the static shape of the hand and hand gesture recognition based on hand movement are used together. In this paper, we propose a hand expression recognition method that recognizes symbols based on the trajectory of hand movements on a virtual blackboard. To recognize a sign drawn by hand on a virtual blackboard, the system needs not only a method for recognizing the sign from the hand movement, but also hand pose recognition to find the start and end of data input. In this paper, MediaPipe was used to recognize hand pose, and an LSTM (Long Short-Term Memory) network, a type of recurrent neural network, was used to recognize hand gestures from the time series data. To verify the effectiveness of the proposed method, it was applied to the recognition of numbers written on a virtual blackboard, and a recognition rate of about 94% was obtained.