• Title/Summary/Keyword: hand gesture recognition

Hand gesture recognition for player control

  • Shi, Lan Yan;Kim, Jin-Gyu;Yeom, Dong-Hae;Joo, Young-Hoon
    • Proceedings of the KIEE Conference / 2011.07a / pp.1908-1909 / 2011
  • Hand gesture recognition has been widely used in virtual reality and HCI (Human-Computer Interaction) systems, and it remains a challenging and interesting subject in the vision-based area. Existing approaches for vision-driven interactive user interfaces resort to technologies such as head tracking, face and facial expression recognition, eye tracking, and gesture recognition. The purpose of this paper is to combine a finite state machine (FSM) with a gesture recognition method in order to control Windows Media Player with commands such as play/pause, next, previous, and volume up/down.
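
The abstract does not detail how the FSM maps recognized gestures onto player commands, so the following is only a minimal sketch of one plausible arrangement; the gesture labels, state names, and the send_key helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: an FSM that turns recognized gesture labels into
# media-player commands. Gesture names, states, and send_key() are
# hypothetical placeholders, not the paper's actual design.

def send_key(command: str) -> None:
    """Stand-in for whatever mechanism delivers a command
    (e.g. a simulated key press) to Windows Media Player."""
    print(f"-> {command}")

class PlayerFSM:
    def __init__(self):
        self.state = "paused"
        # (current state, gesture) -> (next state, command)
        self.transitions = {
            ("paused",  "open_palm"):   ("playing", "play"),
            ("playing", "open_palm"):   ("paused",  "pause"),
            ("playing", "swipe_right"): ("playing", "next"),
            ("playing", "swipe_left"):  ("playing", "previous"),
            ("playing", "point_up"):    ("playing", "volume_up"),
            ("playing", "point_down"):  ("playing", "volume_down"),
        }

    def on_gesture(self, gesture: str) -> None:
        key = (self.state, gesture)
        if key in self.transitions:
            self.state, command = self.transitions[key]
            send_key(command)

fsm = PlayerFSM()
for g in ["open_palm", "swipe_right", "point_up", "open_palm"]:
    fsm.on_gesture(g)   # -> play, next, volume_up, pause
```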

Hand gesture based a pet robot control (손 제스처 기반의 애완용 로봇 제어)

  • Park, Se-Hyun;Kim, Tae-Ui;Kwon, Kyung-Su
    • Journal of Korea Society of Industrial Information Systems / v.13 no.4 / pp.145-154 / 2008
  • In this paper, we propose a pet robot control system that uses hand gesture recognition on image sequences acquired from a camera affixed to the pet robot. The proposed system consists of four steps: hand detection, feature extraction, gesture recognition, and robot control. The hand region is first detected from the input images using a skin color model in the HSI color space and connected component analysis. Next, hand shape and motion features are extracted from the image sequences, and the hand shape is used to classify meaningful gestures. The hand gesture is then recognized using HMMs (hidden Markov models), which take as input the symbol sequence obtained by quantizing the hand motion. Finally, the pet robot is controlled by the command corresponding to the recognized hand gesture. We defined four commands for controlling the pet robot: sit down, stand up, lie flat, and shake hands. Experiments show that a user is able to control the pet robot through the proposed system.
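
As a rough illustration of the first step (skin-color segmentation followed by connected-component analysis), the sketch below uses OpenCV; it thresholds in HSV rather than the HSI space used in the paper, and the threshold values are illustrative guesses rather than the authors' settings.

```python
# Sketch of skin-color hand detection plus connected-component analysis.
# HSV is used here as a stand-in for the paper's HSI model; the threshold
# values are illustrative, not the authors' parameters.
import cv2
import numpy as np

def detect_hand(frame_bgr: np.ndarray):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))   # loose skin-tone range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Keep the largest connected component as the hand candidate.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return None, None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    hand_mask = (labels == largest).astype(np.uint8) * 255
    return hand_mask, tuple(centroids[largest])   # binary mask and hand center (x, y)
```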

Three Dimensional Hand Gesture Taxonomy for Commands

  • Choi, Eun-Jung;Lee, Dong-Hun;Chung, Min-K.
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.483-492 / 2012
  • Objective: The aim of this study is to suggest a three-dimensional (3D) hand gesture taxonomy that systematically organizes the users' intentions behind deriving a certain gesture. Background: With advances in gesture recognition technology, various researchers have focused on deriving intuitive gestures for commands from users. In most of the previous studies, the users' reasons for deriving a certain gesture for a command were only used as a reference for grouping various gestures. Method: A total of eleven studies which categorized gestures accompanied by speech were investigated. In addition, a case study with thirty participants was conducted to understand in detail the gesture-features derived from the users. Results: Through the literature review, a total of nine gesture-features were extracted. After the case study, the nine gesture-features were narrowed down to seven. Conclusion: A three-dimensional hand gesture taxonomy comprising seven gesture-features was developed. Application: The three-dimensional hand gesture taxonomy might be used as a checklist for understanding the users' reasons.

Vision-based hand Gesture Detection and Tracking System (비전 기반의 손동작 검출 및 추적 시스템)

  • Park, Ho-Sik;Bae, Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.12C / pp.1175-1180 / 2005
  • We present a vision-based hand gesture detection and tracking system. Most conventional hand gesture recognition systems rely on simple methods for hand detection, such as background subtraction under assumed static observation conditions, and those methods are not robust against camera motion, illumination changes, and so on. Therefore, we propose a statistical method that detects and recognizes hand regions in images using geometrical structures. In addition, our hand tracking system employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. In our experiments, the proposed method achieved a recognition rate of 99.28%, an improvement of 3.91% over the conventional appearance-based method.
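
The abstract only outlines the statistical detection method, so no attempt is made to reproduce it here; the snippet below merely illustrates the conventional background-subtraction baseline that the paper argues against, using OpenCV's MOG2 subtractor as one common choice (an assumption for illustration, not the authors' setup).

```python
# Sketch of the conventional background-subtraction baseline mentioned in
# the abstract (not the paper's statistical method). MOG2 is one common
# OpenCV subtractor; camera index and parameters are illustrative.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

cap = cv2.VideoCapture(0)               # assumes a webcam with a mostly static view
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)   # foreground mask: moving hand candidates
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```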

Morphological Hand-Gesture Recognition Algorithm (형태론적 손짓 인식 알고리즘)

  • Choi, Jong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.8 / pp.1725-1731 / 2004
  • The use of gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. The most important issues in gesture recognition are the simplification of the algorithm and the reduction of processing time. Mathematical morphology, which is based on geometrical set theory, is well suited to this kind of processing. A key idea of the algorithm proposed in this paper is to apply morphological shape decomposition. The primitive elements extracted from a hand gesture contain very important information on the directivity of the gesture. Based on this characteristic, we propose a morphological gesture recognition algorithm that uses feature vectors calculated from the lines connecting the center point of the main primitive element with those of the sub-primitive elements. Through experiments, we demonstrate the efficiency of the proposed algorithm. Coupling natural interactions such as hand gestures with an appropriately designed interface is a valuable and powerful component in building TV channel navigation and video content browsing systems.
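
The abstract describes direction features taken from the lines between the center of a main primitive element and the centers of sub-primitive elements; the following is a minimal sketch of that idea, where the centers are assumed to have been obtained already by morphological shape decomposition (the decomposition step itself is not reproduced here).

```python
# Sketch of direction features from lines connecting a main primitive's
# center with sub-primitive centers, as described in the abstract.
# The centers themselves are assumed to come from a prior morphological
# shape decomposition step, which is not reproduced here.
import numpy as np

def direction_features(main_center, sub_centers):
    """Return the angle (radians) and length of each line from the
    main primitive center to a sub-primitive center."""
    main = np.asarray(main_center, dtype=float)
    feats = []
    for sub in sub_centers:
        d = np.asarray(sub, dtype=float) - main
        angle = np.arctan2(d[1], d[0])      # directivity of the gesture
        length = np.hypot(d[0], d[1])
        feats.append((angle, length))
    return feats

# Example with hypothetical centers (pixel coordinates):
print(direction_features((100, 100), [(140, 100), (100, 60)]))
```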

Human Gesture Recognition Technology Based on User Experience for Multimedia Contents Control (멀티미디어 콘텐츠 제어를 위한 사용자 경험 기반 동작 인식 기술)

  • Kim, Yun-Sik;Park, Sang-Yun;Ok, Soo-Yol;Lee, Suk-Hwan;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.15 no.10 / pp.1196-1204 / 2012
  • In this paper, a series of algorithms is proposed for controlling different kinds of multimedia contents and realizing interaction between human and computer with a single input device. Human gesture recognition based on a natural user interface (NUI) is presented first. Since the image information obtained from the camera is not well suited for further processing, we transform it to the YCbCr color space, and a morphological processing algorithm is then used to remove noise. Boundary energy and depth information are extracted for hand detection. Once the hand is detected, a PCA algorithm is used to recognize the hand posture, and a difference-image and moment method is used to detect the hand centroid and extract the trajectory of the hand movement. Eight direction codes are defined to quantize the gesture trajectory into symbol values, and an HMM algorithm is then used for hand gesture recognition based on the resulting symbol sequence. With this series of methods, multimedia contents can be controlled through human gesture recognition. In extensive experiments, the proposed algorithms show satisfying performance: the hand detection rate reaches 94.25%, the gesture recognition rate exceeds 92.6%, the hand posture recognition rate reaches 85.86%, and the face detection rate reaches 89.58%. According to these experimental results, many kinds of multimedia contents on a computer, such as a video player, MP3 player, and e-book, can be controlled effectively.
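
One concrete piece of the pipeline above is the quantization of the hand trajectory into eight direction codes before HMM recognition; the sketch below shows one straightforward way to do that, with the particular code numbering chosen arbitrarily here rather than taken from the paper.

```python
# Sketch of 8-direction chain-code quantization of a hand trajectory,
# as used before HMM recognition in the pipeline above. The particular
# numbering of the eight codes is an arbitrary choice for illustration.
import numpy as np

def quantize_trajectory(points):
    """points: list of (x, y) hand-centroid positions over time.
    Returns a list of direction codes 0..7 (0 = right, counted CCW)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        angle = np.arctan2(y1 - y0, x1 - x0)           # -pi .. pi
        code = int(np.round(angle / (np.pi / 4))) % 8  # nearest of 8 sectors
        codes.append(code)
    return codes

# Example: roughly rightward then upward motion (image y grows downward,
# so "up" appears as negative dy).
traj = [(0, 0), (10, 1), (20, 0), (21, -10), (22, -20)]
print(quantize_trajectory(traj))   # [0, 0, 6, 6] with this numbering
```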

A Novel Door Security System using Hand Gesture Recognition (손동작 인식을 이용한 출입 보안 시스템)

  • Cheoi, Kyungjoo;Han, Juchan
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1320-1328 / 2016
  • In this paper, we propose a novel security system using hand gesture recognition. The proposed system does not use a numeric password; instead, it creates a unique yet simple pattern from the user's hand movement. Because individuals differ in the range, speed, direction, and size of their hand movements while drawing a pattern, the system is able to accurately recognize only the authorized user. To evaluate the performance of the system, various patterns were tested, and the tests showed satisfying results.
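
The abstract does not say how a drawn pattern is compared with the enrolled one, so the sketch below only illustrates one generic possibility: resampling both trajectories to a fixed length, normalizing translation and scale, and thresholding the mean point-to-point distance. The resampling scheme, normalization, and threshold are all assumptions, not the authors' method.

```python
# Generic sketch of comparing a drawn hand-movement pattern against an
# enrolled one: resample, normalize translation/scale, threshold the mean
# distance. This is NOT the paper's method; all choices are illustrative.
import numpy as np

def resample(traj, n=32):
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, i]) for i in range(2)], axis=1)

def normalize(traj):
    traj = traj - traj.mean(axis=0)              # remove translation
    scale = np.linalg.norm(traj, axis=1).max()
    return traj / scale if scale > 0 else traj   # remove overall size

def matches(enrolled, attempt, threshold=0.15):
    a = normalize(resample(enrolled))
    b = normalize(resample(attempt))
    return float(np.mean(np.linalg.norm(a - b, axis=1))) < threshold
```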

Design and Implementation of Hand Gesture Recognizer Based on Artificial Neural Network (인공신경망 기반 손동작 인식기의 설계 및 구현)

  • Kim, Minwoo;Jeong, Woojae;Cho, Jaechan;Jung, Yunho
    • Journal of Advanced Navigation Technology / v.22 no.6 / pp.675-680 / 2018
  • In this paper, we propose a hand gesture recognizer using a restricted Coulomb energy (RCE) neural network, and present hardware implementation results for real-time learning and recognition. Since the RCE-NN has a flexible network architecture and a low-complexity real-time learning process, it is suitable for hand gesture recognition applications. A 3D number dataset was created using an FPGA-based test platform, and the designed hand gesture recognizer showed 98.8% recognition accuracy on this dataset. The proposed hand gesture recognizer was implemented on an Intel-Altera Cyclone IV FPGA, and it was confirmed that it can be realized with 26,702 logic elements and 258 Kbit of memory. In addition, real-time learning and recognition were verified at an operating frequency of 70 MHz.
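
As background on the classifier, the sketch below shows the usual software formulation of RCE learning: prototypes carry a class label and an influence radius, a new prototype is committed when a training sample is not covered by its own class, and the radii of wrong-class prototypes that cover the sample are shrunk. This is a generic floating-point version for illustration, not the paper's fixed-point FPGA design, and the radius parameters are assumptions.

```python
# Generic software sketch of restricted Coulomb energy (RCE) learning:
# prototypes with labels and shrinking influence radii. This is not the
# paper's fixed-point FPGA implementation; rmax/rmin are illustrative.
import numpy as np

class RCENetwork:
    def __init__(self, rmax=1.0, rmin=1e-3):
        self.centers, self.radii, self.labels = [], [], []
        self.rmax, self.rmin = rmax, rmin

    def train(self, x, y):
        x = np.asarray(x, dtype=float)
        covered = False
        for i, (c, r, lab) in enumerate(zip(self.centers, self.radii, self.labels)):
            d = np.linalg.norm(x - c)
            if d < r:
                if lab == y:
                    covered = True
                else:
                    # Shrink a wrong-class prototype that covers this sample.
                    self.radii[i] = max(d, self.rmin)
        if not covered:
            # Commit a new prototype for this class.
            self.centers.append(x)
            self.radii.append(self.rmax)
            self.labels.append(y)

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        best, best_d = None, np.inf
        for c, r, lab in zip(self.centers, self.radii, self.labels):
            d = np.linalg.norm(x - c)
            if d < r and d < best_d:
                best, best_d = lab, d
        return best   # None means the sample fell outside all prototypes
```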

Hand Gesture Recognition Using HMM(Hidden Markov Model) (HMM(Hidden Markov Model)을 이용한 핸드 제스처인식)

  • Ha, Jeong-Yo;Lee, Min-Ho;Choi, Hyung-Il
    • Journal of Digital Contents Society / v.10 no.2 / pp.291-298 / 2009
  • In this paper we propose a vision-based real-time hand gesture recognition method. To extract skin color, we convert the RGB color space to the YCbCr color space and use the CbCr components for the final extraction. To find the center of the extracted hand region we apply a practical center-point extraction algorithm. We use a Kalman filter to track the hand region and an HMM (Hidden Markov Model) algorithm, trained on six types of hand gesture images, to recognize the gesture. We demonstrate the effectiveness of our algorithm through experiments.
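
The tracking step can be illustrated with a standard constant-velocity Kalman filter over the hand center, as in the OpenCV sketch below; the state layout, noise values, and the constant-velocity model itself are illustrative choices, since the abstract does not specify the filter design.

```python
# Sketch of tracking the hand center with a constant-velocity Kalman filter
# (OpenCV). State = [x, y, vx, vy]; measurement = detected centroid (x, y).
# Noise settings and model choice are illustrative, not the paper's.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track(measured_center):
    """measured_center: (x, y) from skin-color extraction, or None if lost."""
    prediction = kf.predict()
    if measured_center is not None:
        z = np.array([[measured_center[0]], [measured_center[1]]], dtype=np.float32)
        kf.correct(z)
    return float(prediction[0, 0]), float(prediction[1, 0])
```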

Automatic hand gesture area extraction and recognition technique using FMCW radar based point cloud and LSTM (FMCW 레이다 기반의 포인트 클라우드와 LSTM을 이용한 자동 핸드 제스처 영역 추출 및 인식 기법)

  • Ra, Seung-Tak;Lee, Seung-Ho
    • Journal of IKEEE / v.27 no.4 / pp.486-493 / 2023
  • In this paper, we propose an automatic hand gesture area extraction and recognition technique using an FMCW radar-based point cloud and an LSTM. The proposed technique offers the following points of originality compared to existing methods. First, unlike methods that use 2D images such as range-Doppler maps as input vectors, time-series point cloud input vectors are intuitive input data that capture, in coordinate form, the movement occurring in front of the radar over time. Second, because the input vector is small, the deep learning model used for recognition can also be designed to be lightweight. The implementation process of the proposed technique is as follows. Using the distance, speed, and angle information measured by the FMCW radar, a point cloud containing x, y, z coordinates and Doppler velocity information is constructed. The hand gesture area is automatically extracted by identifying the start and end points of the gesture from the Doppler points obtained through the speed information. The time-series point cloud corresponding to the extracted gesture area is ultimately used for training and recognition with the LSTM deep learning model used in this paper. To evaluate the objective reliability of the proposed technique, experiments were performed comparing MAE against other deep learning models and recognition rate against existing techniques. In these experiments, the MAE of the time-series point cloud input vector + LSTM model was 0.262 and the recognition rate was 97.5%; since a lower MAE and a higher recognition rate indicate better results, this demonstrates the efficiency of the technique proposed in this paper.
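
To make the two main steps concrete, the sketch below segments a gesture by thresholding the mean absolute Doppler velocity per frame and then feeds the segmented frame sequence to a small LSTM classifier in PyTorch. The threshold, the fixed number of points per frame, and the network sizes are illustrative assumptions; the paper's actual preprocessing and model configuration are not reproduced.

```python
# Sketch: (1) extract the gesture segment from a radar point-cloud stream by
# thresholding per-frame Doppler activity, (2) classify the segment with an
# LSTM. Threshold, points-per-frame, and layer sizes are illustrative only.
import numpy as np
import torch
import torch.nn as nn

def extract_gesture_segment(frames, doppler_threshold=0.1):
    """frames: list of (N_points, 4) arrays with columns [x, y, z, doppler].
    Returns the contiguous sub-list between the first and last 'active' frame."""
    active = [np.abs(f[:, 3]).mean() > doppler_threshold if len(f) else False
              for f in frames]
    if not any(active):
        return []
    start = active.index(True)
    end = len(active) - 1 - active[::-1].index(True)
    return frames[start:end + 1]

class GestureLSTM(nn.Module):
    def __init__(self, points_per_frame=32, hidden=64, num_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(input_size=points_per_frame * 4,
                            hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):                 # x: (batch, time, points_per_frame * 4)
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])             # logits per gesture class

# Usage with dummy data: 20 frames, 32 points each, flattened per frame.
model = GestureLSTM()
seq = torch.randn(1, 20, 32 * 4)
print(model(seq).shape)                   # torch.Size([1, 8])
```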