• Title/Summary/Keyword: Hand Gesture


Real-Time Recognition Method of Counting Fingers for Natural User Interface

  • Lee, Doyeob;Shin, Dongkyoo;Shin, Dongil
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.5
    • /
    • pp.2363-2374
    • /
    • 2016
  • Communication occurs through verbal elements, usually language, as well as non-verbal elements such as facial expressions, eye contact, and gestures. Among these non-verbal elements, gestures in particular are symbolic representations of physical, vocal, and emotional behaviors: they can be signals directed at a target or expressions of internal psychological processes, rather than simply movements of the body or hands. Gestures with these properties have therefore been the focus of much research on new interfaces in the NUI/NUX field. In this paper, we propose a method for detecting the hand region and recognizing the number of extended fingers, based on depth information and the geometric features of the hand, for application to an NUI/NUX. The hand region is detected using depth information provided by the Kinect system, and its contour is extracted with the Suzuki85 algorithm. Fingertips are then located as contour points whose distance to the center of the hand region is a local maximum among three consecutive contour points, and the number of fingers is obtained by counting these fingertips. The average recognition rate for the number of fingers is 98.6%, and the execution time of the algorithm is 0.065 ms. Although the method is fast and its complexity is low, it shows a higher recognition rate and faster recognition speed than other methods. As an application example, the paper describes a Secret Door that recognizes a password from the number of fingers a user holds up.
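The fingertip test described in the abstract above (a contour point whose distance to the hand center is a local maximum among three consecutive points) can be sketched in a few lines of Python. The contour input, center point, and the `min_ratio` threshold are illustrative assumptions, not the paper's exact parameters.

```python
import math

def count_fingers(contour, center, min_ratio=0.6):
    """Count fingertips: contour points whose distance to the hand center
    is a local maximum among three consecutive points and exceeds a
    fraction of the largest distance (hypothetical threshold)."""
    dists = [math.dist(p, center) for p in contour]
    n = len(dists)
    max_d = max(dists)
    fingers = 0
    for i in range(n):
        prev_d, cur_d, next_d = dists[i - 1], dists[i], dists[(i + 1) % n]
        if cur_d > prev_d and cur_d >= next_d and cur_d > min_ratio * max_d:
            fingers += 1
    return fingers
```

In practice the contour would come from the Suzuki85 border-following step and the center from the detected hand region; here any closed point sequence works.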

HMM-based Intent Recognition System using 3D Image Reconstruction Data (3차원 영상복원 데이터를 이용한 HMM 기반 의도인식 시스템)

  • Ko, Kwang-Enu;Park, Seung-Min;Kim, Jun-Yeup;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.2
    • /
    • pp.135-140
    • /
    • 2012
  • The mirror neuron system in the cerebrum supports imitative learning based on visual information. Even when part of an observed movement is hidden, the progression of neural activation in the mirror neuron system allows an observer to infer the intention behind the performance. The goal of this paper is to apply such imitative learning to a 3D vision-based intelligent system. In our previous research, we reconstructed 3D images acquired from a stereo camera using optical flow and an unscented Kalman filter; here, the input is a sequence of 3D images that includes partially hidden regions. We use a hidden Markov model (HMM) to recognize the intention behind a performance from the reconstruction of these hidden regions, since the HMM's dynamic inference over sequential input data is well suited to tasks such as hand gesture recognition under occlusion. Building on the object outline and feature extraction simulated in our previous research, we generate temporally continuous feature vectors, apply them to the HMM, and simulate hand gesture classification according to intention patterns. The classification results are obtained as posterior probabilities, and they demonstrate the accuracy of the proposed approach.
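As a minimal sketch of HMM-based classification as described above, the forward algorithm below scores an observation sequence under each gesture model and picks the most likely one. The two toy models, the discrete observation alphabet, and the equal-prior assumption are illustrative stand-ins, not the paper's trained parameters.

```python
def sequence_likelihood(obs, start, trans, emit):
    """Forward algorithm: P(obs | model) for a discrete-observation HMM."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)

def classify(obs, models):
    """Pick the gesture model with the highest likelihood (a stand-in for
    the paper's posterior comparison, assuming equal priors)."""
    return max(models, key=lambda name: sequence_likelihood(obs, *models[name]))
```

Each model is a `(start, trans, emit)` triple; with equal priors, comparing likelihoods is equivalent to comparing posteriors.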

Design and Evaluation of a Hand-held Device for Recognizing Mid-air Hand Gestures (공중 손동작 인식을 위한 핸드 헬드형 기기의 설계 및 평가)

  • Seo, Kyeongeun;Cho, Hyeonjoong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.2
    • /
    • pp.91-96
    • /
    • 2015
  • We propose AirPincher, a handheld pointing device that recognizes delicate mid-air hand gestures for controlling a remote display. AirPincher is designed to overcome the disadvantages of the two existing kinds of hand-gesture-aware techniques, glove-based and vision-based: glove-based techniques require the user to put on a glove every time, which is cumbersome, while the performance of vision-based techniques depends on the distance between the user and the remote display. AirPincher lets a user hold the device in one hand and generate several delicate finger gestures, which are captured at close range by several sensors embedded in the device. These features allow AirPincher to avoid the aforementioned disadvantages of the existing techniques. We experimentally determine an efficient size for the virtual input space and evaluate two types of pointing interfaces with AirPincher for a remote display. Our experiments suggest appropriate configurations for using the proposed device.

Gesture Recognition Using Zernike Moments Masked by Dual Ring (이중 링 마스크 저니키 모멘트를 이용한 손동작 인식)

  • Park, Jung-Su;Kim, Tae-Yong
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.10
    • /
    • pp.171-180
    • /
    • 2013
  • Generally, when Zernike moments are used for matching, the moment values are obtained by projecting the image information inside the circumscribed circle onto the Zernike basis functions. The problem is that discriminative power can be reduced because, owing to their particular characteristics, hand images contain a great deal of overlapping information. On the other hand, when distinguishing hand poses, information in specific areas of the image, excluding this overlapping information, can increase discriminative power. To address this, we design an R3 ring mask by combining the image obtained from an R2 ring mask, which weights the most discriminative information, with the image obtained from an R1 ring mask, which eliminates the overlapping information. The moments obtained with the R3 ring mask also reduce computation time by lowering the dimensionality through principal component analysis. To confirm the superiority of the suggested method, we conducted experiments comparing our method against another method on seven different hand poses.
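The ring-mask idea above, keeping only an annulus of the hand image before computing moments, can be illustrated with a small sketch. The nested-list image representation and the radius thresholds are assumptions for illustration; the paper's R1/R2/R3 masks additionally weight the retained ring rather than just cropping it.

```python
import math

def ring_mask(image, center, r1, r2):
    """Zero out pixels outside the annulus r1 <= d <= r2 around `center`,
    mimicking a ring mask applied before computing Zernike moments."""
    cy, cx = center
    return [[v if r1 <= math.hypot(y - cy, x - cx) <= r2 else 0
             for x, v in enumerate(row)]
            for y, row in enumerate(image)]
```

Projecting the masked image onto the Zernike basis would then use only the annulus, discarding the overlapping central region.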

Implementation of Real-time Recognition System for Continuous Korean Sign Language(KSL) mixed with Korean Manual Alphabet(KMA) (지문자를 포함한 연속된 한글 수화의 실시간 인식 시스템 구현)

  • Lee, Chan-Su;Kim, Jong-Sung;Park, Gyu-Tae;Jang, Won;Bien, Zeung-Nam
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.6
    • /
    • pp.76-87
    • /
    • 1998
  • This paper deals with a system that continuously recognizes dynamic hand gestures, Korean Sign Language (KSL), mixed with static hand gestures, the Korean Manual Alphabet (KMA). Recognizing continuous hand gestures is very difficult because there are no explicit tokens indicating the beginning and end of signs, and because each gesture is complex. In this paper, a state automaton is used to segment sequential signs into individual ones, and basic elements of KSL and KMA, consisting of 14 hand directions, 23 hand postures, and 14 hand orientations, are used to recognize complex gestures with expandability in mind. Using a pair of CyberGloves and a Polhemus sensor, the system recognizes 131 Korean signs and 31 KMA characters in real time, with recognition rates of 94.3% for KSL, excluding no-recognition cases, and 96.7% for KMA.
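A state automaton for segmenting a continuous sign stream, as described above, can be sketched as a two-state machine driven by hand speed: a sign starts when motion rises above one threshold and ends when it falls below another. The speed input and both thresholds are hypothetical stand-ins for the paper's actual segmentation features.

```python
def segment_gestures(speeds, start_th=0.5, end_th=0.2):
    """Two-state automaton over per-frame hand speeds: returns a list of
    (start_frame, end_frame) index pairs, one per segmented gesture."""
    segments, start, in_gesture = [], None, False
    for i, v in enumerate(speeds):
        if not in_gesture and v > start_th:
            in_gesture, start = True, i     # IDLE -> SIGNING
        elif in_gesture and v < end_th:
            segments.append((start, i))     # SIGNING -> IDLE
            in_gesture = False
    if in_gesture:                          # stream ended mid-sign
        segments.append((start, len(speeds)))
    return segments
```

Using separate start and end thresholds (hysteresis) avoids chattering when the speed hovers near a single boundary.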


MPEG-U-based Advanced User Interaction Interface Using Hand Posture Recognition

  • Han, Gukhee;Choi, Haechul
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.5 no.4
    • /
    • pp.267-273
    • /
    • 2016
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the human-computer interaction (HCI) field. This paper introduces a hand posture recognition method using a depth camera. Moreover, the method is incorporated into the Moving Picture Experts Group Rich Media User Interface (MPEG-U) Advanced User Interaction (AUI) Interface (MPEG-U part 2), which can provide a natural interface on a variety of devices. The proposed method initially detects the positions and lengths of all extended fingers, and then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when a user presents a gesture representing a pattern in the AUI data format specified in MPEG-U part 2. The AUI interface represents the user's hand posture in the compliant MPEG-U schema structure. Experimental results demonstrate the performance of the hand posture recognition system and verify that the AUI interface is compatible with the MPEG-U standard.

A Finger Counting Method for Gesture Recognition (제스처 인식을 위한 손가락 개수 인식 방법)

  • Lee, DoYeob;Shin, DongKyoo;Shin, DongIl
    • Journal of Internet Computing and Services
    • /
    • v.17 no.2
    • /
    • pp.29-37
    • /
    • 2016
  • Humans develop and maintain relationships through communication, which is largely divided into verbal and non-verbal communication. Verbal communication involves the use of language or characters, while non-verbal communication utilizes body language. In everyday conversation we use gestures together with language. Gestures belong to non-verbal communication and can convey an opinion through a variety of shapes and movements. For this reason, gestures are in the spotlight as a means of implementing an NUI/NUX in the fields of HCI and HRI. In this paper, using Kinect and the geometric features of the hand, we propose a method for detecting the hand area and recognizing the number of fingers. A Kinect depth image is used to detect the hand region, and the number of fingers is identified by comparing the distances between the outline and the central point of the hand. The average recognition rate of the proposed method for the number of fingers is 98.5%. The proposed method can help enhance human-computer interaction by increasing the expressive range of gestures.

Hand Region Segmentation and Tracking Based on Hue Image (Hue 영상을 기반한 손 영역 검출 및 추적)

  • 권화중;이준호
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.1003-1006
    • /
    • 1999
  • Hand segmentation and tracking are essential to the development of a hand gesture recognition system. This research features segmentation and tracking of hand regions based on the hue component of color. We propose a method that employs the HSI color model and segments and tracks hand regions using the hue component alone. To track the segmented hand regions, we apply a Kalman filter only to a region of interest represented by a rectangle. Initial experimental results show that the system accurately segments and tracks hand regions even though it uses only the hue component of color. The system yields a near-real-time throughput of 8 frames per second on a Pentium II 233 MHz PC.
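Hue-only segmentation as described above can be sketched with the standard library's RGB-to-HSV conversion, flagging pixels whose hue falls in a skin range while ignoring saturation and intensity. The particular hue interval used here is a hypothetical threshold, not the paper's calibrated values.

```python
import colorsys

def skin_mask(pixels, h_lo=0.0, h_hi=0.1):
    """Flag pixels whose hue lies in [h_lo, h_hi] (hue normalized to [0, 1)).
    Saturation and value are deliberately ignored, as in hue-only segmentation."""
    mask = []
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        mask.append(h_lo <= h <= h_hi)
    return mask
```

Because hue is independent of intensity, this kind of mask is more tolerant of lighting changes than thresholding raw RGB values, which is the motivation for working in HSI.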


SVM-Based EEG Signal for Hand Gesture Classification (서포트 벡터 머신 기반 손동작 뇌전도 구분에 대한 연구)

  • Hong, Seok-min;Min, Chang-gi;Oh, Ha-Ryoung;Seong, Yeong-Rak;Park, Jun-Seok
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.29 no.7
    • /
    • pp.508-514
    • /
    • 2018
  • An electroencephalogram (EEG) measures the electrical activity generated by interactions among brain cells during brain activity, and it can capture the brain activity caused by hand movement. In this study, a 16-channel EEG was used to measure the signals generated before and after hand movement. The measured data are classified with a supervised learning model, a support vector machine (SVM). To shorten the SVM's training time, we propose feature extraction and vector dimension reduction by filtering, which minimizes the loss of motion-related information while compressing the EEG data. The classification results showed an average accuracy of 72.7% in distinguishing the resting sitting position from hand movement at the frontal-lobe electrodes.
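The filtering-based dimension reduction mentioned above can be illustrated with a simple block-averaging sketch (a moving-average low-pass filter followed by downsampling), which shrinks each EEG epoch before it is fed to the SVM. The block size and this particular filter are assumptions; the abstract does not specify the exact filter used.

```python
def compress_epoch(samples, factor=4):
    """Reduce feature dimension by averaging non-overlapping blocks of
    `factor` samples: a crude low-pass-filter-and-downsample step."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]
```

Applied per channel, this cuts each feature vector's length by the chosen factor while preserving the slow signal components most related to movement.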

An Application of AdaBoost Learning Algorithm and Kalman Filter to Hand Detection and Tracking (AdaBoost 학습 알고리즘과 칼만 필터를 이용한 손 영역 탐지 및 추적)

  • Kim, Byeong-Man;Kim, Jun-Woo;Lee, Kwang-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.4 s.36
    • /
    • pp.47-56
    • /
    • 2005
  • With the development of wearable (ubiquitous) computers, traditional human-computer interfaces gradually become uncomfortable to use, which leads directly to the need for a new one. In this paper, we study a new interface in which a computer recognizes human gestures through a digital camera. Because camera-based hand gesture recognition is affected by the surrounding environment, such as lighting, the detector must be robust to such variations. Recently, Viola's detector, in which the AdaBoost learning algorithm is applied to Haar features computed from the integral image, has shown favorable results in face detection. We apply this method to hand-area detection and carry out comparative experiments against the classic skin-color-based method. Experimental results show that Viola's detector is more robust than skin-color detection in environments where degradation may occur due to surroundings such as lighting.
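The integral-image machinery behind the Viola-style detector described above can be sketched briefly: a summed-area table makes any rectangle sum, and hence any Haar feature, a matter of four lookups. The tiny two-rectangle feature below is illustrative, not one of the detector's learned features.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] is the sum of img over rows [0, y) and
    columns [0, x), with a zero border for easy corner arithmetic."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y): four lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar feature: left half minus right half."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)
```

AdaBoost then selects and weights thousands of such features to form the cascade; the constant-time feature evaluation is what makes the detector fast enough for real-time use.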
