• Title/Summary/Keyword: Gesture Recognition Systems

A Study on the VR Payment System using Hand Gesture Recognition (손 제스쳐 인식을 활용한 VR 결제 시스템 연구)

  • Kim, Kyoung Hwan;Lee, Won Hyung
    • Journal of the Korean Society for Computer Game / v.31 no.4 / pp.129-135 / 2018
  • Electronic signatures, QR codes, and bar codes are used in real-life payment systems, and research has begun on payment systems implemented in VR environments. This paper proposes a VR electronic signature system that uses hand gesture recognition to bring an existing payment system into a VR environment. In a VR system, the user cannot type on a keyboard or click a mouse, so there are several ways a payment system could be built around a VR controller. Electronic signatures using hand gesture recognition are one of them, and hand gesture recognition methods can be classified into warping methods, statistical methods, and template matching methods. In this paper, the payment system was built in VR using the $P algorithm, which belongs to the template matching family. To create the VR environment, we implemented a PayPal system in which an actual payment is made, using Unity3D and Vive equipment.
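
The $P algorithm mentioned above is a point-cloud template matcher: the candidate stroke and each gesture template are normalized and greedily matched point-to-point. A minimal pure-Python sketch (simplified to uniform subsampling and a single matching pass, unlike the full recognizer's arc-length resampling and multiple start points; all names and data are illustrative):

```python
import math

def normalize(points, n=32):
    """Subsample to n points, scale to a unit box, center on the centroid.
    (The full $P recognizer resamples by arc length; this is simplified.)"""
    step = max(1, len(points) // n)
    pts = points[::step][:n]
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(pts), sum(ys) / len(pts)
    return [((x - cx) / scale, (y - cy) / scale) for x, y in pts]

def cloud_distance(a, b):
    """Greedy point-cloud matching cost; earlier matches weigh more."""
    unmatched = list(range(len(b)))
    total = 0.0
    for i, p in enumerate(a):
        j = min(unmatched, key=lambda k: math.dist(p, b[k]))
        unmatched.remove(j)
        total += (1 - i / len(a)) * math.dist(p, b[j])
    return total

def recognize(candidate, templates):
    """Return the name of the best-matching gesture template."""
    cand = normalize(candidate)
    return min(templates,
               key=lambda name: cloud_distance(cand, normalize(templates[name])))
```

Because matching operates on unordered point clouds, the same code handles single-stroke and multi-stroke signatures without stroke-order bookkeeping.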

Application of Sensor Network Using Multivariate Gaussian Function to Hand Gesture Recognition (Multivariate Gaussian 함수를 이용한 센서 네트워크의 수화 인식에의 적용)

  • Kim Sung-Ho;Han Yun-Jong;Bogdana Diaconescu
    • Journal of Institute of Control, Robotics and Systems / v.11 no.12 / pp.991-995 / 2005
  • Sensor networks are the result of the convergence of important technologies such as wireless communication and micro-electromechanical systems. In recent years, sensor networks have found wide applicability in fields such as health, environment and habitat monitoring, and military applications. A very important step in many of these applications is the classification and recognition of patterns in data collected by sensors installed or deployed in different ways, but pattern classification and recognition are sometimes difficult to perform. A systematic approach to pattern classification based on modern learning techniques, such as multivariate Gaussian mixture models, can greatly simplify the process of developing and implementing real-time classification models. This paper proposes a new recognition system that is hierarchically composed of many sensor nodes having the capability of simple processing and wireless communication. The proposed system classifies sensed data using the multivariate Gaussian function. To verify its usefulness, the proposed system was applied to hand gesture recognition.
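
The Gaussian classification step described above can be sketched with one Gaussian per class. The paper uses multivariate Gaussian mixtures; this sketch simplifies to a single diagonal-covariance Gaussian per gesture class, and the sample data below is made up for illustration:

```python
import math

def fit_gaussian(samples):
    """Estimate a per-dimension mean and variance (diagonal covariance)."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    var = [max(sum((s[i] - mean[i]) ** 2 for s in samples) / n, 1e-6)
           for i in range(d)]
    return mean, var

def log_likelihood(x, mean, var):
    """Log density of x under an axis-aligned multivariate Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify(x, models):
    """Pick the gesture class whose Gaussian explains x best."""
    return max(models, key=lambda c: log_likelihood(x, *models[c]))
```

Each sensor node only needs to evaluate `log_likelihood`, which suits the simple processing capability the paper assumes for its hierarchical nodes.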

Vision-based hand Gesture Detection and Tracking System (비전 기반의 손동작 검출 및 추적 시스템)

  • Park Ho-Sik;Bae Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.12C / pp.1175-1180 / 2005
  • We present a vision-based hand gesture detection and tracking system. Most conventional hand gesture recognition systems rely on simple hand detection methods, such as background subtraction under assumed static observation conditions, which are not robust to camera motion, illumination changes, and so on. We therefore propose a statistical method that recognizes and detects hand regions in images using their geometrical structure. Our hand tracking system also employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance system scalability. In our experiments, the proposed method achieves a recognition rate of 99.28%, an improvement of 3.91% over the conventional appearance-based method.
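
The background-subtraction baseline the authors contrast against fits in a few lines, which also makes its fragility visible: a global illumination change shifts every pixel past the threshold at once. A grayscale sketch with frames as plain nested lists (all values illustrative):

```python
def subtract_background(frame, background, threshold=30):
    """Mark pixels whose absolute difference from a static background
    model exceeds the threshold (1 = candidate foreground/hand pixel)."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

Under static conditions this isolates the hand cleanly; brighten the whole scene and every pixel is falsely flagged, which is exactly the failure mode the statistical method above is designed to avoid.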

On-line Korean Sign Language (KSL) Recognition using Fuzzy Min-Max Neural Network and Feature Analysis

  • Bien, Zeungnam;Kim, Jong-Sung
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1995.10b / pp.85-91 / 1995
  • This paper presents a system which recognizes Korean Sign Language (KSL) and translates it into normal Korean speech. A sign language is a method of communication for the deaf in which gestures, especially of the hands and fingers, carry meaning. Since human hands and fingers are not identical in physical dimension, the same gesture produced by two signers may not yield the same numerical values when measured through electronic sensors. In this paper, we propose a dynamic gesture recognition method based on feature analysis, for efficient classification of hand motions, and on a fuzzy min-max neural network for on-line pattern recognition.
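
The fuzzy min-max classifier used above stores each gesture class as hyperboxes and scores inputs with a graded membership function. A sketch of the two core operations, following Simpson's formulation (the sensitivity parameter gamma and the box values below are illustrative):

```python
def membership(x, box, gamma=4.0):
    """Fuzzy membership of point x in hyperbox (v, w): 1 inside the box,
    decaying linearly with distance outside it."""
    v, w = box
    ramp = lambda z: max(0.0, min(1.0, z * gamma))
    return sum(1.0 - ramp(xi - wi) - ramp(vi - xi)
               for xi, vi, wi in zip(x, v, w)) / len(x)

def expand(box, x):
    """Grow the hyperbox just enough to contain a new training point."""
    v, w = box
    return ([min(vi, xi) for vi, xi in zip(v, x)],
            [max(wi, xi) for wi, xi in zip(w, x)])
```

Because training is a single expansion pass per point, the network can learn on-line, which is why it suits the paper's real-time sensor-glove setting; inter-signer variation is absorbed by the box extents rather than by exact value matching.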

Presentation Control System using Gesture Recognition and Sensor (제스처 인식과 센서를 이용한 프레젠테이션 제어 시스템)

  • Chang, Moon-Soo;Kwak, Sun-Dong;Kang, Sun-Mee
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.481-486 / 2011
  • Recently, most presentations are given on a screen using a computer. This paper suggests that the computer can be controlled by gesture recognition, without the help of any person or tool. If only image information is used, a high-resolution camera is needed to capture the images, along with a computer that can process them. This paper instead presents a solution in which a low-resolution camera can be used on stage: an ultrasonic sensor traces the presenter's location, and the low-resolution camera captures only the necessary limited area. A gesture is defined by the number of extended fingers and the hand position, which are recognized by erosion/dilation and subtraction algorithms. In comparative tests against an image-only system, the proposed system improved accuracy by 13%, and the gesture recognition tests show a 98% success rate.
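
The erosion/dilation step used above for finger counting can be sketched on binary images as nested lists; opening (erode, then dilate) removes isolated noise pixels while preserving the hand blob. A minimal sketch assuming a 3x3 structuring element:

```python
def erode(img):
    """Binary erosion: a pixel survives only if its entire in-bounds
    3x3 neighbourhood is set."""
    h, w = len(img), len(img[0])
    return [[1 if all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(w)]
            for y in range(h)]

def dilate(img):
    """Binary dilation: a pixel is set if any 3x3 neighbour is set."""
    h, w = len(img), len(img[0])
    return [[1 if any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(w)]
            for y in range(h)]

def open_image(img):
    """Erosion followed by dilation: drops speckle noise, keeps blobs."""
    return dilate(erode(img))
```

On the cropped low-resolution frames the paper works with, this is cheap enough to run per frame before counting the remaining finger blobs.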

Hybrid HMM for Transitional Gesture Classification in Thai Sign Language Translation

  • Jaruwanawat, Arunee;Chotikakamthorn, Nopporn;Werapan, Worawit
    • Institute of Control, Robotics and Systems Conference Proceedings / 2004.08a / pp.1106-1110 / 2004
  • A human sign language is generally composed of both static and dynamic gestures. Each gesture is represented by a hand shape, its position, and hand movement (for a dynamic gesture). One of the problems in automated sign language translation is segmenting out hand movement that is part of a transitional movement from one hand gesture to another. Such a transitional gesture conveys no meaning, but serves as a connecting period between two consecutive gestures. Based on the observation that many dynamic gestures in the Thai sign language dictionary are quasi-periodic in nature, a method was developed to differentiate between a (meaningful) dynamic gesture and a transitional movement. However, some meaningful dynamic gestures are non-periodic, and those cannot be distinguished from transitional movements using signal quasi-periodicity alone. This paper proposes a hybrid method combining the periodicity-based gesture segmentation method with an HMM-based gesture classifier. The HMM classifier is used to detect dynamic signs of a non-periodic nature; combined with the periodicity-based segmentation method, this hybrid scheme can identify segments of a transitional movement. In addition, because the quasi-periodic nature of many dynamic sign gestures is exploited, the dimensionality of the HMM part of the proposed method is significantly reduced, yielding computational savings compared with a standard HMM-based method. The method's recognition performance is reported through experiments with real measurements.
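
An HMM-based gesture classifier of the kind combined into the hybrid scheme above scores a feature sequence against each gesture model with the forward algorithm and picks the most likely one. A discrete-observation sketch (the two-state toy models and symbols below are purely illustrative):

```python
import math

def forward_log_prob(obs, start, trans, emit):
    """Log-probability of an observation sequence under a discrete HMM.
    start[s], trans[s][t], emit[s][o] are plain nested lists."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans[s][t] for s in range(len(alpha)))
                 * emit[t][o] for t in range(len(alpha))]
    return math.log(sum(alpha))

def classify(obs, models):
    """Pick the gesture whose HMM gives the sequence the highest score."""
    return max(models, key=lambda g: forward_log_prob(obs, *models[g]))
```

In the paper's setting the HMM only has to cover the non-periodic signs, which is what shrinks the model dimensionality relative to a standard all-HMM recognizer.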

Emotion Recognition Method for Driver Services

  • Kim, Ho-Duck;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.7 no.4 / pp.256-261 / 2007
  • Electroencephalography (EEG) has been used in psychology for many years to record the activity of the human brain. As technology has developed, the neural basis of the functional areas that process emotion has gradually been revealed, so we measure the fundamental areas of the human brain that control emotion using EEG. Hand gestures such as shaking and head gestures such as nodding are often used as body language for communication, and their recognition is important because they form a useful communication medium between humans and computers; gesture recognition is usually studied with computer vision methods. Much existing research uses either EEG signals or gestures alone for emotion recognition. In this paper, we use EEG signals and gestures together for human emotion recognition, and we select driver emotion as the specific target. The experimental results show that using both EEG signals and gestures yields higher recognition rates than using either alone. For both EEG signals and gestures, features are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.

Robot User Control System using Hand Gesture Recognizer (수신호 인식기를 이용한 로봇 사용자 제어 시스템)

  • Shon, Su-Won;Beh, Joung-Hoon;Yang, Cheol-Jong;Wang, Han;Ko, Han-Seok
    • Journal of Institute of Control, Robotics and Systems / v.17 no.4 / pp.368-374 / 2011
  • This paper proposes a human interface for robot control using a hidden Markov model (HMM) based hand signal recognizer. The command-receiving humanoid robot sends webcam images to a client computer, which extracts motion descriptors of the commanding human's hand. Upon feature acquisition, the hand signal recognizer carries out the recognition procedure, and the result is sent back to the robot for responsive actions. System performance is evaluated by measuring recognition on a '48 hand signal set' created randomly from a fundamental hand motion set. For isolated motion recognition, the '48 hand signal set' shows a 97.07% recognition rate, while the 'baseline hand signal set' shows 92.4%; this validates that the proposed hand signal recognizer is highly discriminative. For connected motions of the '48 hand signal set', it shows a 97.37% recognition rate. The experiments demonstrate that the proposed system is promising for real-world human-robot interface applications.

Dynamic Hand Gesture Recognition Using CNN Model and FMM Neural Networks (CNN 모델과 FMM 신경망을 이용한 동적 수신호 인식 기법)

  • Kim, Ho-Joon
    • Journal of Intelligence and Information Systems / v.16 no.2 / pp.95-108 / 2010
  • In this paper, we present a hybrid neural network model for dynamic hand gesture recognition. The model consists of two modules: a feature extraction module and a pattern classification module. We first propose a modified CNN (Convolutional Neural Network) pattern recognition model for the feature extraction module, and then introduce a weighted fuzzy min-max (WFMM) neural network for the pattern classification module. The data representation proposed in this research is a spatiotemporal template based on the motion information of the target object. To minimize the influence of spatial and temporal variation of the feature points, we extend the receptive field of the CNN model to a three-dimensional structure. We discuss the learning capability of the WFMM neural network, in which a weight concept is added to represent the frequency factor in the training pattern set; the model can overcome the performance degradation that may be caused by the hyperbox contraction process of conventional FMM neural networks. The validity of the proposed models is discussed on the basis of experimental results for human action recognition and for dynamic hand gesture recognition for remote control of electric home appliances.
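
A spatiotemporal template of the kind described above encodes where and how recently motion occurred. A motion-history-image-style update is one common way to build such a template — an assumption here, since the paper's exact construction may differ:

```python
def update_template(history, motion_mask, tau=5):
    """Motion-history-style update: stamp moving pixels with tau and
    decay static pixels by one per frame, so recent motion is bright
    and older motion fades to zero."""
    return [[tau if m else max(h - 1, 0)
             for h, m in zip(hrow, mrow)]
            for hrow, mrow in zip(history, motion_mask)]
```

Stacking such templates over time yields exactly the kind of 2-D-plus-time input that motivates extending the CNN receptive field to three dimensions.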

Gesture Interface for Controlling Intelligent Humanoid Robot (지능형 로봇 제어를 위한 제스처 인터페이스)

  • Bae Ki Tae;Kim Man Jin;Lee Chil Woo;Oh Jae Yong
    • Journal of Korea Multimedia Society / v.8 no.10 / pp.1337-1346 / 2005
  • In this paper, we describe an algorithm that can automatically recognize human gestures for human-robot interaction. Many earlier systems for recognizing human gestures worked only under restrictive conditions. To eliminate these restrictions, we propose APM, a method that can represent 3D and 2D gesture information simultaneously and is less sensitive to noise and appearance characteristics. First, feature vectors are extracted using APM. The next step constructs a gesture space by analyzing the statistical information of the training images with PCA. Input images are then compared to the model and individually symbolized to a portion of the model space. In the last step, the sequences of symbolized images are recognized with an HMM as one of the model gestures. The experimental results indicate that the proposed algorithm is efficient for gesture recognition and is easy to apply to humanoid robots or intelligent interface systems.
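
The PCA step that builds the gesture space above projects each image's feature vector onto the leading eigenvectors of the training covariance. A pure-Python sketch of the first component via power iteration (the toy 2-D data in the test is illustrative; real inputs would be flattened APM feature vectors):

```python
def top_principal_component(samples, iters=100):
    """First principal component of the samples via power iteration
    on the covariance matrix (pure-Python PCA sketch)."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    centered = [[s[i] - mean[i] for i in range(d)] for s in samples]
    cov = [[sum(r[i] * r[j] for r in centered) / n
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        v = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return mean, v

def project(x, mean, v):
    """Project a feature vector onto the 1-D gesture space."""
    return sum((xi - m) * vi for xi, m, vi in zip(x, mean, v))
```

Each input image would be projected this way (onto several components in practice), then quantized to a symbol for the HMM stage.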