• Title/Summary/Keyword: Gesture Recognition.

558 search results

Smartphone Accelerometer-Based Gesture Recognition and its Robotic Application (스마트폰 가속도 센서 기반의 제스처 인식과 로봇 응용)

  • Nam, Sang-Ha;Kim, Joo-Hee;Heo, Se-Kyeong;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering / v.2 no.6 / pp.395-402 / 2013
  • We propose an accelerometer-based gesture recognition method for smartphone users. In our method, similarities between new time-series accelerometer data and each gesture exemplar are computed with the DTW algorithm, and the best-matching gesture is then determined with the k-NN algorithm. To investigate the performance of our method, we implemented a gesture recognition program running on an Android smartphone and a gesture-based teleoperated robot system. Through a set of user-mixed and user-independent experiments, we show that the proposed method and implementation offer high performance and scalability.
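The DTW-plus-k-NN pipeline described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sequence shape (T, 3) for x/y/z accelerometer samples, the exemplar format, and the function names are all assumptions.

```python
import numpy as np
from collections import Counter

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two accelerometer
    sequences a, b of shape (T, 3) (one x/y/z sample per row)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(query, exemplars, k=3):
    """exemplars: list of (sequence, label) pairs. Returns the majority
    label among the k exemplars nearest to the query under DTW."""
    dists = sorted((dtw_distance(query, seq), lbl) for seq, lbl in exemplars)
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]
```

Because DTW aligns sequences elastically in time, the query gesture may be performed faster or slower than the stored exemplars and still match.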

Gesture Recognition and Motion Evaluation Using Appearance Information of Pose in Parametric Gesture Space (파라메트릭 제스처 공간에서 포즈의 외관 정보를 이용한 제스처 인식과 동작 평가)

  • Lee, Chil-Woo;Lee, Yong-Jae
    • Journal of Korea Multimedia Society / v.7 no.8 / pp.1035-1045 / 2004
  • In this paper, we describe a method that recognizes gestures and evaluates their degree from sequential gesture images by using a Gesture Feature Space. Previous popular methods based on HMMs and neural networks can classify gestures into categories but have difficulty recognizing the degree of a gesture. Our proposed method recognizes not only the posture but also degree information of the gesture, such as speed and magnitude, by calculating distances among the position vectors obtained by projecting input and model images into a parametric eigenspace. The method, applicable to areas such as intelligent interface systems and surveillance systems, is a simple and robust recognition algorithm.
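The parametric-eigenspace idea above can be sketched with plain PCA: flattened pose images are projected onto a few principal axes, and recognition compares distances between the resulting position vectors. This is a hedged sketch under that reading of the abstract; the component count, data shapes, and function names are assumptions, not the paper's specification.

```python
import numpy as np

def build_eigenspace(model_images, n_components=3):
    """model_images: (N, D) array of flattened pose images.
    Returns the data mean and the top principal axes (D, n_components),
    computed via SVD of the centred data."""
    X = np.asarray(model_images, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components].T

def project(image, mean, axes):
    """Position vector of one image in the eigenspace."""
    return (np.asarray(image, dtype=float) - mean) @ axes

def nearest_model(query, models, mean, axes):
    """models: {label: flattened image}. Picks the model whose position
    vector lies closest to the query's in the eigenspace."""
    q = project(query, mean, axes)
    return min(models,
               key=lambda k: np.linalg.norm(q - project(models[k], mean, axes)))
```

In the paper's formulation the trajectory through this space (not just the nearest point) carries the degree information such as speed and magnitude.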


The Study on Gesture Recognition for Fighting Games based on Kinect Sensor (키넥트 센서 기반 격투액션 게임을 위한 제스처 인식에 관한 연구)

  • Kim, Jong-Min;Kim, Eun-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.552-555 / 2018
  • This study developed a gesture recognition method using the Kinect sensor and proposed a fighting-action control interface. To extract the pattern features of a gesture, it uses body proportions relative to the shoulders rather than absolute positions: even when the same gesture is made, the joint coordinates captured by the Kinect sensor can differ depending on the length and direction of the arm. The study therefore applied principal component analysis for gesture modeling and analysis, which reduces the effect of data errors and provides a dimensionality-reduction effect. In addition, a modified matching algorithm is proposed to reduce the motion restrictions of the gesture recognition system.
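The shoulder-relative normalization plus PCA step can be sketched as below. This is a minimal interpretation of the abstract, assuming joints arrive as (J, 3) Kinect coordinates; the joint indices, component count, and function names are illustrative, not the paper's.

```python
import numpy as np

def body_relative_features(joints, l_shoulder=0, r_shoulder=1):
    """joints: (J, 3) joint positions for one frame. Expresses each
    joint relative to the shoulder centre and divides by the shoulder
    width, so the feature is invariant to where the user stands and to
    body size (the body-proportion normalization in the abstract)."""
    J = np.asarray(joints, dtype=float)
    centre = (J[l_shoulder] + J[r_shoulder]) / 2.0
    width = np.linalg.norm(J[l_shoulder] - J[r_shoulder])
    return ((J - centre) / width).ravel()

def pca_reduce(features, n_components=4):
    """features: (N, D) stack of per-frame feature vectors. Projects
    onto the top principal components, suppressing sensor noise and
    shrinking the feature dimension."""
    X = np.asarray(features, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:n_components].T
```

The normalization makes the same pose produce the same feature vector for a tall user standing left of the sensor and a short user standing right of it.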


Study on Real-time Gesture Recognition based on Convolutional Neural Network for Game Applications (게임 어플리케이션을 위한 컨볼루션 신경망 기반의 실시간 제스처 인식 연구)

  • Chae, Ji Hun;Lim, Jong Heon;Kim, Hae Sung;Lee, Joon Jae
    • Journal of Korea Multimedia Society / v.20 no.5 / pp.835-843 / 2017
  • Humans have long used gestures to communicate with each other, and communication between people and computers is no different: we command a computer with gestures, a keyboard, a mouse, and other devices. Gestures are especially useful in environments such as gaming and VR (Virtual Reality), which demand high-specification hardware and fast rendering. In this paper, we propose a gesture recognition method based on a CNN model for gaming and other real-time applications. Deep learning for gesture recognition is processed on a separate server, while preprocessing for data acquisition is done on a client PC. The experimental results show that the proposed method achieves higher accuracy than the conventional method in a game environment.

Avatar Control by using hand gesture recognition (Hand Gesture 인식을 이용한 아바타 제어)

  • 최우영;김소연;송백균
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2004.05b / pp.616-619 / 2004
  • As interest in virtual reality increases, the importance of gesture-based HCI (Human-Computer Interaction) also grows. In previous gesture recognition work, however, the requirement for expensive peripheral equipment limited users' access. In this paper we show that by using a low-cost USB PC camera, users gain more flexibility at lower cost, so the approach can be adopted much more widely.


An Implementation and Performance Analysis of an Emotion Messenger Based on Dynamic Gesture Recognition Using a WebCAM (웹캠을 이용한 동적 제스쳐 인식 기반의 감성 메신저 구현 및 성능 분석)

  • Lee, Won-Joo
    • Journal of the Korea Society of Computer and Information / v.15 no.7 / pp.75-81 / 2010
  • In this paper, we propose an emotion messenger which recognizes a user's face or hand gestures using a webcam, converts the recognized emotions (joy, anger, grief, happiness) into flash-cons, and transmits them to the counterpart. The messenger consists of a face recognition module, a hand gesture recognition module, and a messenger module. The face recognition module converts the eye and mouth regions to binary images and recognizes winking, kissing, and yawning from shape changes of the eyes and mouth. The hand gesture recognition module recognizes gawi-bawi-bo (rock-paper-scissors) from the number of fingers detected. The messenger module converts the wink, kiss, and yawn recognized by the face recognition module, and the gawi-bawi-bo recognized by the hand gesture recognition module, into flash-cons and transmits them to the counterpart. Through simulation, we confirmed that the CPU share of the emotion messenger is minimized; with respect to recognition rate, the hand gesture recognition module performs better than the face recognition module.

The Effect of Visual Feedback on One-hand Gesture Performance in Vision-based Gesture Recognition System

  • Kim, Jun-Ho;Lim, Ji-Hyoun;Moon, Sung-Hyun
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.551-556 / 2012
  • Objective: This study presents the effect of visual feedback on one-hand gesture performance in a vision-based gesture recognition system when people use gestures to control a screen device remotely. Background: Gesture interaction is receiving growing attention because it uses advanced sensor technology and allows users natural interaction through their own body motion. In generating motion, visual feedback has been considered a critical factor affecting speed and accuracy. Method: Three types of visual feedback (arrow, star, and animation) were selected and 20 gestures were listed; 12 participants performed each of the 20 gestures while given the 3 types of visual feedback in turn. Results: People made longer hand traces and took longer to make a gesture when given the arrow-shaped feedback than the star-shaped feedback; the animation-type feedback was most preferred. Conclusion: The type of visual feedback had a statistically significant effect on the length of the hand trace, the elapsed time, and the speed of motion in performing a gesture. Application: This study can be applied to any device that needs visual feedback for device control. Large feedback produces a shorter motion trace, in less time and at higher speed, than small feedback, so large visual feedback is recommended for situations requiring fast actions, and small visual feedback for situations requiring elaborate actions.

A Dynamic Hand Gesture Recognition System Incorporating Orientation-based Linear Extrapolation Predictor and Velocity-assisted Longest Common Subsequence Algorithm

  • Yuan, Min;Yao, Heng;Qin, Chuan;Tian, Ying
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4491-4509 / 2017
  • The present paper proposes a novel dynamic system for hand gesture recognition. The approach comprises three main steps: detection, tracking, and recognition. First, the gesture contour captured by a 2D camera is detected by combining the three-frame difference method and a skin-color elliptic boundary model. Then, the trajectory of the hand gesture is extracted via a gesture-tracking algorithm based on an occlusion-direction-oriented linear extrapolation predictor, where the gesture coordinate in the next frame is predicted from the current occlusion direction. Finally, to overcome interference from insignificant trajectory segments, the longest common subsequence (LCS) is employed with the aid of velocity information. To tackle the subgesture problem, i.e., that some gestures may also be part of others, the most probable gesture category is identified by comparing the relative LCS length of each gesture, i.e., the proportion between the LCS length and the total length of each template, rather than the raw LCS length. The gesture dataset for the system performance test contains digits ranging from 0 to 9, and experimental results demonstrate the robustness and effectiveness of the proposed approach.
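The relative-LCS scoring that the abstract uses against the subgesture problem can be sketched as follows. This assumes trajectories are quantized into symbol strings (e.g. direction codes); the symbols and function names are illustrative, and the velocity-assisted filtering is omitted for brevity.

```python
def lcs_length(a, b):
    """Longest common subsequence length between two symbol sequences
    (e.g. quantised direction codes of a hand trajectory)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def classify(trajectory, templates):
    """templates: {label: symbol sequence}. Scores each template by the
    *relative* LCS length (LCS / template length), so a long template
    matched only partially does not beat a short template matched fully."""
    return max(templates,
               key=lambda k: lcs_length(trajectory, templates[k]) / len(templates[k]))
```

With raw LCS length, a trajectory for "1" (a vertical stroke) would score the same against a "7" template that contains that stroke; dividing by template length restores the distinction.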

A study on hand gesture recognition using 3D hand feature (3차원 손 특징을 이용한 손 동작 인식에 관한 연구)

  • Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.4 / pp.674-679 / 2006
  • In this paper a gesture recognition system using 3D feature data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. The main novelty of the proposed system, with respect to other 3D gesture recognition techniques, is its capability for robust recognition of complex hand postures such as those encountered in sign language alphabets. This is achieved by explicitly employing 3D hand features. Moreover, the proposed approach does not rely on colour information, and guarantees robust segmentation of the hand under various illumination conditions and scene contents. Several novel 3D image analysis algorithms are presented covering the complete processing chain: 3D image acquisition, arm segmentation, hand-forearm segmentation, hand pose estimation, 3D feature extraction, and gesture classification. The proposed system is tested in an application scenario involving the recognition of sign-language postures.

Hand gesture recognition for player control

  • Shi, Lan Yan;Kim, Jin-Gyu;Yeom, Dong-Hae;Joo, Young-Hoon
    • Proceedings of the KIEE Conference / 2011.07a / pp.1908-1909 / 2011
  • Hand gesture recognition has been widely used in virtual reality and HCI (Human-Computer Interaction) systems, and is a challenging and interesting subject in the vision-based area. Existing approaches to vision-driven interactive user interfaces resort to technologies such as head tracking, face and facial expression recognition, eye tracking, and gesture recognition. The purpose of this paper is to combine a finite state machine (FSM) with a gesture recognition method in order to control Windows Media Player functions such as play/pause, next, previous, and volume up/down.
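The FSM-plus-gesture combination described above can be sketched as a transition table mapping (state, gesture) pairs to (next state, player action). The gesture labels and command bindings below are hypothetical; the paper does not specify its gesture set or state machine here.

```python
# Hypothetical gesture vocabulary and player bindings (assumed, not
# taken from the paper): each entry maps (state, gesture) to
# (next_state, action).
TRANSITIONS = {
    ("paused", "open_palm"): ("playing", "play"),
    ("playing", "open_palm"): ("paused", "pause"),
    ("playing", "swipe_right"): ("playing", "next"),
    ("playing", "swipe_left"): ("playing", "previous"),
    ("playing", "point_up"): ("playing", "volume_up"),
    ("playing", "point_down"): ("playing", "volume_down"),
}

class PlayerFSM:
    """Finite state machine turning recognised hand gestures into media
    player commands; (state, gesture) pairs without a transition are
    ignored, which filters out gestures invalid in the current state."""

    def __init__(self):
        self.state = "paused"

    def on_gesture(self, gesture):
        nxt = TRANSITIONS.get((self.state, gesture))
        if nxt is None:
            return None  # gesture not meaningful in this state
        self.state, action = nxt
        return action
```

Gating commands through the FSM is what keeps a stray swipe from skipping tracks while playback is paused: only gestures with a transition from the current state produce an action.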
