• Title/Summary/Keyword: Motion Gesture Recognition


A Comparison of Gesture Recognition Performance Based on Feature Spaces of Angle, Velocity and Location in HMM Model (HMM인식기 상에서 방향, 속도 및 공간 특징량에 따른 제스처 인식 성능 비교)

  • Yoon, Ho-Sub;Yang, Hyun-Seung
    • Journal of KIISE: Software and Applications / v.30 no.5_6 / pp.430-443 / 2003
  • The objective of this paper is to evaluate the most useful feature vector space among angle, velocity and location features extracted from gesture trajectories, which are obtained by extracting hand regions from consecutive input images and tracking them by connecting their positions. For this purpose, a gesture tracking algorithm using color and motion information is developed. The recognition module is an HMM, which adapts to time-varying data. The proposed algorithm was applied to a database of 4,800 alphabetical handwriting gestures from 20 persons, each of whom was asked to draw his/her handwriting gesture five times for each of 48 characters.
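
The abstract above describes trajectory features (angle, velocity, location) scored by an HMM. A minimal sketch of that kind of pipeline follows; the feature layout, the five-state Gaussian HMM, and the use of the hmmlearn library are assumptions for illustration, not the paper's implementation.

```python
# Sketch: angle/velocity/location features from a 2D hand trajectory,
# scored by per-class Gaussian HMMs (hmmlearn assumed installed).
import numpy as np
from hmmlearn import hmm

def trajectory_features(points):
    """points: (T, 2) array of tracked hand positions."""
    diffs = np.diff(points, axis=0)                  # frame-to-frame motion
    angle = np.arctan2(diffs[:, 1], diffs[:, 0])     # direction feature
    velocity = np.linalg.norm(diffs, axis=1)         # speed feature
    location = points[1:]                            # raw location feature
    return np.column_stack([angle, velocity, location])

models = {}  # one HMM per gesture class

def train(label, feature_sequences):
    X = np.vstack(feature_sequences)
    lengths = [len(s) for s in feature_sequences]
    m = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[label] = m

def classify(feature_sequence):
    # Pick the class whose HMM assigns the highest log-likelihood.
    return max(models, key=lambda k: models[k].score(feature_sequence))
```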

A Robust Fingertip Extraction and Extended CAMSHIFT based Hand Gesture Recognition for Natural Human-like Human-Robot Interaction (강인한 손가락 끝 추출과 확장된 CAMSHIFT 알고리즘을 이용한 자연스러운 Human-Robot Interaction을 위한 손동작 인식)

  • Lee, Lae-Kyoung;An, Su-Yong;Oh, Se-Young
    • Journal of Institute of Control, Robotics and Systems / v.18 no.4 / pp.328-336 / 2012
  • In this paper, we propose robust fingertip extraction and extended Continuously Adaptive Mean Shift (CAMSHIFT) based hand gesture recognition for natural, human-like HRI (Human-Robot Interaction). First, for efficient and rapid hand detection, hand candidate regions are segmented by combining a robust $YC_bC_r$ skin-color model with Haar-like-feature-based AdaBoost. Using the extracted hand candidate regions, we estimate the palm region and fingertip position from distance-transform-based voting and the geometrical features of hands. From the hand orientation and palm center position, we find the optimal fingertip position and its orientation. Then, using extended CAMSHIFT, we reliably track the 2D hand gesture trajectory with the extracted fingertip. Finally, we apply conditional density propagation (CONDENSATION) to recognize pre-defined temporal motion trajectories. Experimental results show that the proposed algorithm not only rapidly extracts the hand region, with an accurately extracted fingertip and its angle, but also robustly tracks the hand under different illumination, size and rotation conditions. Using these results, we successfully recognize multiple hand gestures.
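
A minimal sketch of the detection-and-tracking stages named above (YCbCr skin segmentation, distance-transform palm localization, CAMSHIFT tracking) using OpenCV; the threshold values and the wiring between steps are illustrative assumptions rather than the authors' code.

```python
# Sketch: skin-color segmentation in YCrCb, palm-center estimation via
# distance transform, and CAMSHIFT tracking of the hand window.
import cv2
import numpy as np

LOWER = np.array([0, 133, 77], dtype=np.uint8)     # rough Cr/Cb skin bounds
UPPER = np.array([255, 173, 127], dtype=np.uint8)  # (illustrative values)

def skin_mask(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, LOWER, UPPER)

def palm_center(mask):
    # The interior point farthest from the hand contour approximates the palm.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, _, _, max_loc = cv2.minMaxLoc(dist)
    return max_loc

def track(prob_image, window):
    # prob_image: back-projection of the hand color histogram.
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rot_box, window = cv2.CamShift(prob_image, window, criteria)
    return rot_box, window
```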

A Personalized Hand Gesture Recognition System using Soft Computing Techniques (소프트 컴퓨팅 기법을 이용한 개인화된 손동작 인식 시스템)

  • Jeon, Moon-Jin;Do, Jun-Hyeong;Lee, Sang-Wan;Park, Kwang-Hyun;Bien, Zeung-Nam
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.53-59 / 2008
  • Recently, vision-based hand gesture recognition techniques have been developed to assist elderly and disabled people in controlling home appliances. The problems that most frequently lower the hand gesture recognition rate stem from inter-person and intra-person variation. The recognition difficulty caused by inter-person variation can be handled with user-dependent models and a model selection technique, and the difficulty caused by intra-person variation can be handled with fuzzy logic. In this paper, we propose a multivariate fuzzy decision tree learning and classification method for a hand motion recognition system serving multiple users. When a user starts to use the system, the most appropriate recognition model is selected and used for that user.
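
Fuzzy logic absorbs intra-person variation by scoring features with graded memberships instead of hard thresholds. A small sketch of that idea with triangular membership functions follows; the rule tables and feature names are invented for illustration and are not the paper's multivariate fuzzy decision tree.

```python
# Sketch: triangular fuzzy memberships tolerate variation in a gesture
# feature; a class scores by the weakest (min) of its feature memberships.
def triangular(x, a, b, c):
    """Membership rising from a to a peak at b, falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(features, class_rules):
    scores = {}
    for label, rules in class_rules.items():
        memberships = [triangular(features[name], *params)
                       for name, params in rules.items()]
        scores[label] = min(memberships)   # fuzzy AND across features
    return max(scores, key=scores.get)

# Hypothetical rule set: each class gives (a, b, c) per feature.
rules = {"wave":  {"speed": (0.2, 0.6, 1.0), "angle": (-0.5, 0.0, 0.5)},
         "swipe": {"speed": (0.5, 1.0, 1.5), "angle": (0.5, 1.2, 2.0)}}
print(classify({"speed": 0.7, "angle": 0.1}, rules))
```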

Motion Plane Estimation for Real-Time Hand Motion Recognition (실시간 손동작 인식을 위한 동작 평면 추정)

  • Jeong, Seung-Dae;Jang, Kyung-Ho;Jung, Soon-Ki
    • The KIPS Transactions: Part B / v.16B no.5 / pp.347-358 / 2009
  • In this paper, we develop a vision-based hand motion recognition system using a camera with two rotational motors. Existing systems were implemented using a range camera or multiple cameras and have a limited working area. In contrast, we use an uncalibrated camera and obtain a wider working area through pan-tilt motion. Given an image sequence provided by the pan-tilt camera, color and pattern information are integrated into a tracking system to find the 2D position and direction of the hand. With this pose information, we estimate the 3D motion plane on which the gesture motion trajectory approximately lies. The 3D trajectory of the moving fingertip is projected onto the motion plane, so that the resolving power for linear gesture patterns is enhanced. We have tested the proposed approach in terms of the accuracy of the trace angle and the dimensions of the working volume.
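
The core geometric step, fitting a motion plane to 3D fingertip samples and projecting the trajectory onto it, can be sketched in a few lines of numpy. This is a generic least-squares plane fit, offered under the assumption that something similar underlies the paper's estimation.

```python
# Sketch: least-squares motion-plane fit via SVD, then projection of
# the 3D fingertip trajectory onto that plane.
import numpy as np

def fit_plane(points):
    """points: (N, 3) fingertip samples. Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]              # direction of least variance = plane normal
    return centroid, normal

def project_to_plane(points, centroid, normal):
    # Remove each point's offset along the normal direction.
    offsets = points - centroid
    return points - np.outer(offsets @ normal, normal)
```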

Multimodal Interface Based on Novel HMI UI/UX for In-Vehicle Infotainment System

  • Kim, Jinwoo;Ryu, Jae Hong;Han, Tae Man
    • ETRI Journal / v.37 no.4 / pp.793-803 / 2015
  • We propose a novel HMI UI/UX for an in-vehicle infotainment system. The proposed HMI UI comprises multimodal interfaces that allow a driver to safely and intuitively manipulate an infotainment system while driving. Our analysis of a touchscreen-based HMI UI/UX reveals that a driver's use of such an interface while driving can seriously distract the driver. The proposed HMI UI/UX is a novel manipulation mechanism for vehicle infotainment services. It consists of several interfaces that incorporate a variety of modalities, such as speech recognition, a manipulating device, and hand gesture recognition. In addition, we provide an HMI UI framework designed to be manipulated using a simple method based on four directions and one selection motion. Extensive quantitative and qualitative in-vehicle experiments demonstrate that the proposed HMI UI/UX is an efficient mechanism for manipulating an infotainment system while driving.
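
A sketch of the four-directions-plus-selection command mapping described above; the selection threshold and the image-axis conventions are illustrative assumptions.

```python
# Sketch: reduce a recognized hand motion vector to the
# four-direction-plus-select command set.
import math

def to_command(dx, dy, select_radius=10.0):
    """dx, dy: net hand displacement; near-zero motion acts as selection."""
    if math.hypot(dx, dy) < select_radius:
        return "SELECT"
    if abs(dx) >= abs(dy):                 # dominant axis decides direction
        return "RIGHT" if dx > 0 else "LEFT"
    return "DOWN" if dy > 0 else "UP"      # image y grows downward
```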

Combining Object Detection and Hand Gesture Recognition for Automatic Lighting System Control

  • Pham, Giao N.;Nguyen, Phong H.;Kwon, Ki-Ryong
    • Journal of Multimedia Information System / v.6 no.4 / pp.329-332 / 2019
  • Recent smart lighting systems combine sensors and lights: they turn lights on/off and adjust their brightness based on the motion of objects and the brightness of the environment. Such systems are often applied in places such as buildings, rooms, garages and parking lots, but they rely on dedicated lighting and motion sensors for illumination measurement and motion detection. In this paper, we propose an automatic lighting control system that uses a single camera for buildings, rooms and garages. The proposed system integrates the results of digital image processing, namely motion detection and hand gesture detection, to switch and dim the lighting system. The experimental results show that the proposed system works well and can be considered for automatic lighting of spaces.
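
A minimal sketch of how frame-difference motion detection and a recognized gesture could gate and dim a light, following the pipeline above; the OpenCV differencing step is generic, and the light-state structure and gesture labels are hypothetical.

```python
# Sketch: motion detection by frame differencing, with gestures
# adjusting brightness (thresholds are illustrative).
import cv2

def motion_detected(prev_gray, curr_gray, thresh=25, min_pixels=500):
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(binary) > min_pixels

def update_light(light, motion, gesture):
    light["on"] = motion                   # presence switches the light
    if gesture == "DIM_UP":                # hypothetical gesture labels
        light["level"] = min(light["level"] + 10, 100)
    elif gesture == "DIM_DOWN":
        light["level"] = max(light["level"] - 10, 0)
    return light
```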

An Implementation of Dynamic Gesture Recognizer Based on WPS and Data Glove (WPS와 장갑 장치 기반의 동적 제스처 인식기의 구현)

  • Kim, Jung-Hyun;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions: Part B / v.13B no.5 s.108 / pp.561-568 / 2006
  • The WPS (Wearable Personal Station) for the next-generation PC can be defined as a core terminal for ubiquitous computing that includes information processing and network functions and overcomes spatial limitations in acquiring new information. As a way of acquiring meaningful dynamic gesture data from haptic devices, a traditional desktop-PC-based gesture recognizer using a wired communication module has several restrictions, such as spatial constraints, the complexity of the transmission media (cable elements), limitation of motion, and inconvenience of use. Accordingly, in this paper, we implement a hand gesture recognition system using a fuzzy algorithm and a neural network for the Post PC (an embedded ubiquitous environment using a Bluetooth module and the WPS). We also propose the most efficient and reasonable hand gesture recognition interface for the Post PC through evaluation and analysis of the performance of each gesture recognition system. The proposed gesture recognition system consists of three modules: 1) a gesture input module that processes the motion of the dynamic hand into input data; 2) a Relational Database Management System (RDBMS) module that segments significant gestures from the input data; and 3) two different recognition modules, a fuzzy max-min module and a neural network module, that recognize significant gestures within continuous/dynamic gestures. Experimental results show an average recognition rate of 98.8% for the fuzzy max-min module and 96.7% for the neural network module on significant dynamic gestures.
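
A fuzzy max-min recognition module evaluates each class by max-min composition of feature memberships against rule strengths. A compact sketch follows; the membership values and rule matrix are illustrative, not the paper's trained recognizer.

```python
# Sketch: fuzzy max-min composition; each class score is
# max over features of min(rule strength, feature membership).
import numpy as np

def fuzzy_max_min(memberships, rule_matrix):
    """memberships: (F,) feature memberships in [0, 1].
    rule_matrix: (C, F) per-class rule strengths in [0, 1].
    Returns per-class scores via max-min composition."""
    return np.max(np.minimum(rule_matrix, memberships[None, :]), axis=1)

# Illustrative values: two classes, three features.
scores = fuzzy_max_min(np.array([0.8, 0.4, 0.9]),
                       np.array([[0.9, 0.2, 0.7],
                                 [0.3, 0.8, 0.5]]))
winner = int(scores.argmax())
```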

Motion Animation Using Orthogonal Parameters (직교 파라미터 조합을 이용한 모션 애니메이션)

  • Lee, Chil-Woo;Jin, Cheol-Young;Bae, Ki-Tae;Jung, Min-Young
    • Proceedings of the IEEK Conference / 2003.07e / pp.2283-2286 / 2003
  • This paper expresses human motion data as low-dimensional orthogonal parameters and creates new motion data from them. To make new motions, we reconstruct a model consisting of orthogonal parameters by dividing human body data into three parts: hands, legs, and body. Mixing these body parts from different motions yields good new motion data. This motion editing can be used not only for animation technology but also as a three-dimensional gesture recognition technique.
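
Expressing motion as low-dimensional orthogonal parameters and recombining them across body parts is essentially a PCA-style decomposition. A sketch under that assumption follows; the paper's exact parameterization is not specified here.

```python
# Sketch: per-body-part PCA gives orthogonal motion parameters;
# mixing coefficients across clips builds new motion.
import numpy as np

def to_orthogonal_params(motion, k=3):
    """motion: (frames, dofs) joint data. Returns (mean, basis, coeffs)."""
    mean = motion.mean(axis=0)
    _, _, vt = np.linalg.svd(motion - mean, full_matrices=False)
    basis = vt[:k]                        # orthogonal parameter axes
    coeffs = (motion - mean) @ basis.T    # low-dimensional parameters
    return mean, basis, coeffs

def rebuild(mean, basis, coeffs):
    return mean + coeffs @ basis

# New motion: combine, e.g., hand coefficients from one clip with leg
# coefficients from another, then rebuild each part and concatenate.
```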

Dynamic Hand Gesture Recognition Using CNN Model and FMM Neural Networks (CNN 모델과 FMM 신경망을 이용한 동적 수신호 인식 기법)

  • Kim, Ho-Joon
    • Journal of Intelligence and Information Systems / v.16 no.2 / pp.95-108 / 2010
  • In this paper, we present a hybrid neural network model for dynamic hand gesture recognition. The model consists of two modules: a feature extraction module and a pattern classification module. We first propose a modified CNN (Convolutional Neural Network) pattern recognition model for the feature extraction module, and then introduce a weighted fuzzy min-max (WFMM) neural network for the pattern classification module. The data representation proposed in this research is a spatiotemporal template based on the motion information of the target object. To minimize the influence of spatial and temporal variation of the feature points, we extend the receptive field of the CNN model to a three-dimensional structure. We discuss the learning capability of the WFMM neural network, in which a weight concept is added to represent the frequency factor of the training pattern set. The model can overcome the performance degradation that may be caused by the hyperbox contraction process of conventional FMM neural networks. The validity of the proposed models is discussed based on experimental results for human action recognition and for dynamic hand gesture recognition to remote-control electric home appliances.
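
A fuzzy min-max classifier scores inputs by graded membership in labeled hyperboxes; the weighted variant described above adds a weight reflecting training-pattern frequency. A sketch follows; the membership ramp and the way the weight is applied are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: hyperbox membership for a (weighted) fuzzy min-max classifier.
import numpy as np

def membership(x, v, w, gamma=4.0):
    """x: input vector; v, w: hyperbox min/max corners; gamma: fuzziness."""
    below = np.clip(gamma * (v - x), 0.0, 1.0)   # penalty for undershooting min
    above = np.clip(gamma * (x - w), 0.0, 1.0)   # penalty for exceeding max
    return float(np.mean(1.0 - below - above))   # 1.0 inside the box

def classify(x, boxes):
    """boxes: list of (label, v, w, weight); weight encodes frequency."""
    scored = [(weight * membership(x, v, w), label)
              for label, v, w, weight in boxes]
    return max(scored)[1]
```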

Motion Control of a Mobile Robot Using Natural Hand Gesture (자연스런 손동작을 이용한 모바일 로봇의 동작제어)

  • Kim, A-Ram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.1 / pp.64-70 / 2014
  • In this paper, we propose a method for commanding the motion of a mobile robot by recognizing human hand gestures. Earlier robot control systems based on hand movement used several kinds of pre-arranged gestures, so the commanding motions were unnatural; they also forced people to learn the pre-arranged gestures, which made them more inconvenient. To solve this problem, much research has sought other ways to make machines recognize hand movement. In this paper, we use a 3D (depth) camera to obtain color and depth data, from which the human hand is located and its movement recognized. We use an HMM to let the proposed system perceive the movement, and the recognized command is then transferred to the robot, making it move in the intended direction.
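
A sketch of turning a depth-tracked hand trajectory into discrete direction codes, the kind of observation sequence a discrete HMM recognizer consumes, plus a hypothetical mapping from the dominant direction to a robot command; the bin count and the command table are illustrative assumptions.

```python
# Sketch: quantize hand motion into 8 direction codes, then map the
# dominant code to a robot motion command.
import numpy as np

def direction_codes(points, n_bins=8):
    """points: (T, 2) image-plane hand positions from the depth camera."""
    diffs = np.diff(points, axis=0)
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])            # [-pi, pi]
    return ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

def to_robot_command(codes):
    dominant = int(np.bincount(codes, minlength=8).argmax())
    # Hypothetical table: cardinal directions command motion, else stop.
    return {0: "LEFT", 2: "FORWARD", 4: "RIGHT", 6: "BACKWARD"}.get(
        dominant, "STOP")
```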