• Title/Summary/Keyword: Gesture Classification

Search results: 65

Application of Sensor Network Using Multivariate Gaussian Function to Hand Gesture Recognition (Multivariate Gaussian 함수를 이용한 센서 네트워크의 수화 인식에의 적용)

  • Kim Sung-Ho;Han Yun-Jong;Bogdana Diaconescu
    • Journal of Institute of Control, Robotics and Systems / v.11 no.12 / pp.991-995 / 2005
  • Sensor networks are the result of the convergence of important technologies such as wireless communication and micro-electromechanical systems. In recent years, sensor networks have found wide applicability in fields such as health, environment and habitat monitoring, and the military. An important step in many of these applications is the classification and recognition of patterns in the data collected by sensors installed or deployed in various ways, but this is often difficult to perform. A systematic approach to pattern classification based on modern learning techniques, such as multivariate Gaussian mixture models, can greatly simplify the process of developing and implementing real-time classification models. This paper proposes a new recognition system that is hierarchically composed of many sensor nodes having the capability of simple processing and wireless communication. The proposed system classifies sensed data using the multivariate Gaussian function. To verify its usefulness, the proposed system was applied to a hand gesture recognition system.
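A minimal sketch of class-conditional classification with multivariate Gaussian densities, in the spirit of this abstract; the gesture labels, feature dimensionality, and toy data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def fit_gaussian(samples):
    """Estimate mean vector and covariance matrix from training samples (rows = samples)."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])  # small regularizer
    return mu, cov

def log_density(x, mu, cov):
    """Log of the multivariate Gaussian density N(x; mu, cov)."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + diff @ np.linalg.solve(cov, diff))

def classify(x, models):
    """Assign x to the class whose Gaussian gives the highest log-density."""
    return max(models, key=lambda label: log_density(x, *models[label]))

# Illustrative use: two hand-gesture classes described by 3-axis sensor features.
rng = np.random.default_rng(0)
models = {
    "wave":  fit_gaussian(rng.normal([0.0, 1.0, 0.2], 0.1, size=(50, 3))),
    "grasp": fit_gaussian(rng.normal([1.0, 0.0, 0.8], 0.1, size=(50, 3))),
}
print(classify(np.array([0.95, 0.05, 0.75]), models))  # -> "grasp"
```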

Number Recognition Using Accelerometer of Smartphone (스마트폰 가속도 센서를 이용한 숫자인식)

  • Bae, Seok-Chan;Kang, Bo-Gyung
    • Journal of The Korean Association of Information Education / v.15 no.1 / pp.147-154 / 2011
  • In this paper, we propose an effective pre-correction algorithm for sensor values and a classification algorithm for gesture recognition that use the per-axis values of the accelerometer to send data (a number or other specific input) to a device. Experimental results on the X-axis and Y-axis error rates before and after correction show that the pre-correction produces reliable preprocessed data. A recognizer that applies the normalization and classification algorithm to the preprocessed data achieves a high recognition rate.
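The paper's exact correction and classification steps are not reproduced here; the sketch below assumes a simple variant of the same idea: per-axis bias removal and smoothing as pre-correction, resampling and scaling as normalization, then nearest-template classification of the digit gesture.

```python
import numpy as np

def precorrect(trace, window=5):
    """Simple pre-correction: remove the resting bias and smooth each axis with a moving average."""
    trace = trace - trace[:window].mean(axis=0)          # subtract the initial per-axis bias
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(trace[:, i], kernel, mode="same")
                            for i in range(trace.shape[1])])

def normalize(trace, length=32):
    """Resample the gesture to a fixed length and scale each axis to [-1, 1]."""
    idx = np.linspace(0, len(trace) - 1, length)
    resampled = np.array([np.interp(idx, np.arange(len(trace)), trace[:, i])
                          for i in range(trace.shape[1])]).T
    peak = np.abs(resampled).max(axis=0) + 1e-8
    return resampled / peak

def classify(trace, templates):
    """Nearest-template classification: templates maps each digit to a normalized reference trace."""
    feat = normalize(precorrect(trace))
    return min(templates, key=lambda digit: np.linalg.norm(feat - templates[digit]))
```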

Action recognition, hand gesture recognition, and emotion recognition using text classification method (Text classification 방법을 사용한 행동 인식, 손동작 인식 및 감정 인식)

  • Kim, Gi-Duk
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.213-216 / 2021
  • This paper proposes action recognition, hand gesture recognition, and emotion recognition methods that apply a deep learning model used for text classification. Features are first extracted from video using a library, a formula is applied, and the resulting feature vectors are stored. These vectors are then used to train a model that combines Conv1D, Transformer, and GRU layers. In this way, a single deep learning model can be applied to a variety of domains. With the proposed method, class classification accuracies of 99.66% on the SYSU 3D HOI dataset, 99.0% on the eNTERFACE'05 dataset, and 95.48% on the DHG-14 dataset were obtained.
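The abstract does not give the architecture in detail; below is a rough Keras sketch of one way to combine Conv1D, a single Transformer encoder block, and a GRU into a sequence classifier. All layer sizes, sequence length, and feature dimensionality are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(seq_len=64, feat_dim=128, num_classes=14):
    """Sequence classifier: Conv1D for local patterns, one Transformer encoder
    block for global context, and a GRU to summarize the sequence."""
    inputs = layers.Input(shape=(seq_len, feat_dim))

    # Local temporal patterns
    x = layers.Conv1D(128, kernel_size=3, padding="same", activation="relu")(inputs)

    # Transformer encoder block: self-attention + feed-forward, with residual connections
    attn = layers.MultiHeadAttention(num_heads=4, key_dim=32)(x, x)
    x = layers.LayerNormalization()(x + attn)
    ff = layers.Dense(128, activation="relu")(x)
    x = layers.LayerNormalization()(x + ff)

    # GRU summarizes the whole sequence into a single vector
    x = layers.GRU(128)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```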

Dynamic Hand Gesture Recognition Using CNN Model and FMM Neural Networks (CNN 모델과 FMM 신경망을 이용한 동적 수신호 인식 기법)

  • Kim, Ho-Joon
    • Journal of Intelligence and Information Systems / v.16 no.2 / pp.95-108 / 2010
  • In this paper, we present a hybrid neural network model for dynamic hand gesture recognition. The model consists of two modules, a feature extraction module and a pattern classification module. We first propose a modified CNN (convolutional neural network) pattern recognition model for the feature extraction module, and then introduce a weighted fuzzy min-max (WFMM) neural network for the pattern classification module. The data representation proposed in this research is a spatiotemporal template based on the motion information of the target object. To minimize the influence of spatial and temporal variation of the feature points, we extend the receptive field of the CNN model to a three-dimensional structure. We discuss the learning capability of the WFMM neural network, in which a weight concept is added to represent the frequency factor of the training pattern set. The model can overcome the performance degradation that may be caused by the hyperbox contraction process of conventional FMM neural networks. The validity of the proposed models is discussed through experimental results on human action recognition and dynamic hand gesture recognition for remote control of electric home appliances.
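For readers unfamiliar with fuzzy min-max classification, here is a small sketch of a Simpson-style hyperbox membership function with a per-hyperbox weight, which is one reading of the WFMM idea described above; the hyperboxes, weights, and 2-D feature space are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Fuzzy min-max membership of point x in hyperbox [v, w]:
    1.0 inside the box, decaying with distance outside (gamma controls the slope)."""
    over  = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, x - w)))
    under = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, v - x)))
    return float(np.mean((over + under) / 2.0))

def classify(x, hyperboxes):
    """Weighted decision: each hyperbox carries a class label and a frequency weight
    that scales its membership; the highest weighted membership wins."""
    best_label, best_score = None, -1.0
    for v, w, label, weight in hyperboxes:
        score = weight * hyperbox_membership(x, v, w)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Illustrative hyperboxes over 2-D CNN features: (min corner, max corner, class, weight)
boxes = [
    (np.array([0.1, 0.1]), np.array([0.4, 0.4]), "swipe_left", 1.0),
    (np.array([0.6, 0.5]), np.array([0.9, 0.9]), "swipe_right", 0.7),
]
print(classify(np.array([0.75, 0.7]), boxes))  # -> "swipe_right"
```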

Hybrid HMM for Transitional Gesture Classification in Thai Sign Language Translation

  • Jaruwanawat, Arunee;Chotikakamthorn, Nopporn;Werapan, Worawit
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2004.08a / pp.1106-1110 / 2004
  • A human sign language is generally composed of both static and dynamic gestures. Each gesture is represented by a hand shape, its position, and hand movement (for a dynamic gesture). One of the problems in automated sign language translation is segmenting a hand movement that is part of a transitional movement from one hand gesture to another. This transitional gesture conveys no meaning, but serves as a connecting period between two consecutive gestures. Based on the observation that many dynamic gestures, as they appear in the Thai sign language dictionary, are quasi-periodic in nature, a method was developed to differentiate between a (meaningful) dynamic gesture and a transitional movement. However, some meaningful dynamic gestures are non-periodic and cannot be distinguished from a transitional movement by signal quasi-periodicity alone. This paper proposes a hybrid method combining the periodicity-based gesture segmentation method with an HMM-based gesture classifier. The HMM classifier is used to detect dynamic signs of non-periodic nature. Combined with the periodicity-based gesture segmentation method, this hybrid scheme can identify segments of a transitional movement. In addition, because it exploits the quasi-periodic nature of many dynamic sign gestures, the dimensionality of the HMM part of the proposed method is significantly reduced, resulting in computational savings compared with a standard HMM-based method. The proposed method's recognition performance is reported through experiments with real measurements.
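A rough sketch of the hybrid decision logic as the abstract describes it: a quasi-periodicity test (here a normalized autocorrelation peak) accepts periodic dynamic gestures, and pre-trained sign HMMs catch the non-periodic ones; the thresholds, lag range, and the assumption that HMM log-likelihoods are supplied by the caller are all illustrative choices, not the paper's parameters.

```python
import numpy as np

def periodicity_score(signal, min_lag=5, max_lag=60):
    """Quasi-periodicity measure: peak of the normalized autocorrelation of a
    1-D motion signal (e.g. a hand coordinate over time) within a lag range."""
    s = signal - signal.mean()
    ac = np.correlate(s, s, mode="full")[len(s) - 1:]   # non-negative lags only
    ac = ac / (ac[0] + 1e-8)
    return float(ac[min_lag:max_lag].max())

def is_meaningful_segment(signal, hmm_log_likelihoods,
                          period_thresh=0.6, hmm_thresh=-50.0):
    """A segment is a meaningful gesture if it is quasi-periodic, or if some
    sign-specific HMM scores it above a likelihood threshold; otherwise it is
    treated as a transitional movement."""
    if periodicity_score(signal) >= period_thresh:
        return True                                      # periodic dynamic gesture
    return max(hmm_log_likelihoods) >= hmm_thresh        # non-periodic gesture caught by an HMM
```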

A Head Gesture Recognition Method based on Eigenfaces using SOM and PRL (SOM과 PRL을 이용한 고유얼굴 기반의 머리동작 인식방법)

  • Lee, U-Jin;Gu, Ja-Yeong
    • The Transactions of the Korea Information Processing Society / v.7 no.3 / pp.971-976 / 2000
  • In this paper a new method for head gesture recognition is proposed. At the first stage, face image data are transformed into low-dimensional vectors by principal component analysis (PCA), which exploits the high correlation between face pose images. A self-organizing map (SOM) is then trained with the transformed face vectors, in such a way that nodes at similar locations respond to similar poses. The sequence of poses that comprises each model gesture goes through PCA and the SOM, and the result is stored in a database. At the recognition stage, any sequence of frames goes through PCA and the SOM, and the result is compared with the model gestures stored in the database. To improve the robustness of classification, probabilistic relaxation labeling (PRL) is used, which exploits the contextual information embedded in adjacent poses.
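A minimal sketch of the PCA-plus-SOM front end described above, using scikit-learn and the small third-party MiniSom package as stand-ins for the paper's own implementation; the image size, number of components, map size, and random placeholder data are assumptions, and the PRL refinement step is only mentioned in a comment.

```python
import numpy as np
from sklearn.decomposition import PCA
from minisom import MiniSom   # small third-party SOM library, used here as a stand-in

# face_images: (n_frames, h*w) flattened grayscale face images (placeholder data)
face_images = np.random.rand(200, 64 * 64)

# 1) Eigenface projection: PCA compresses each face image into a short vector
pca = PCA(n_components=20).fit(face_images)
codes = pca.transform(face_images)

# 2) Self-organizing map: nearby nodes come to respond to similar head poses
som = MiniSom(8, 8, 20, sigma=1.5, learning_rate=0.5)
som.train_random(codes, 5000)

def pose_sequence(frames):
    """Map a sequence of frames to a sequence of winning SOM node coordinates;
    model gestures and test sequences are compared in this node space
    (the paper further refines the labels with probabilistic relaxation labeling)."""
    return [som.winner(v) for v in pca.transform(frames)]
```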

A Study on Hand Gesture Classification Deep learning method device based on RGBD Image (RGBD 이미지 기반 핸드제스처 분류 딥러닝 기법의 연구)

  • Park, Jong-Chan;Li, Yan;Shin, Byeong-Seok
    • Annual Conference of KIPS / 2019.10a / pp.1173-1175 / 2019
  • In robotics, research has been conducted on recognizing different hand gestures as computer input and interpreting them as specific commands, for example in very noisy or urgent situations. Various studies have preprocessed hand gestures using RGB data or skeleton data, but real-world noise keeps the classification accuracy low or makes the required computing power excessive. In this paper, we classify input hand gestures with a Keras model trained on RGBD images of hand gestures. Raw hand gesture data captured by a depth camera is reconstructed into images and used for deep learning.
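The paper does not specify its Keras architecture; the sketch below is one plausible small CNN over 4-channel RGBD input, with all layer choices, image size, and class count being assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_rgbd_classifier(height=64, width=64, num_classes=10):
    """Small CNN classifier over RGBD images: the depth map is stacked as a
    fourth channel alongside R, G, and B."""
    model = tf.keras.Sequential([
        layers.Input(shape=(height, width, 4)),        # R, G, B + depth channel
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```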

Analysis of Gesture Features on Character Expression: Focusing on the 'Na'vi' Characters in the Film Avatar (캐릭터 성격표현에 의한 제스처 특징 분석 : 영화 <아바타>의 '나비족' 캐릭터를 중심으로)

  • Lee, Young-Sook;Choi, Eun-Jin
    • Cartoon and Animation Studies / s.24 / pp.155-172 / 2011
  • The purpose of this study is to analyze the gesture features associated with the personalities of the Na'vi characters in the film Avatar. To analyze the personality type of each character, the study applied the Enneagram classification to the script of Avatar. The character features are classified according to character type, and the metaphorical character of each expression is obtained through gesture analysis of the film. In this way, it becomes possible to set up characters whose gestures fit their personalities in digital image content. The study also suggests methods for creating attractive characters and expressing them through personality-based gestures.

A study on hand gesture recognition using 3D hand feature (3차원 손 특징을 이용한 손 동작 인식에 관한 연구)

  • Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.4 / pp.674-679 / 2006
  • In this paper a gesture recognition system using 3D feature data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. The main novelty of the proposed system, with respect to other 3D gesture recognition techniques, is its capability for robust recognition of complex hand postures such as those encountered in sign language alphabets. This is achieved by explicitly employing 3D hand features. Moreover, the proposed approach does not rely on colour information, and it guarantees robust segmentation of the hand under various illumination conditions and scene content. Several novel 3D image analysis algorithms are presented, covering the complete processing chain: 3D image acquisition, arm segmentation, hand-forearm segmentation, hand pose estimation, 3D feature extraction, and gesture classification. The proposed system is tested in an application scenario involving the recognition of sign language postures.
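A skeleton of the processing chain listed above, with each stage replaced by a drastically simplified stand-in (depth thresholding, a depth quantile split, three toy statistics, and nearest-prototype matching); none of these stand-ins are the paper's algorithms, they only illustrate how the stages connect.

```python
import numpy as np

def segment_arm(range_image, max_depth=1.0):
    """Keep points closer than a depth threshold (crude stand-in for arm segmentation)."""
    return range_image < max_depth

def split_hand_forearm(arm_mask, range_image):
    """Separate the hand from the forearm; here simply the nearest 40% of arm pixels."""
    cutoff = np.quantile(range_image[arm_mask], 0.4)
    return arm_mask & (range_image <= cutoff)

def extract_features(hand_mask, range_image):
    """Toy 3-D features: pixel count, mean depth, and depth spread of the hand region."""
    d = range_image[hand_mask]
    return np.array([hand_mask.sum(), d.mean(), d.std()])

def classify(features, prototypes):
    """Nearest-prototype gesture classification on the 3-D feature vector."""
    return min(prototypes, key=lambda g: np.linalg.norm(features - prototypes[g]))

# Pipeline over one dense range image (values = distance in metres, placeholder data)
range_image = np.random.rand(120, 160) * 2.0
arm = segment_arm(range_image)
hand = split_hand_forearm(arm, range_image)
feat = extract_features(hand, range_image)
```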

Gesture Spotting by Web-Camera in Arbitrary Two Positions and Fuzzy Garbage Model (임의 두 지점의 웹 카메라와 퍼지 가비지 모델을 이용한 사용자의 의미 있는 동작 검출)

  • Yang, Seung-Eun
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.127-136 / 2012
  • Much research on vision-based hand gesture recognition has been conducted to let users operate various electronic devices more easily. Accurate hand gesture recognition requires 3D position calculation and the classification of meaningful gestures among similar ones. A simple and cost-effective method for 3D position calculation and gesture spotting (the task of recognizing a meaningful gesture among other, similar but meaningless gestures) is described in this paper. The 3D position is obtained by calculating the relative position of two cameras with a pan/tilt module and a marker, regardless of where the cameras are placed. A fuzzy garbage model is proposed to provide a variable reference value for deciding whether the user's gesture is a command gesture. The reference is derived from a fuzzy command gesture model and the fuzzy garbage model, which return scores indicating the degree of belonging to a command gesture and a garbage gesture, respectively. To enhance performance, a two-stage user adaptation is proposed: off-line (batch) adaptation for inter-personal differences and on-line (incremental) adaptation for intra-personal differences. Experiments were conducted with five different users. The command recognition rate (discriminating command gestures) is more than 95% when only one command-like meaningless gesture exists, and more than 85% when the command is mixed with many other similar gestures.
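A minimal sketch of the gesture-spotting decision as the abstract describes it: a garbage model supplies a variable reference score, and a command gesture is reported only when its membership beats that reference. The Gaussian-shaped membership functions, the (center, width) model parameterization, and the margin are assumptions, not the paper's fuzzy models or adaptation scheme.

```python
import numpy as np

def gaussian_membership(x, center, width):
    """Simple fuzzy membership: 1 at the model center, decaying with distance."""
    return float(np.exp(-np.sum(((x - center) / width) ** 2)))

def spot_gesture(feature, command_models, garbage_model, margin=0.1):
    """Gesture spotting: compare the best command-gesture membership against the
    garbage-model score, which acts as a variable reference value."""
    garbage_score = gaussian_membership(feature, *garbage_model)
    best = max(command_models,
               key=lambda g: gaussian_membership(feature, *command_models[g]))
    best_score = gaussian_membership(feature, *command_models[best])
    if best_score > garbage_score + margin:
        return best          # meaningful command gesture
    return None              # treated as a meaningless (garbage) movement
```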