• Title/Summary/Keyword: Hand gesture recognition


Implementation of User Gesture Recognition System for manipulating a Floating Hologram Character (플로팅 홀로그램 캐릭터 조작을 위한 사용자 제스처 인식 시스템 구현)

  • Jang, Myeong-Soo;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.2 / pp.143-149 / 2019
  • Floating holograms are a technology that provides rich 3D stereoscopic images in wide spaces such as advertisements and concerts. They reduce the inconvenience of 3D glasses, eye strain, and space distortion, and let viewers enjoy 3D images with an excellent sense of realism and presence. This paper therefore implements a user gesture recognition system for manipulating a floating hologram character that can be used in small-space devices. The proposed method detects the face region using a Haar feature-based cascade classifier and recognizes user gestures in real time from the positions at which gestures occur, acquired from gesture difference images. Each classified gesture is then mapped to a character motion in the floating hologram to control the character's actions. To evaluate the performance of the proposed system, we built a floating hologram display device and repeatedly measured the recognition rate of each gesture, including body shaking, walking, hand shaking, and jumping. The average recognition rate was 88%.
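The difference-image idea described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the threshold, the toy frame format (grayscale pixels as nested lists), and the mapping from motion position to gesture label are all assumed values.

```python
def diff_image(prev, curr, thresh=30):
    """Binary motion mask from two grayscale frames (lists of lists)."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def motion_centroid(mask):
    """Average (x, y) position of all moving pixels, or None if no motion."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))

def classify_gesture(centroid, face_center):
    """Map motion position relative to the detected face to a coarse label
    (the four labels mirror the gestures tested in the paper)."""
    if centroid is None:
        return "idle"
    dx = centroid[0] - face_center[0]
    dy = centroid[1] - face_center[1]
    if abs(dx) > abs(dy):
        return "hand_shaking" if dx > 0 else "body_shaking"
    return "jumping" if dy < 0 else "walking"
```

In a real system the face center would come from a Haar cascade detector and frames from a camera; here both are supplied by the caller.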

Training-Free sEMG Pattern Recognition Algorithm: A Case Study of A Patient with Partial-Hand Amputation (무학습 근전도 패턴 인식 알고리즘: 부분 수부 절단 환자 사례 연구)

  • Park, Seongsik;Lee, Hyun-Joo;Chung, Wan Kyun;Kim, Keehoon
    • The Journal of Korea Robotics Society / v.14 no.3 / pp.211-220 / 2019
  • Surface electromyography (sEMG), a bio-electrical signal originating from the action potentials of nerves and muscle fibers activated by motor neurons, has been widely used to recognize the motion intention of robotic prostheses for amputees, because it lets users operate a device intuitively without any artificial or additional effort. In this paper, we propose a training-free unsupervised sEMG pattern recognition algorithm. It is useful for gesture recognition for amputees from whom the motion labels required by previous supervised pattern recognition algorithms cannot be obtained. Using the proposed algorithm, we can classify sEMG signals for gesture recognition, and the calculated threshold probability can be used as a sensitivity parameter for pattern registration. The algorithm was verified by a case study of a patient with partial-hand amputation.
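The training-free registration idea can be illustrated with a simple online scheme: a feature vector is assigned to the nearest already-registered pattern when it is close enough, and otherwise founds a new pattern. This is only a sketch of the general approach under assumed Euclidean-distance matching; the paper's probabilistic formulation differs, and `threshold` merely stands in for its sensitivity parameter.

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class TrainingFreeClassifier:
    """Unsupervised pattern registration without labeled training data:
    each sample either joins the nearest registered pattern or starts a new one."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold   # sensitivity: larger -> fewer patterns
        self.patterns = []           # list of prototype feature vectors

    def classify(self, x):
        """Return the index of the pattern x belongs to, registering a new
        pattern when no existing prototype is within the threshold."""
        if self.patterns:
            d, idx = min((_dist(x, p), i) for i, p in enumerate(self.patterns))
            if d <= self.threshold:
                return idx
        self.patterns.append(list(x))
        return len(self.patterns) - 1
```

Lowering the threshold makes registration more sensitive (more distinct patterns), mirroring the role the abstract describes for the threshold probability.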

Action recognition, hand gesture recognition, and emotion recognition using text classification method (Text classification 방법을 사용한 행동 인식, 손동작 인식 및 감정 인식)

  • Kim, Gi-Duk
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.213-216 / 2021
  • In this paper, we propose action recognition, hand gesture recognition, and emotion recognition methods that apply a deep learning model used for text classification. First, features are extracted from video using a library, a formula is applied, and the resulting feature vectors are stored. These are then used to train a model combining Conv1D, Transformer, and GRU. With this approach, a single deep learning model can be applied to various domains. Using the proposed method, we obtained class classification accuracies of 99.66% on the SYSU 3D HOI dataset, 99.0% on the eNTERFACE'05 dataset, and 95.48% on the DHG-14 dataset.


Dynamic Gesture Recognition for the Remote Camera Robot Control (원격 카메라 로봇 제어를 위한 동적 제스처 인식)

  • Lee Ju-Won;Lee Byung-Ro
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.7 / pp.1480-1487 / 2004
  • This study proposes a novel gesture recognition method for remote camera robot control. To recognize dynamic gestures, the preprocessing step is image segmentation. Conventional methods for effective object segmentation require a great deal of color information about the object (hand) image, and their recognition steps require many features for each object. To address these problems, this study proposes a novel method for recognizing dynamic hand gestures: the MMS (Max-Min Search) method to segment the object image, the MSM (Mean Space Mapping) and COG (Center Of Gravity) methods to extract image features, and an MLPNN (Multi-Layer Perceptron Neural Network) structure to recognize the dynamic gestures. In the experiments, the recognition rate of the proposed method was over 90%, showing that it can serve as an HCI (Human Computer Interface) device for remote robot control.

Hand Gesture Recognition from Kinect Sensor Data (키넥트 센서 데이터를 이용한 손 제스처 인식)

  • Cho, Sun-Young;Byun, Hye-Ran;Lee, Hee-Kyung;Cha, Ji-Hun
    • Journal of Broadcast Engineering / v.17 no.3 / pp.447-458 / 2012
  • We present a method to recognize hand gestures using skeletal joint data obtained from Microsoft's Kinect sensor. We propose a combination feature of multi-angle histograms, robust to orientation variations, to represent an observed skeleton sequence. The feature efficiently captures the orientation variations of gestures that occur across people and environments by combining multiple angle histograms with various angular quantization levels. Representing gestures as combined multi-angle histograms, together with a random decision forest classifier, improves recognition performance. Experiments on a hand gesture dataset captured with a Kinect sensor show that our method outperforms other methods in recognition performance.
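The combination feature can be sketched as follows: each histogram quantizes the orientations of consecutive joint pairs into a given number of angular bins, and histograms at several quantization levels are concatenated. This is a simplified 2D sketch under assumed conventions (the joint pairing and the particular levels `(4, 8, 16)` are illustrative, not taken from the paper).

```python
import math

def angle_histogram(joints, bins):
    """Histogram of the orientations of segments between consecutive 2D joints,
    quantized into `bins` equal sectors of the full circle."""
    hist = [0] * bins
    for (x1, y1), (x2, y2) in zip(joints, joints[1:]):
        theta = math.atan2(y2 - y1, x2 - x1) % (2 * math.pi)
        hist[min(int(theta / (2 * math.pi) * bins), bins - 1)] += 1
    return hist

def multi_angle_feature(joints, levels=(4, 8, 16)):
    """Concatenate angle histograms at several quantization levels to form
    the combined, orientation-robust feature vector."""
    feat = []
    for bins in levels:
        feat.extend(angle_histogram(joints, bins))
    return feat
```

The coarse histograms tolerate large orientation variation while the fine ones preserve discriminative detail; the combined vector would then be fed to a classifier such as a random decision forest.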

A Self Visual-Acuity Testing System based on the Hand-Gesture Recognition by the KS Standard Optotype (KS 표준 시표를 이용한 손-동작 인식 기반의 자가 시력 측정 시스템)

  • Choi, Chang-Yur;Lee, Woo-Beom
    • Journal of the Institute of Convergence Signal Processing / v.11 no.4 / pp.303-309 / 2010
  • We propose a new approach for self-testing of visual acuity using the KS standard optotype. The proposed system provides a hand-gesture recognition method so that subjects can respond conveniently during visual acuity measurement. The system can also measure visual acuity while excluding the examiner's subjective judgment and the subject's memorized guesses, because the computer automatically presents random optotypes without an examiner. In particular, our system guarantees reliability by using the KS standard optotype and its presentation (KS P ISO 8596), defined by the Korean Standards Association in 2006. The system's database management function can easily provide visual acuity data to an EMR client. As a result, our system shows 98% consistency within a ±1 visual acuity level error when compared with a visual acuity chart test.

A Prototype Design for a Real-time VR Game with Hand Tracking Using Affordance Elements

  • Yu-Won Jeong
    • Journal of the Korea Society of Computer and Information / v.29 no.5 / pp.47-53 / 2024
  • In this paper, we propose applying interactive technology in virtual environments to enhance interaction and immersion by inducing more natural movements in the gesture recognition process through the concept of affordance. We propose a technique that recognizes the gestures most similar to actual hand movements by applying a line segment recognition algorithm incorporating sampling and normalization steps. This line segment recognition was applied to drawing magic circles in the <VR Spell> game implemented in this paper. The experiments verified the recognition rates for four line segment recognition actions. This paper proposes a VR game that pursues greater immersion and fun for the user through real-time hand tracking with affordance elements, applied to immersive content in virtual environments such as VR games.
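The sampling and normalization steps mentioned above can be sketched with a simple stroke recognizer: resample the traced stroke to a fixed number of evenly spaced points, normalize for position and scale, and pick the nearest template. This is a generic sketch in the spirit of template-based stroke recognizers, not the paper's algorithm; the point count and templates are assumptions.

```python
import math

def resample(points, n=16):
    """Resample a stroke to n evenly spaced points (the sampling step)."""
    total = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    step, acc = total / (n - 1), 0.0
    pts, out = list(points), [points[0]]
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue measuring from the inserted point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:       # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Translate to the centroid and scale by the bounding-box size."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def recognize(stroke, templates, n=16):
    """Nearest-template classification by mean point-to-point distance."""
    probe = normalize(resample(stroke, n))
    def score(t):
        ref = normalize(resample(t, n))
        return sum(math.dist(a, b) for a, b in zip(probe, ref)) / n
    return min(templates, key=lambda name: score(templates[name]))
```

Resampling makes strokes of different drawing speeds comparable, and normalization removes position and size differences before matching.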

Hand Feature Extraction Algorithm Using Curvature Analysis For Recognition of Various Hand Gestures (다양한 손 제스처 인식을 위한 곡률 분석 기반의 손 특징 추출 알고리즘)

  • Yoon, Hong-Chan;Cho, Jin-Soo
    • Journal of the Korea Society of Computer and Information / v.20 no.5 / pp.13-20 / 2015
  • In this paper, we propose an algorithm that extracts the features required for hand gesture recognition: not only the number of stretched fingers but also whether fingers are held together. The proposed algorithm detects the hand area in the input image with a skin color range filter based on a color model and labeling, and then recognizes various hand gestures by extracting the number of stretched fingers and the attachment state of fingers using curvature information from outlines and feature points. Experimental results show that the recognition rate and frame rate are similar to those of a conventional algorithm, but the number of gestures that can be defined from the extracted features is about four times higher, so the proposed algorithm can recognize a greater variety of gestures.
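The curvature idea can be sketched by measuring the angle formed at each outline point by its neighbors: sharp angles indicate fingertip-like points. This is a toy illustration with assumed parameters (`k`, `max_angle`); a real implementation would also check convexity to separate fingertips from the valleys between fingers, as well as skin-color segmentation to obtain the outline.

```python
import math

def angle_at(prev, pt, nxt):
    """Interior angle (radians) at `pt` formed by its outline neighbors."""
    v1 = (prev[0] - pt[0], prev[1] - pt[1])
    v2 = (nxt[0] - pt[0], nxt[1] - pt[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def count_fingertips(outline, k=1, max_angle=math.pi / 3):
    """Count outline points whose curvature angle is sharper than max_angle.
    `k` is the neighbor offset along the closed outline."""
    n = len(outline)
    return sum(
        1 for i in range(n)
        if angle_at(outline[(i - k) % n], outline[i], outline[(i + k) % n])
        < max_angle
    )
```

On real contours `k` would span several pixels to smooth out noise; here it is 1 because the test outline is already coarse.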

Gesture-Based Emotion Recognition by 3D-CNN and LSTM with Keyframes Selection

  • Ly, Son Thai;Lee, Guee-Sang;Kim, Soo-Hyung;Yang, Hyung-Jeong
    • International Journal of Contents / v.15 no.4 / pp.59-64 / 2019
  • In recent years, emotion recognition has been an interesting and challenging topic. Compared to facial expressions and speech, gesture-based emotion recognition has received little attention, with only a few efforts using traditional hand-crafted methods. Those approaches incur major computational costs and offer few opportunities for improvement, as most of the research community now builds on deep learning. In this paper, we propose an end-to-end deep learning approach for classifying emotions from bodily gestures. Informative keyframes are first extracted from raw videos as input for a 3D-CNN deep network. The 3D-CNN exploits short-term spatiotemporal information of gesture features from the selected keyframes, and convolutional LSTM networks learn long-term features from the 3D-CNN outputs. Experimental results on the FABO dataset exceed most traditional methods and achieve state-of-the-art results among deep learning-based techniques for gesture-based emotion recognition.
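One common way to realize the keyframe selection step is to rank frames by how much they change from their predecessor and keep the top few. The sketch below uses that simple criterion on flattened frame vectors as an assumed stand-in; the paper's actual selection criterion may differ.

```python
def select_keyframes(frames, k):
    """Pick the indices of k informative frames by ranking inter-frame change.
    Each frame is a flat sequence of pixel intensities."""
    if not frames:
        return []
    diffs = [(0.0, 0)] + [
        (sum(abs(a - b) for a, b in zip(frames[i - 1], frames[i])), i)
        for i in range(1, len(frames))
    ]
    top = sorted(diffs, key=lambda d: -d[0])[:k]
    return sorted(i for _, i in top)  # restore temporal order
```

The selected indices would then address the frames fed into the 3D-CNN, so the network sees the moments of greatest motion rather than near-duplicate frames.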

A method for image-based shadow interaction with virtual objects

  • Ha, Hyunwoo;Ko, Kwanghee
    • Journal of Computational Design and Engineering / v.2 no.1 / pp.26-37 / 2015
  • Many researchers have investigated interactive portable projection systems such as mini-projectors. In exhibition halls and museums, there is also a trend toward using interactive projection systems to make viewing more exciting and impressive; they can likewise be applied in art, for example in creating shadow plays. The key idea of interactive portable projection systems is to recognize the user's gestures in real time. In this paper, a vision-based shadow gesture recognition method is proposed for interactive projection systems. The method is based on the screen image obtained by a single web camera. It separates only the shadow area by combining a binary image with the input image, using a learning algorithm that isolates the background from the input image. The region of interest is found by labeling the separated shadow regions, and hand shadows are then isolated using the defects, convex hull, and moments of each region. Hu's invariant moments are used to distinguish hand gestures, and an optical flow algorithm is used to track the fingertip. Several interactive applications developed with this method are presented in this paper.
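Hu's invariant moments, used above to distinguish hand shadows, are built from normalized central moments of a binary region. The sketch below computes only the first invariant (the sum of the two normalized second-order central moments) on a toy binary mask; a full implementation would compute all seven invariants, e.g. via OpenCV's moment functions.

```python
def raw_moment(mask, p, q):
    """Raw image moment m_pq of a binary mask (lists of 0/1 rows)."""
    return sum((x ** p) * (y ** q)
               for y, row in enumerate(mask) for x, v in enumerate(row) if v)

def hu1(mask):
    """First Hu invariant: eta20 + eta02, invariant to translation and scale."""
    m00 = raw_moment(mask, 0, 0)
    cx = raw_moment(mask, 1, 0) / m00
    cy = raw_moment(mask, 0, 1) / m00
    mu20 = sum((x - cx) ** 2 for y, row in enumerate(mask)
               for x, v in enumerate(row) if v)
    mu02 = sum((y - cy) ** 2 for y, row in enumerate(mask)
               for x, v in enumerate(row) if v)
    return (mu20 + mu02) / (m00 ** 2)  # normalization makes it scale-invariant
```

Because the value does not change when the shadow region shifts across the screen, it is a suitable descriptor for matching hand shapes regardless of where they appear.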