• Title/Summary/Keyword: Hand Gesture


Robot User Control System using Hand Gesture Recognizer (수신호 인식기를 이용한 로봇 사용자 제어 시스템)

  • Shon, Su-Won;Beh, Joung-Hoon;Yang, Cheol-Jong;Wang, Han;Ko, Han-Seok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.4
    • /
    • pp.368-374
    • /
    • 2011
  • This paper proposes a human interface for robot control based on a Hidden Markov Model (HMM) hand-signal recognizer. The command-receiving humanoid robot sends webcam images to a client computer, which extracts the commanding human's hand-motion descriptors. Upon feature acquisition, the hand-signal recognizer carries out the recognition procedure, and the result is sent back to the robot for responsive actions. System performance is evaluated on a '48 hand signal set' created randomly from a fundamental hand-motion set. For isolated motion recognition, the '48 hand signal set' achieves a 97.07% recognition rate, compared with 92.4% for the 'baseline hand signal set', validating that the proposed hand-signal recognizer is indeed highly discernible. For connected motions of the '48 hand signal set', it achieves a 97.37% recognition rate. The experiments demonstrate that the proposed system is promising for real-world human-robot interface applications.
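The HMM-based recognition step described above scores an observed motion sequence against per-gesture models and picks the most likely one. A minimal sketch of that idea follows; the states, observation symbols, and probabilities are illustrative toy values, not the paper's actual models:

```python
# Minimal sketch of HMM likelihood scoring, as used conceptually by an
# HMM-based hand-signal recognizer: each gesture class has its own HMM,
# and the class whose model assigns the highest likelihood to the
# observation sequence wins. All values below are illustrative only.

def forward_likelihood(obs, start_p, trans_p, emit_p):
    """Forward algorithm: P(obs | model) for a discrete-emission HMM."""
    states = list(start_p)
    # Initialization with the first observation
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    # Induction over the remaining observations
    for o in obs[1:]:
        alpha = {
            s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
            for s in states
        }
    return sum(alpha.values())

# Two toy gesture models ("up" motion vs. "down" motion); classify by
# maximum likelihood over the same state topology.
start = {"s0": 0.6, "s1": 0.4}
trans = {"s0": {"s0": 0.7, "s1": 0.3}, "s1": {"s0": 0.4, "s1": 0.6}}
emit_up = {"s0": {"up": 0.9, "down": 0.1}, "s1": {"up": 0.8, "down": 0.2}}
emit_down = {"s0": {"up": 0.1, "down": 0.9}, "s1": {"up": 0.2, "down": 0.8}}

seq = ["up", "up", "down"]
score_up = forward_likelihood(seq, start, trans, emit_up)
score_down = forward_likelihood(seq, start, trans, emit_down)
```

A mostly upward motion sequence scores higher under the "up" model, which is the decision rule such a recognizer relies on.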

Implementation of DID interface using gesture recognition (제스쳐 인식을 이용한 DID 인터페이스 구현)

  • Lee, Sang-Hun;Kim, Dae-Jin;Choi, Hong-Sub
    • Journal of Digital Contents Society
    • /
    • v.13 no.3
    • /
    • pp.343-352
    • /
    • 2012
  • In this paper, we implemented a touchless interface for a DID (Digital Information Display) system using gesture recognition techniques that cover both hand-motion and hand-shape recognition. This touchless interface requires no extra attachments and gives the user both easier usage and spatial convenience. For hand-motion recognition, two motion parameters, slope and velocity, are measured for direction-based recognition. For hand-shape recognition, the hand region is extracted using the YCbCr color model and several image-processing methods. These recognition methods are combined to generate various commands, such as next-page, previous-page, screen-up, screen-down, and mouse-click, in order to control the DID system. Finally, experimental results showed a command recognition rate of 93%, which is sufficient to confirm possible application to commercial products.
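The direction-based recognition from slope and velocity described above can be sketched roughly as follows; the velocity threshold, angle sectors, and command names are assumptions for illustration, not the paper's measured parameters:

```python
import math

# Hedged sketch of direction-based hand-motion recognition using the two
# parameters mentioned above (slope and velocity). Thresholds and command
# labels are illustrative assumptions.

def classify_motion(p0, p1, dt):
    """Classify a hand move between two tracked pixel positions over dt seconds."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    velocity = math.hypot(dx, dy) / dt           # pixels per second
    if velocity < 50.0:                          # assumed dead-zone threshold
        return "idle"
    angle = math.degrees(math.atan2(dy, dx))     # slope expressed as an angle
    if -45 <= angle < 45:
        return "next-page"        # rightward swipe
    if 45 <= angle < 135:
        return "screen-down"      # downward swipe (image y grows downward)
    if -135 <= angle < -45:
        return "screen-up"        # upward swipe
    return "previous-page"        # leftward swipe
```

A small dead zone keeps slow, unintentional drift from firing commands, while the four angle sectors map the slope to one of the directional commands.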

Emotional Human Body Recognition by Using Extraction of Human Body from Image (인간의 움직임 추출을 이용한 감정적인 행동 인식 시스템 개발)

  • Song, Min-Kook;Joo, Young-Hoon;Park, Jin-Bae
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.214-216
    • /
    • 2006
  • Expressive faces and human body gestures are among the main non-verbal communication channels in human-human interaction. Understanding human emotions through body gestures is a necessary skill both for humans and for computers that interact with their human counterparts. Gesture analysis consists of several processes, such as hand detection, feature extraction, and emotion recognition. Skin-color information for tracking hand gestures is obtained from the face-detection region. We reveal relationships between particular body movements and specific emotions by using an HMM (Hidden Markov Model) classifier. The performance of the emotional human-body recognition has been evaluated experimentally.


Hand gesture recognition based on RGB image data (RGB 영상 데이터 기반 손동작 인식)

  • Kim, Gi-Duk
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.15-16
    • /
    • 2021
  • In this paper, we propose a hand-gesture recognition method that takes RGB image data as input, applies MediaPipe's hand-pose estimation algorithm to obtain the positions of the finger joints and other key hand landmarks, and trains a deep learning model on these features. From consecutive frames, the coordinates of the key landmarks of one hand are obtained, the x and y components of the frame-difference vectors are stored, and a deep learning model combining Conv1D, Bidirectional GRU, and Transformer layers is trained to classify the hand gestures. Applied to the nine one-hand dynamic gesture classes of the IC4You Gesture Dataset, the method achieved a hand-gesture recognition accuracy of 99.63%.
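The motion feature described above, frame-to-frame difference vectors of hand landmarks, can be sketched as below. In practice the per-frame (x, y) coordinates would come from MediaPipe's hand-pose estimator; here synthetic landmark data stands in, and the Conv1D/BiGRU/Transformer model itself is omitted:

```python
# Sketch of the motion-feature step: given per-frame (x, y) landmark
# coordinates (in practice from a hand-pose estimator such as MediaPipe),
# build the frame-to-frame difference vectors fed to the sequence model.
# The landmark data below is synthetic.

def difference_vectors(frames):
    """frames: list of frames, each a list of (x, y) landmark tuples.
    Returns, for each consecutive frame pair, a list of (dx, dy) per landmark."""
    feats = []
    for prev, cur in zip(frames, frames[1:]):
        feats.append([(cx - px, cy - py)
                      for (px, py), (cx, cy) in zip(prev, cur)])
    return feats

# Three synthetic frames of two landmarks drifting to the right.
frames = [
    [(0.10, 0.50), (0.20, 0.55)],
    [(0.12, 0.50), (0.22, 0.55)],
    [(0.15, 0.51), (0.25, 0.56)],
]
feats = difference_vectors(frames)
```

Each element of `feats` is one time step of per-landmark displacements, which is the shape of input a 1D-convolutional or recurrent sequence model expects.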


Hand Gesture Recognition from Kinect Sensor Data (키넥트 센서 데이터를 이용한 손 제스처 인식)

  • Cho, Sun-Young;Byun, Hye-Ran;Lee, Hee-Kyung;Cha, Ji-Hun
    • Journal of Broadcast Engineering
    • /
    • v.17 no.3
    • /
    • pp.447-458
    • /
    • 2012
  • We present a method to recognize hand gestures using skeletal joint data obtained from Microsoft's Kinect sensor. To represent the observed skeleton sequence, we propose a combined multi-angle histogram feature that is robust to orientation variations. By combining multiple angle histograms with various angular-quantization levels, the proposed feature efficiently represents the orientation variations of gestures that can occur across different people or environments. Representing gestures as combined multi-angle histograms and classifying them with a random decision forest improves recognition performance. We conduct experiments on a hand-gesture dataset captured with a Kinect sensor and show that our method outperforms comparable methods.
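The multi-angle histogram idea above, quantizing joint-displacement angles at several resolutions and concatenating the normalized histograms, can be sketched as follows; the quantization levels (4 and 8 bins) are example values, not the paper's configuration:

```python
import math

# Illustrative sketch of a multi-angle-histogram feature: the angle of each
# joint displacement is quantized at several resolutions, and the normalized
# histograms are concatenated into one descriptor.

def angle_histogram(vectors, bins):
    """Normalized histogram of displacement angles over [0, 2*pi)."""
    hist = [0.0] * bins
    for dx, dy in vectors:
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[min(int(angle / (2 * math.pi) * bins), bins - 1)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def multi_angle_feature(vectors, levels=(4, 8)):
    """Concatenate angle histograms at several quantization levels."""
    feat = []
    for bins in levels:
        feat.extend(angle_histogram(vectors, bins))
    return feat

# Synthetic displacement vectors of skeletal joints between two frames.
vecs = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.1), (0.7, 0.7)]
feature = multi_angle_feature(vecs)
```

Coarse bins tolerate orientation jitter while fine bins keep discriminative detail; concatenating both is what makes the combined feature robust to orientation variation.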

Dynamic Hand Gesture Recognition Using CNN Model and FMM Neural Networks (CNN 모델과 FMM 신경망을 이용한 동적 수신호 인식 기법)

  • Kim, Ho-Joon
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.95-108
    • /
    • 2010
  • In this paper, we present a hybrid neural network model for dynamic hand-gesture recognition. The model consists of two modules: a feature extraction module and a pattern classification module. We first propose a modified CNN (Convolutional Neural Network) pattern recognition model for the feature extraction module, and then introduce a weighted fuzzy min-max (WFMM) neural network for the pattern classification module. The data representation proposed in this research is a spatiotemporal template based on the motion information of the target object. To minimize the influence of spatial and temporal variation of the feature points, we extend the receptive field of the CNN model to a three-dimensional structure. We discuss the learning capability of the WFMM neural network, in which a weight concept is added to represent the frequency factor in the training pattern set. The model can overcome the performance degradation that may be caused by the hyperbox contraction process of conventional FMM neural networks. The validity of the proposed models is discussed based on experimental results for human action recognition and dynamic hand-gesture recognition for remote control of electric home appliances.

A Self Visual-Acuity Testing System based on the Hand-Gesture Recognition by the KS Standard Optotype (KS 표준 시표를 이용한 손-동작 인식 기반의 자가 시력 측정 시스템)

  • Choi, Chang-Yur;Lee, Woo-Beom
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.11 no.4
    • /
    • pp.303-309
    • /
    • 2010
  • We propose a new approach to self-testing of visual acuity using the KS standard optotype. The proposed system provides a hand-gesture recognition method for the convenient response of subjects during the visual-acuity measurement. Because the computer presents random optotypes automatically without an examiner, the system can measure visual acuity while excluding the examiner's subjective judgement and the subject's memorized guesses. In particular, our system guarantees reliability by using the KS standard optotype and its presentation (KS P ISO 8596), defined by the Korea Standards Association in 2006. The system's database management function can also provide the visual-acuity data to an EMR client easily. As a result, our system shows 98% consistency within a ${\pm}1$ visual-acuity level error when compared against a visual-acuity chart test.

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.6
    • /
    • pp.161-170
    • /
    • 2024
  • With recent technological advances, sight, motion, sound, and speech are increasingly used for application and software control. In this paper, we explore a project in which hand gestures play the central role: controlling computer settings with hand gestures using computer vision. We create a module that acts as a volume-control program, in which hand gestures control the system volume, implemented with OpenCV. The module uses the computer's web camera to record images or videos, processes them to extract the needed information, and then, based on the input, adjusts the volume settings of that computer; it supports both increasing and decreasing the volume. The only setup needed is a web camera to capture the user's input images and videos. The program performs gesture recognition with OpenCV, Python, and their libraries, identifies the specified human gestures, and uses them to carry out the changes in the device settings. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV is a widely used tool for image processing and computer vision applications in this domain; its community consists of over 47,000 individuals, and, as of a survey conducted in 2020, its estimated number of downloads exceeds 18 million.
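The core mapping in such a pipeline, from a detected finger spread to a volume level, can be sketched independently of the camera code. In the full system, OpenCV and a hand tracker would supply the thumb-tip and index-tip pixel positions; the calibration range below (20-200 px) is an assumed example:

```python
import math

# Sketch of the gesture-to-volume mapping step. The thumb and index
# positions would come from a camera-based hand tracker; only the mapping
# from finger spread to a volume percentage is shown. The pixel range
# used for calibration is an illustrative assumption.

def spread_to_volume(thumb, index, min_px=20.0, max_px=200.0):
    """Map thumb-index distance (pixels) to a 0-100 volume level."""
    dist = math.hypot(index[0] - thumb[0], index[1] - thumb[1])
    # Clamp to the calibrated range, then scale linearly to 0-100.
    dist = max(min_px, min(max_px, dist))
    return round((dist - min_px) / (max_px - min_px) * 100)
```

Pinching the fingers together drives the volume toward 0, while spreading them drives it toward 100; the clamp keeps tracker noise outside the calibrated range from producing out-of-range levels.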

A Hand Gesture Recognition System using 3D Tracking Volume Restriction Technique (3차원 추적영역 제한 기법을 이용한 손 동작 인식 시스템)

  • Kim, Kyung-Ho;Jung, Da-Un;Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.6
    • /
    • pp.201-211
    • /
    • 2013
  • In this paper, we propose a hand-tracking and gesture-recognition system. Our system employs a depth-capture device to obtain 3D geometric information about the user's bare hand. In particular, we build a flexible tracking volume and restrict the hand-tracking area, so that we can avoid the diverse problems of conventional object detection/tracking systems. The proposed system computes a running average of the hand position, and the tracking volume is actively adjusted according to statistical information computed from the uncertainty of the user's hand motion in 3D space. Once the position of the user's hand is obtained, the system attempts to detect stretched fingers to recognize finger gestures. To test the proposed framework, we built an NUI system using the proposed technique and verified that it performs very stably even when multiple objects exist simultaneously in a crowded environment, as well as when the scene is temporarily occluded. We also verified that our system maintains a running speed of 24-30 frames per second throughout the experiments.
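The adaptive tracking volume described above, a running average of the 3D hand position with a search radius tied to recent motion spread, can be sketched as below; the window size, radius bounds, and scaling factor are assumptions, not the paper's parameters:

```python
from collections import deque
import math

# Illustrative sketch of an adaptive 3D tracking volume: keep a running
# average of the hand position and grow or shrink the search radius with
# the spread (uncertainty) of recent motion. All parameters are assumed
# example values.

class TrackingVolume:
    def __init__(self, window=10, min_r=0.1, max_r=0.5):
        self.hist = deque(maxlen=window)   # recent hand positions
        self.min_r, self.max_r = min_r, max_r

    def update(self, pos):
        """Add a new (x, y, z) hand position; return (center, radius)."""
        self.hist.append(pos)
        n = len(self.hist)
        center = tuple(sum(p[i] for p in self.hist) / n for i in range(3))
        # Mean distance from the center as a simple uncertainty estimate.
        spread = sum(math.dist(p, center) for p in self.hist) / n
        radius = max(self.min_r, min(self.max_r, 3.0 * spread))
        return center, radius

tv = TrackingVolume()
center, radius = tv.update((0.0, 0.0, 0.0))
center, radius = tv.update((1.0, 0.0, 0.0))
```

Restricting detection to this moving sphere is what lets such a tracker ignore other objects in a crowded scene: anything outside the volume is never considered a hand candidate.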

Hand Language Translation Using Kinect

  • Pyo, Junghwan;Kang, Namhyuk;Bang, Jiwon;Jeong, Yongjin
    • Journal of IKEEE
    • /
    • v.18 no.2
    • /
    • pp.291-297
    • /
    • 2014
  • Since improved image-processing algorithms made hand-gesture recognition practical, sign language translation has been a critical application for the hearing-impaired. In this paper, we extract human hand figures from a real-time image stream and detect gestures in order to determine which sign they represent. We use depth-color calibrated images from the Kinect to extract the hands and build a decision tree to recognize the hand gesture. The decision tree uses information such as the number of fingers, contours, and the hand's position inside a uniformly sized image. We succeeded in recognizing 'Hangul', the Korean alphabet, with a recognition rate of 98.16%. The average execution time per letter was about 76.5 ms, a reasonable speed considering that sign language translation is based on almost-still images. We expect this research to help communication between the hearing-impaired and other people who do not know sign language.
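A decision tree over simple hand descriptors, as described above, can be sketched in a few branches. The labels and thresholds below are invented placeholders for illustration, not the actual Korean-alphabet rules from the paper:

```python
# Minimal sketch of decision-tree style hand-gesture classification:
# simple descriptors (finger count, normalized hand-center position) are
# checked in sequence. Labels and thresholds are hypothetical placeholders.

def classify_hand(finger_count, cx, cy):
    """cx, cy: hand-center coordinates normalized to [0, 1]."""
    if finger_count == 0:
        return "fist"
    if finger_count == 5:
        return "open-hand"
    # Fewer than five extended fingers: disambiguate by where the
    # hand is held inside the uniformly sized image.
    if cy < 0.5:
        return "upper-sign"
    return "lower-sign" if cx < 0.5 else "side-sign"
```

Each internal node tests one cheap descriptor, which is why such a tree can run in tens of milliseconds per frame on near-still images.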