• Title/Summary/Keyword: Hand Gesture Detection


Vision and Depth Information based Real-time Hand Interface Method Using Finger Joint Estimation (손가락 마디 추정을 이용한 비전 및 깊이 정보 기반 손 인터페이스 방법)

  • Park, Kiseo;Lee, Daeho;Park, Youngtae
    • Journal of Digital Convergence / v.11 no.7 / pp.157-163 / 2013
  • In this paper, we propose a vision and depth information based real-time hand gesture interface method using finger joint estimation. First, the left and right hand areas are segmented after the visual image and the depth image are mapped to each other, and labeling and boundary noise removal are performed. Then, the centroid point and rotation angle of each hand area are calculated. Afterwards, circles are expanded outward from the centroid of the hand, and the finger joint points and fingertips are detected as the midpoints of the intervals where each circle crosses the hand boundary, from which the hand model is recognized. Experimental results show that our method distinguishes fingertips and recognizes various hand gestures quickly and accurately. In experiments on various hand poses with hidden fingers, using both hands, the accuracy exceeded 90% and the frame rate exceeded 25 fps. The proposed method can be used as a contactless input interface in HCI control, education, and game applications.
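
As a toy illustration of the circle-expansion step described above, the sketch below samples circles of growing radius around the centroid of a synthetic binary hand mask and takes the midpoints of the runs where each circle lies inside the hand. This is a minimal sketch, not the authors' implementation; the mask shape, radii, and sampling density are hypothetical.

```python
import numpy as np

def crossing_midpoints(mask, cx, cy, radius, n=360):
    """Sample a circle of the given radius around (cx, cy) on a binary
    hand mask and return one midpoint per run of inside-hand samples.
    Each run corresponds to a finger (or palm) section the circle crosses."""
    angles = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((cx + radius * np.cos(angles)).astype(int), 0, mask.shape[1] - 1)
    ys = np.clip((cy + radius * np.sin(angles)).astype(int), 0, mask.shape[0] - 1)
    inside = mask[ys, xs] > 0
    # indices where the inside/outside state changes (run boundaries)
    edges = np.flatnonzero(inside != np.roll(inside, 1))
    mids = []
    for start, end in zip(edges[::2], edges[1::2]):
        mid = (start + end) // 2
        mids.append((xs[mid], ys[mid]))
    return mids

# hypothetical mask: a square "palm" with one rectangular "finger"
mask = np.zeros((100, 100), np.uint8)
mask[40:80, 30:70] = 1        # palm
mask[10:40, 45:55] = 1        # finger
ys, xs = np.nonzero(mask)
cy, cx = ys.mean(), xs.mean()  # centroid of the hand area

mids = crossing_midpoints(mask, cx, cy, 40)   # a radius that crosses the finger
print(mids)
```

In the full method such midpoints, collected over several radii, trace each finger from joint to fingertip.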

Segmentation of Pointed Objects for Service Robots (서비스 로봇을 위한 지시 물체 분할 방법)

  • Kim, Hyung-O;Kim, Soo-Hwan;Kim, Dong-Hwan;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.4 no.2 / pp.139-146 / 2009
  • This paper describes how a robot segments an unknown object that a person indicates with a pointing gesture while interacting with it. Using a stereo vision sensor, our proposed method consists of three stages: detection of the operator's face, estimation of the pointing direction, and extraction of the pointed object. The operator's face is detected using Haar-like features. We then estimate the 3D pointing direction from the shoulder-to-hand line. Finally, we segment the unknown object from the 3D point cloud in the estimated region of interest. On the basis of this method, we implemented an object registration system on our mobile robot and obtained reliable experimental results.
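
The shoulder-to-hand pointing estimate can be sketched as simple ray geometry: extend the 3D line from shoulder through hand until it meets a support plane, giving the region of interest for segmentation. The coordinates and the flat-floor assumption below are hypothetical, not from the paper.

```python
import numpy as np

def pointed_location(shoulder, hand, plane_z=0.0):
    """Extend the 3D shoulder-to-hand line until it meets the horizontal
    plane z = plane_z and return the intersection point (the region of
    interest for object segmentation)."""
    shoulder = np.asarray(shoulder, float)
    hand = np.asarray(hand, float)
    direction = hand - shoulder
    if abs(direction[2]) < 1e-9:
        raise ValueError("pointing direction is parallel to the plane")
    t = (plane_z - shoulder[2]) / direction[2]
    if t <= 0:
        raise ValueError("plane is behind the operator")
    return shoulder + t * direction

# hypothetical coordinates in metres (x right, y forward, z up)
shoulder = (0.0, 0.0, 1.4)   # operator's shoulder
hand = (0.1, 0.4, 1.0)       # operator's hand
print(pointed_location(shoulder, hand))   # floor point being pointed at
```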


Detection of Hand Gesture and its Description for Wearable Applications in IoMTW (IoMTW 에서의 웨어러블 응용을 위한 손 제스처 검출 및 서술)

  • Yang, Anna;Park, Do-Hyun;Chun, Sungmoon;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.06a / pp.338-339 / 2016
  • Hand gestures are emerging as a NUI (Natural User Interface) for wearable devices such as smart glasses, which requires hand gesture detection and recognition capabilities. In addition, MPEG has recently been conducting a preliminary exploration of IoMTW (Media-centric IoT and Wearable) as a standard for media consumption in IoT (Internet of Things) environments, and metadata for describing hand gestures is being discussed as one of the standard technology elements. In this paper, as part of hand gesture recognition in a smart glasses environment, we present a technique for detecting hand contours from stereo images and representing them as Bezier curves so that they can be described as metadata.


Detection of Hand Gesture and its Recognition for Wearable Applications in IoMTW (IoMTW 에서의 웨어러블 응용을 위한 손 제스처 검출 및 인식)

  • Yang, Anna;Hong, Jeong Hun;Kang, Han;Chun, Sungmoon;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.33-35 / 2016
  • Hand gestures are attracting attention as a means of implementing a NUI (Natural User Interface) for wearable devices such as smart glasses. MPEG has recently been standardizing IoMTW (Internet of Media-Things and Wearables) to support media consumption in IoT (Internet of Things) and wearable environments. In this paper, we present a hand gesture detection and recognition technique for controlling wearable devices and media consumption with hand gestures as the NUI of a wearable device. The proposed technique detects the hand contour from stereo images using depth and color information, represents it as a Bezier curve, and recognizes gestures from the represented contour based on features such as the number of fingers.
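
The Bezier representation of a contour segment can be evaluated with de Casteljau's algorithm, as in the minimal sketch below. The control points are hypothetical; the paper's fitting procedure for obtaining them from the detected contour is not reproduced here.

```python
import numpy as np

def bezier_point(control, t):
    """Evaluate a Bezier curve at parameter t using de Casteljau's
    algorithm. `control` is an (n, 2) array of control points; the
    curve passes through the first and last of them."""
    pts = np.asarray(control, float)
    while len(pts) > 1:
        # repeatedly interpolate between consecutive points
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# hypothetical cubic segment approximating part of a hand contour
control = [(0, 0), (10, 30), (30, 30), (40, 0)]
curve = np.array([bezier_point(control, t) for t in np.linspace(0, 1, 20)])
print(curve[0], curve[-1])   # starts and ends at the contour endpoints
```

A full contour would be described as a chain of such segments, so the metadata reduces to a short list of control points instead of every boundary pixel.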


Hand Motion Gesture Recognition at A Distance with Skin-color Detection and Feature Points Tracking (피부색 검출 및 특징점 추적을 통한 원거리 손 모션 제스처 인식)

  • Yun, Jong-Hyun;Kim, Sung-Young
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.594-596 / 2012
  • In this paper, we propose a method that tracks global hand motion based on skin-color detection and recognizes gestures by generating motion vectors. For tracking, we use the Shi-Tomasi feature point detection method and the Lucas-Kanade optical flow estimation method. Because the shape of the hand varies widely during motion, the common approach of continuously tracking the initially detected feature points cannot track hand motion properly. Therefore, we detect new feature points in every frame, estimate the optical flow, and remove outliers so that motion vectors can be generated even as the hand shape changes. The motion vectors are then classified with an artificial neural network to finally recognize the hand motion gesture.
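
The outlier-removal and motion-vector step can be sketched as below. In practice the point matches would come from Shi-Tomasi detection plus Lucas-Kanade flow (e.g. OpenCV's goodFeaturesToTrack and calcOpticalFlowPyrLK); here the matches are hypothetical, and the median-absolute-deviation rejection rule is one plausible choice, not necessarily the paper's.

```python
import numpy as np

def global_motion_vector(prev_pts, next_pts, k=3.0):
    """Average per-feature displacements into one global motion vector,
    discarding outliers whose distance from the median displacement
    exceeds k times the median absolute deviation (MAD)."""
    d = np.asarray(next_pts, float) - np.asarray(prev_pts, float)
    med = np.median(d, axis=0)                   # robust central displacement
    dev = np.linalg.norm(d - med, axis=1)
    mad = np.median(dev)
    inliers = d[dev <= k * mad] if mad > 0 else d
    return inliers.mean(axis=0)

# hypothetical matches: most features move ~(5, 0); the last is a mismatch
prev_pts = [(10, 10), (20, 15), (30, 20), (40, 25)]
next_pts = [(15, 10), (25, 15), (35, 21), (90, 80)]
print(global_motion_vector(prev_pts, next_pts))
```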

Face Detection-based Hand Gesture Recognition in Color and Depth Images (색상 및 거리 영상에서의 얼굴검출 기반 손 제스처 인식)

  • Jeon, Hun-Ki;Ko, Jaepil
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.580-582 / 2012
  • In this paper, we propose a method that detects the hand region by combining real-time skin-color modeling based on face detection with depth information, together with a rule-based recognition method for directional and circular gestures of the moving hand. Unlike previous approaches, instead of using absolute hand coordinates, we use the difference between the hand coordinates of the previous and current frames to delimit the gesture interval, which accommodates speed variations in natural gesture motion. Experimental data were collected from five subjects, each performing five gestures (four directions and a circle) ten times. Recognition experiments on these data showed a recognition rate of 97%.
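
A rule-based labelling of frame-to-frame coordinate differences might look like the sketch below (directions only; the circle gesture is omitted). The threshold and rules are illustrative assumptions, not the paper's exact values.

```python
def classify_direction(dx, dy, min_move=10):
    """Rule-based labelling of a frame-to-frame hand displacement:
    returns 'left'/'right'/'up'/'down', or None while the hand is
    (nearly) still, so consecutive frames can be grouped into a
    gesture interval regardless of gesture speed."""
    if abs(dx) < min_move and abs(dy) < min_move:
        return None                       # too small: not part of a gesture
    if abs(dx) >= abs(dy):
        return 'right' if dx > 0 else 'left'
    return 'down' if dy > 0 else 'up'     # image y grows downwards

# hypothetical per-frame hand-coordinate differences
moves = [(2, 1), (25, 3), (30, -4), (3, 2)]
print([classify_direction(dx, dy) for dx, dy in moves])
```

Runs of identical non-None labels form one gesture interval, which is why the coordinate difference rather than the absolute position is the natural input.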

Fast Convergence GRU Model for Sign Language Recognition

  • Subramanian, Barathi;Olimov, Bekhzod;Kim, Jeonghong
    • Journal of Korea Multimedia Society / v.25 no.9 / pp.1257-1265 / 2022
  • Recognition of sign language is challenging due to the occlusion of hands, the required accuracy of hand gestures, and high computational costs. In recent years, deep learning techniques have made significant advances in this field. Although larger and more complex, these methods cannot manage long-term sequential data and lack the ability to capture useful information through efficient processing with fast convergence. To overcome these challenges, we propose a word-level sign language recognition (SLR) system that combines a real-time human pose detection library with a minimized version of the gated recurrent unit (GRU) model. Each gate unit is optimized by discarding the depth-weighted reset gate in GRU cells and considering only the current input. Furthermore, we use sigmoid rather than hyperbolic tangent activation in standard GRUs, due to the performance loss associated with the latter in deeper networks. Experimental results demonstrate that our pose-based optimized GRU (Pose-OGRU) outperforms the standard GRU model in prediction accuracy, convergence, and information processing capability.
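
One plausible reading of the minimized GRU cell is sketched below: the reset gate is removed, the candidate state considers only the current input, and sigmoid replaces tanh. The weight shapes, initialisation, and update form are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MinimalGRUCell:
    """Illustrative sketch of a GRU cell with no reset gate, a candidate
    state computed from the current input only, and sigmoid activation."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_size)
        # only an update gate (z) and a candidate: no reset gate
        self.Wz = rng.normal(0, scale, (hidden_size, input_size))
        self.Uz = rng.normal(0, scale, (hidden_size, hidden_size))
        self.Wh = rng.normal(0, scale, (hidden_size, input_size))

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)  # update gate
        h_cand = sigmoid(self.Wh @ x)           # current input only; sigmoid, not tanh
        return (1 - z) * h + z * h_cand         # blend old state and candidate

# run a short hypothetical pose-feature sequence through the cell
cell = MinimalGRUCell(input_size=4, hidden_size=8)
h = np.zeros(8)
for x in np.random.default_rng(1).normal(size=(5, 4)):
    h = cell.step(x, h)
print(h.shape)
```

Dropping the reset gate removes one weighted matrix product per step, which is where the faster convergence and lower cost would come from.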

Hand Motion Signal Extraction Based on Electric Field Sensors Using PLN Spectrum Analysis (PLN 성분 분석을 통한 전기장센서 기반 손동작신호 추출)

  • Jeong, Seonil;Kim, Youngchul
    • Smart Media Journal / v.9 no.4 / pp.97-101 / 2020
  • Using a passive electric field sensor operating in non-contact mode, we can measure the electric potential induced by the change of electric charge on the sensor caused by the movement of a human body or hand. In this study, we propose a new method that uses the power line noise (PLN) induced in the sensor around a moving object to detect hand movement and extract gesture frames from the detected signals. Signals from EPS sensors include a large amount of power line noise, which is usually present in places such as rooms or buildings. Exploiting the fact that the PLN is partly shielded when a person approaches the sensor, signals caused by body or hand movement can be detected. PLN consists mainly of a 60 Hz signal and its harmonics. In our proposed method, only the 120 Hz component in the frequency domain is selected and used for detecting hand movement; we use the FFT to isolate this spectral component. The signal obtained in this way is continuously compared with a preset threshold, and once motion signals crossing the threshold are detected, we determine the motion frame from the period between the first and last threshold crossings. The motion detection rate of our proposed method was about 90%, while the correct frame extraction rate was about 85%. Methods like ours, which use the PLN signal to extract useful motion data from non-contact EPS sensors, have rarely been reported in recent literature. These results are expected to be especially useful in environments with surrounding PLN.
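
The 120 Hz tracking-and-thresholding idea can be sketched on a synthetic signal: steady 60/120 Hz PLN that is attenuated while a hand shields the sensor. The sampling rate, window length, attenuation factor, and threshold rule below are hypothetical, not the paper's parameters.

```python
import numpy as np

FS = 1000          # hypothetical sampling rate (Hz)
WIN = 250          # analysis window length (samples)

def pln_120hz_level(window, fs=FS):
    """Magnitude of the 120 Hz spectral component of one sensor window."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    bin_120 = int(round(120 * len(window) / fs))
    return spectrum[bin_120]

# synthetic signal: 60 Hz PLN plus its 120 Hz harmonic; samples 2000..2999
# are attenuated, mimicking a hand shielding part of the PLN
t = np.arange(8000) / FS
x = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
x[2000:3000] *= 0.2

levels = np.array([pln_120hz_level(x[i:i + WIN])
                   for i in range(0, len(x) - WIN, WIN)])
threshold = 0.5 * np.median(levels)
active = levels < threshold            # a PLN drop marks the hand's presence
first, last = np.flatnonzero(active)[[0, -1]]
print("motion frame: windows", first, "to", last)
```

The motion frame is then the span between the first and last windows whose 120 Hz level stays below the threshold, matching the first/last-crossing rule in the abstract.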

Object Detection and Optical Character Recognition for Mobile-based Air Writing (모바일 기반 Air Writing을 위한 객체 탐지 및 광학 문자 인식 방법)

  • Kim, Tae-Il;Ko, Young-Jin;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.5 / pp.53-63 / 2019
  • To provide a hand gesture interface through deep learning in mobile environments, research on light-weighting networks is essential to achieve high recognition rates without degrading execution speed. This paper proposes a method for real-time recognition of characters written in the air with a finger on mobile devices, based on a light-weight deep learning model. Using SSD (Single Shot Detector), an object detection model with MobileNet as its feature extractor, the method detects the index finger and generates a text image by following the fingertip path. The image is then sent to a server, where the characters are recognized by a trained OCR model. To verify our method, 12 users wrote 1,000 words on a GALAXY S10+; the finger was recognized with an average accuracy of 88.6%, and the recognized text was produced within 124 ms, showing that the method can be used in real time. The results of this research can be used to send simple text messages, write memos, and make air signatures with a finger in mobile environments.
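
The "follow the fingertip path into a text image" step can be sketched as simple rasterisation. In the real system the per-frame fingertip positions would come from the SSD/MobileNet detector; here the path, canvas size, and pen radius are hypothetical.

```python
import numpy as np

def path_to_image(points, size=64, radius=1):
    """Rasterise a sequence of fingertip positions (x, y in [0, 1]) onto
    a blank canvas, producing the text image that is sent to the OCR
    model. Each detection stamps a small square 'pen' mark."""
    img = np.zeros((size, size), np.uint8)
    for x, y in points:
        cx, cy = int(x * (size - 1)), int(y * (size - 1))
        img[max(0, cy - radius):cy + radius + 1,
            max(0, cx - radius):cx + radius + 1] = 255
    return img

# hypothetical fingertip path tracing a rough "L" shape
path = [(0.2, 0.1 + 0.05 * i) for i in range(15)] + \
       [(0.2 + 0.05 * i, 0.8) for i in range(10)]
img = path_to_image(path)
print(img.shape, int(img.max()))
```

With sparse detections, consecutive points would be joined by line segments rather than stamped individually, but the dense path above makes the simpler version sufficient for illustration.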

Automatic Coarticulation Detection for Continuous Sign Language Recognition (연속된 수화 인식을 위한 자동화된 Coarticulation 검출)

  • Yang, Hee-Deok;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications / v.36 no.1 / pp.82-91 / 2009
  • Sign language spotting is the task of detecting and recognizing signs in a signed utterance. Its difficulty lies in the fact that occurrences of signs vary in both motion and shape. Moreover, signs appear within a continuous gesture stream, interspersed with transitional movements between vocabulary signs and with non-sign patterns (which include out-of-vocabulary signs, epentheses, and other movements that do not correspond to signs). In this paper, a novel method for designing a threshold model within a conditional random field (CRF) model is proposed. The proposed model applies an adaptive threshold for distinguishing between vocabulary signs and non-sign patterns. A hand appearance-based sign verification method, a short-sign detector, and a subsign reasoning method are included to further improve spotting accuracy. Experimental results show that the proposed method can detect signs in continuous data with an 88% spotting rate and recognize signs in isolated data with a 94% recognition rate, versus 74% and 90%, respectively, for CRFs without the threshold model, short-sign detector, subsign reasoning, and hand appearance-based sign verification.
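
The threshold-model decision can be caricatured as below: accept the best-scoring vocabulary sign only if it beats the score of a non-sign (threshold) model. This is a heavy simplification of the CRF formulation; the scores and the decision rule are hypothetical.

```python
def spot_sign(scores, threshold_score):
    """Adaptive-threshold decision in the spirit of a threshold model:
    return the best-scoring vocabulary sign only if its score exceeds
    the non-sign model's score; otherwise report a non-sign pattern."""
    best = max(scores, key=scores.get)
    if scores[best] > threshold_score:
        return best
    return None        # transitional movement or out-of-vocabulary sign

# hypothetical per-sign scores for one segment of the gesture stream
scores = {'hello': 0.42, 'thanks': 0.17, 'sorry': 0.08}
print(spot_sign(scores, threshold_score=0.30))
print(spot_sign(scores, threshold_score=0.55))
```

Because the threshold is itself a model score rather than a fixed constant, the rejection boundary adapts to how plausible the segment is as any kind of movement.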