• Title/Summary/Keyword: hand-gesture-based (손동작 기반)


Vision based Fast Hand Motion Recognition Method for an Untouchable User Interface of Smart Devices (스마트 기기의 비 접촉 사용자 인터페이스를 위한 비전 기반 고속 손동작 인식 기법)

  • Park, Jae Byung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.9
    • /
    • pp.300-306
    • /
    • 2012
  • In this paper, we propose a vision-based hand motion recognition method for an untouchable user interface of smart devices. First, the original color image is converted to gray scale and its spatial resolution is reduced, taking the small memory and low computational power of smart devices into consideration. For robust recognition of hand motions through separation of horizontal and vertical motions, a horizontal principal area (HPA) and a vertical principal area (VPA) are defined. From the difference images of consecutively captured frames, the center of gravity (CoG) of the pixels significantly changed by hand motion is obtained, and the direction of the motion is detected by fitting a least-mean-squared line to the CoG over time. The feasibility of the proposed method is verified through experiments with a vision system.
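
The difference-image idea can be sketched in a few lines. This is a minimal numpy illustration of the CoG-and-line-fit steps; the frame sizes, change threshold, and toy moving patch are assumptions, and the paper's HPA/VPA separation is omitted:

```python
import numpy as np

def center_of_gravity(prev_gray, curr_gray, thresh=30):
    """CoG of pixels that changed significantly between two gray frames."""
    diff = np.abs(curr_gray.astype(int) - prev_gray.astype(int))
    ys, xs = np.nonzero(diff > thresh)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

def motion_direction(cogs):
    """Fit least-squares lines x(t) and y(t) to the CoG history; the larger
    slope decides horizontal vs. vertical, its sign decides the direction."""
    t = np.arange(len(cogs))
    xs = np.array([c[0] for c in cogs])
    ys = np.array([c[1] for c in cogs])
    sx = np.polyfit(t, xs, 1)[0]  # horizontal velocity, pixels/frame
    sy = np.polyfit(t, ys, 1)[0]  # vertical velocity
    if abs(sx) >= abs(sy):
        return "right" if sx > 0 else "left"
    return "down" if sy > 0 else "up"

# toy sequence: a bright 10x10 patch sweeping rightward over a dark frame
frames = []
for step in range(5):
    f = np.zeros((60, 80), dtype=np.uint8)
    f[25:35, 10 + 10 * step:20 + 10 * step] = 255
    frames.append(f)
cogs = [center_of_gravity(a, b) for a, b in zip(frames, frames[1:])]
direction = motion_direction(cogs)  # "right" for this toy sequence
```
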

Recognition of Natural Hand Gesture by Using HMM (HMM을 이용한 자연스러운 손동작 인식)

  • Kim, A-Ram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.5
    • /
    • pp.639-645
    • /
    • 2012
  • In this paper, we propose a method for commanding a mobile robot by recognizing human hand gestures. Previous hand-based robot control systems used several kinds of pre-arranged gestures, so commanding the robot felt unnatural; users were also forced to learn the pre-arranged gestures, which made the systems inconvenient. To solve this problem, much research has explored other ways for machines to recognize hand movement. In this paper, we use a 3D camera to obtain color and depth data, from which the human hand is located and its movement recognized. We apply an HMM to recognize the movement, and the observed result is transferred to the robot, making it move in the intended direction.
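
The hand-search step can be illustrated with a common depth-camera heuristic. Assuming (the abstract does not specify) that the hand is the object nearest the camera, a minimal numpy sketch is:

```python
import numpy as np

def nearest_region_mask(depth, margin=100):
    """Assume the hand is the object closest to the camera: keep pixels
    within `margin` depth units of the nearest valid reading."""
    valid = depth > 0  # 0 means no depth reading
    if not valid.any():
        return np.zeros(depth.shape, dtype=bool)
    dmin = depth[valid].min()
    return valid & (depth <= dmin + margin)

# toy scene: wall at 1500 mm, hand at 800 mm, one invalid pixel
depth = np.full((40, 40), 1500, dtype=np.uint16)
depth[5:15, 5:15] = 800
depth[0, 0] = 0
mask = nearest_region_mask(depth)  # True only on the 10x10 hand region
```

In practice the mask would be intersected with a skin-color mask from the color stream before tracking.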

An Implementation and Performance Analysis of an Emotion Messenger Based on Dynamic Gesture Recognition using WebCAM (웹캠을 이용한 동적 제스쳐 인식 기반의 감성 메신저 구현 및 성능 분석)

  • Lee, Won-Joo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.7
    • /
    • pp.75-81
    • /
    • 2010
  • In this paper, we propose an emotion messenger that recognizes a user's face or hand gestures with a webcam, converts the recognized emotions (joy, anger, grief, happiness) to flashcons, and transmits them to the counterpart. The messenger consists of a face recognition module, a hand gesture recognition module, and a messenger module. The face recognition module converts the eye and mouth regions to binary images and recognizes wink, kiss, and yawn from shape changes of the eyes and mouth. The hand gesture recognition module recognizes gawi-bawi-bo (scissors-rock-paper) from the number of recognized fingers. The messenger module converts the recognized wink, kiss, yawn, and gawi-bawi-bo gestures to flashcons and transmits them to the counterpart. Through simulation, we confirmed that the CPU share of the emotion messenger is minimal, and that the hand gesture recognition module achieves a higher recognition ratio than the face recognition module.
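
Mapping a recognized finger count to gawi-bawi-bo might look like the following sketch; the count thresholds are illustrative assumptions, not the paper's values:

```python
def classify_gawi_bawi_bo(finger_count):
    """Map a recognized finger count to gawi-bawi-bo (scissors-rock-paper).
    The thresholds are illustrative assumptions, not the paper's values."""
    if finger_count == 0:
        return "bawi"  # rock: closed fist
    if finger_count <= 2:
        return "gawi"  # scissors: one or two extended fingers
    return "bo"        # paper: open hand

result = [classify_gawi_bawi_bo(n) for n in (0, 2, 5)]
```
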

HMM-based Intent Recognition System using 3D Image Reconstruction Data (3차원 영상복원 데이터를 이용한 HMM 기반 의도인식 시스템)

  • Ko, Kwang-Enu;Park, Seung-Min;Kim, Jun-Yeup;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.2
    • /
    • pp.135-140
    • /
    • 2012
  • The mirror neuron system in the cerebrum handles visual-information-based imitative learning. When we observe its activity, the intention behind an action can be inferred from the progress of neural activation over a specific range, even when part of that range is hidden. The goal of this paper is to apply such imitative learning to a 3D vision-based intelligent system. In our previous research, we reconstructed 3D images acquired from a stereo camera using optical flow and an unscented Kalman filter; here, the 3D input is a sequence of continuous images that include partially hidden ranges. We use a Hidden Markov Model (HMM) to recognize the intention of an action from the reconstruction of the hidden range, since the HMM's dynamic inference over sequential input data suits tasks such as hand gesture recognition under occlusion. Building on the object outline and feature extraction of our previous work, we generate temporally continuous feature vectors, apply them to the HMM, and classify hand gestures according to intention patterns. The classification results, given as posterior probabilities, demonstrate the accuracy of the approach.
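
The HMM classification step, picking the intention whose model best explains the observation sequence and reporting it as a posterior probability, can be sketched with the scaled forward algorithm. The two toy "push"/"pull" intention models below are assumptions, not the paper's models:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence under an HMM (pi: initial, A: transition, B: emission)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    logp = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        logp += np.log(c)
        alpha = alpha / c
    return logp

def intention_posterior(obs, models):
    """Posterior over intention models, assuming equal priors."""
    logs = np.array([forward_loglik(obs, pi, A, B) for pi, A, B in models])
    w = np.exp(logs - logs.max())
    return w / w.sum()

# two toy intention models: "push" mostly emits symbol 0, "pull" symbol 1
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_push = np.array([[0.9, 0.1], [0.8, 0.2]])
B_pull = np.array([[0.1, 0.9], [0.2, 0.8]])
post = intention_posterior([0, 0, 0, 0], [(pi, A, B_push), (pi, A, B_pull)])
```
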

Wearable Input Device for Incorporating Real-World into Virtual Reality (가상현실과 실세계 정합을 위한 웨어러블 입력장치)

  • Park, Ki-Hong;Lee, Hyun-Jik;Kim, Yoon-Ho
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.2
    • /
    • pp.319-325
    • /
    • 2011
  • In this paper, we propose a model for matching virtual reality with the real world for people with limited mobility. The proposed matching model consists of four parts: wearable-input-device-based PC control, hand-motion pattern recognition, application software, and matching between virtual reality and the real world. RF communication is used to recognize mouse functions and hand-motion patterns from the six-axis coordinates of the wearable input device. In addition, to make the real world easy to control, the virtual reality is implemented to reflect it realistically. Experiments conducted to verify the proposed model show that both hand-motion recognition and virtual reality control perform well.
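
The six-axis-to-mouse mapping is not detailed in the abstract; a hypothetical tilt-to-cursor sketch (the gain and deadzone parameters are invented for illustration) shows one common approach:

```python
import math

def accel_to_cursor(ax, ay, az, gain=20.0, deadzone=0.05):
    """Map accelerometer tilt (in g) to a cursor delta: roll drives x,
    pitch drives y; small tilts inside the deadzone are ignored."""
    roll = math.atan2(ax, az)   # left/right tilt, radians
    pitch = math.atan2(ay, az)  # forward/back tilt
    dx = 0.0 if abs(roll) < deadzone else gain * roll
    dy = 0.0 if abs(pitch) < deadzone else gain * pitch
    return dx, dy

flat = accel_to_cursor(0, 0, 1)          # device level: no movement
tilted = accel_to_cursor(0.5, 0, 0.866)  # rolled right: positive dx
```
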

Automatic Coarticulation Detection for Continuous Sign Language Recognition (연속된 수화 인식을 위한 자동화된 Coarticulation 검출)

  • Yang, Hee-Deok;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.1
    • /
    • pp.82-91
    • /
    • 2009
  • Sign language spotting is the task of detecting and recognizing signs in a signed utterance. Its difficulty lies in the fact that occurrences of signs vary in both motion and shape. Moreover, signs appear within a continuous gesture stream, interspersed with transitional movements between vocabulary signs and non-sign patterns (which include out-of-vocabulary signs, epentheses, and other movements that do not correspond to signs). In this paper, a novel method for designing a threshold model in a conditional random field (CRF) model is proposed. The proposed model performs adaptive thresholding to distinguish between vocabulary signs and non-sign patterns. A hand-appearance-based sign verification method, a short-sign detector, and a subsign reasoning method are included to further improve spotting accuracy. Experimental results show that the proposed method detects signs from continuous data with an 88% spotting rate and recognizes signs from isolated data with a 94% recognition rate, versus 74% and 90% respectively for CRFs without the threshold model, short-sign detector, subsign reasoning, and hand-appearance-based sign verification.
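
The adaptive-threshold idea can be caricatured without a full CRF: treat the discounted per-frame maximum over all labels as the non-sign (threshold) score, and spot a sign only if its summed score beats it. This is a hypothetical simplification; the paper builds the threshold model inside the CRF itself:

```python
import numpy as np

def spot_sign(frame_scores, sign_ids, penalty=0.9):
    """frame_scores: (T, L) per-frame scores for L labels (signs plus
    non-sign movement labels). The adaptive threshold is the per-frame
    maximum over all labels, summed and discounted by `penalty`; a sign
    is spotted only if its own summed score beats that threshold."""
    threshold = penalty * frame_scores.max(axis=1).sum()
    totals = frame_scores[:, sign_ids].sum(axis=0)
    best = sign_ids[int(np.argmax(totals))]
    return best if totals.max() > threshold else None

# labels: 0 and 1 are vocabulary signs, 2 is a transitional movement
sign_seg = np.array([[0.9, 0.05, 0.05]] * 10)  # looks like sign 0
trans_seg = np.array([[0.2, 0.2, 0.6]] * 10)   # looks like non-sign motion
hit = spot_sign(sign_seg, sign_ids=[0, 1])     # spots sign 0
miss = spot_sign(trans_seg, sign_ids=[0, 1])   # rejects as non-sign
```
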

Implementation of a Finger-Gesture Remote Controller using the Moving Direction Recognition of a Single Shape (단일 형상의 이동 방향 인식에 의한 손 동작 리모트 컨트롤러 구현)

  • Jang, Myeong-Soo;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.4
    • /
    • pp.91-97
    • /
    • 2013
  • A finger-gesture remote controller using a single camera is implemented in this paper, based on recognition of the number of fingers and the direction of finger movement. The proposed method uses transformed YCbCr color-difference information to extract the hand region effectively. The number and position of fingers are computed using a double-circle tracing method. In particular, a continuous user command can be performed repeatedly by recognizing the moving direction of a single finger-gesture shape, and the finger position information allows a command to be amplified in the user experience. All processing tasks are implemented with the Intel OpenCV library and C++. To evaluate performance, the proposed method was applied to commercial video player software as a remote controller; it showed an average 89% recognition ratio across the user command modes.
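
The YCbCr color-difference extraction can be sketched with the BT.601 conversion and commonly cited Cb/Cr skin ranges; the exact bounds and transformation used in the paper are not given in the abstract, so the values below are assumptions:

```python
import numpy as np

def skin_mask_ycbcr(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Skin mask from Cb/Cr chrominance (BT.601 full-range conversion).
    The Cb/Cr bounds are commonly cited skin ranges, assumed here."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))

# one skin-toned pixel and one blue pixel
img = np.array([[[200, 150, 120], [0, 0, 255]]], dtype=np.uint8)
mask = skin_mask_ycbcr(img)
```

Thresholding chrominance rather than RGB makes the mask far less sensitive to lighting, which is why skin detectors usually work in YCbCr or HSV.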

An Implementation of Real-Time Numeral Recognizer Based on Hand Gesture Using Both Gradient and Positional Information (기울기와 위치 정보를 이용한 손동작기반 실시간 숫자 인식기 구현)

  • Kim, Ji-Ho;Park, Yang-Woo;Han, Kyu-Phil
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.3
    • /
    • pp.199-204
    • /
    • 2013
  • An implementation method for a real-time, gesture-based numeral recognizer for various information devices is presented in this paper. The proposed algorithm continuously captures hand motion in open 3D space with a Kinect sensor. The captured motion is simplified with PCA in order to preserve trace consistency and minimize trace variations caused by noise and size changes. In addition, we propose a new HMM that uses both the gradient and the positional features of the simplified hand stroke. As a result, the proposed algorithm is robust to variations in the size and speed of hand motion, and the combined model increases the recognition rate by up to 30%. Experimental results show that the proposed algorithm achieves a high recognition rate of about 98%.
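
The PCA simplification, centering the stroke, rotating it onto its principal axes, and scaling by the dominant deviation so the trace tolerates size and orientation changes, can be sketched as below, together with a chain-code "gradient" feature (the 8-bin quantization is an assumed choice):

```python
import numpy as np

def pca_normalize_trace(points):
    """Center a 2D stroke, rotate it onto its principal axes, and divide by
    the dominant standard deviation, reducing size and rotation variation."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(evals)[::-1]  # principal axis first
    proj = centered @ evecs[:, order]
    return proj / np.sqrt(evals[order][0])

def gradient_codes(trace, bins=8):
    """Quantize segment directions into chain codes, the 'gradient' half of
    the gradient-plus-position feature (8 bins is an assumed choice)."""
    d = np.diff(trace, axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0])  # in [-pi, pi]
    return ((ang + np.pi) / (2 * np.pi) * bins).astype(int) % bins

# toy stroke: a mostly horizontal wiggly line
t = np.linspace(0.0, 1.0, 30)
stroke = np.column_stack([10 * t, 0.3 * np.sin(4 * t)])
norm = pca_normalize_trace(stroke)
codes = gradient_codes(np.array([[0, 0], [1, 0], [2, 0]]))
```

The normalized stroke has zero mean and unit deviation along its principal axis, so the same digit drawn large or small feeds identical features to the HMM.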

Implementing Leap-Motion-Based Interface for Enhancing the Realism of Shooter Games (슈팅 게임의 현실감 개선을 위한 립모션 기반 인터페이스 구현)

  • Shin, Inho;Cheon, Donghun;Park, Hanhoon
    • Journal of the HCI Society of Korea
    • /
    • v.11 no.1
    • /
    • pp.5-10
    • /
    • 2016
  • This paper provides a shooter game interface that enhances the game's realism by recognizing the user's hand gestures with the Leap Motion. We implemented functions necessary in shooter games, such as shooting, moving, viewpoint change, and zoom in/out, and confirmed through a user test that a game interface using familiar and intuitive hand gestures is superior to the conventional mouse/keyboard in terms of ease of manipulation, interest, extensibility, and so on. Specifically, the user satisfaction index (1-5) averaged 3.02 with the mouse/keyboard interface and 3.57 with the proposed hand gesture interface.

Dynamic Training Algorithm for Hand Gesture Recognition System (손동작 인식 시스템을 위한 동적 학습 알고리즘)

  • Kim, Moon-Hwan;Hwang, Suen-Ki;Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.2
    • /
    • pp.51-56
    • /
    • 2009
  • We developed a new augmented reality tool for vision-based hand gesture recognition in a camera-projector system. Our recognition method uses modified Fourier descriptors to classify static hand gestures. Hand segmentation is based on background subtraction, improved to handle background changes. Most recognition methods are trained and tested by the same person, with the training phase occurring only before the interaction; however, there are numerous situations in which several untrained users would like to use gestures for interaction. In our new practical approach, faulty detected gestures are corrected during recognition itself. Our main result is quick on-line adaptation to the gestures of a new user, achieving user-independent gesture recognition.
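
Fourier descriptors come in many variants, and the paper's specific modification is not reproduced here; a minimal sketch of the classic translation- and scale-invariant form (drop the DC coefficient, normalize by the first harmonic) is:

```python
import numpy as np

def fourier_descriptors(contour, n=8):
    """Classic invariant Fourier descriptors: FFT of the complex boundary,
    drop the DC term (translation), divide by the first harmonic (scale)."""
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)[1:n + 1]
    return np.abs(F) / (np.abs(F[0]) + 1e-12)

# a circle and a scaled, shifted copy give the same descriptor vector
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
d1 = fourier_descriptors(circle)
d2 = fourier_descriptors(3 * circle + 5)
```

Because the descriptor ignores where and how large the hand is, a classifier trained on them only has to learn the silhouette shape itself.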
