• Title/Summary/Keyword: Hands Gesture


Tracking and Recognizing Hand Gestures using Kalman Filter and Continuous Dynamic Programming (연속DP와 칼만필터를 이용한 손동작의 추적 및 인식)

  • 문인혁;금영광
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.13-16
    • /
    • 2002
  • This paper proposes a method for tracking hand gestures and recognizing gesture patterns using a Kalman filter and continuous dynamic programming (CDP). Hand positions are predicted by the Kalman filter, and the pixels corresponding to the hands are extracted by a skin-color filter. The center of gravity of the hands is used as the input pattern vector. The input gesture is then recognized by matching it against the reference gesture patterns using CDP. Experimental results on recognizing a circle-shape gesture and intention gestures such as “Come on” and “Bye-bye” show that the proposed method is feasible for hand gesture-based human-computer interaction.

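The abstract above combines Kalman-filter prediction of the hand position with skin-color measurement. A minimal sketch of that prediction/update loop for one image axis is shown below; the constant-velocity model, noise values, and class name are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: a constant-velocity Kalman filter predicting the next hand
# position along one axis, with measurements standing in for the centroid
# of skin-colour pixels. Noise parameters q and r are illustrative.

class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of the hand."""

    def __init__(self, pos, q=1e-2, r=1.0):
        self.x = [pos, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + v * dt, v]            # x' = F x with F = [[1, dt], [0, 1]]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P' = F P F^T + Q
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]                    # predicted position (search window centre)

    def update(self, z):
        # Measurement z: e.g. centre of gravity of the skin-coloured region.
        y = z - self.x[0]                   # innovation
        s = self.P[0][0] + self.r           # innovation covariance
        k0 = self.P[0][0] / s               # Kalman gain (position)
        k1 = self.P[1][0] / s               # Kalman gain (velocity)
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]

kf = Kalman1D(pos=100.0)
for z in [102.0, 104.0, 106.0, 108.0]:      # hand moving right ~2 px/frame
    kf.predict()
    kf.update(z)
pred = kf.predict()
print(round(pred, 1))                       # prediction follows the rightward motion
```

The predicted position would then centre the search window for the skin-color filter in the next frame; the extracted centroid feeds the recognizer as the input pattern vector.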

A Structure and Framework for Sign Language Interaction

  • Kim, Soyoung;Pan, Younghwan
    • Journal of the Ergonomics Society of Korea
    • /
    • v.34 no.5
    • /
    • pp.411-426
    • /
    • 2015
  • Objective: The goal of this study is to design an interaction structure and framework for a system that recognizes sign language. Background: In sign language, meaningful individual gestures are combined to construct a sentence, so it is difficult for a system to interpret and recognize the meaning of a hand gesture within a sequence of continuous gestures. To interpret the meaning of each individual gesture correctly, an interaction structure and framework are needed that can segment the individual gestures. Method: We analyzed 700 sign language words to structure the sign language gesture interaction. First, we analyzed the transformational patterns of the hand gestures. Second, we analyzed the movement of those transformational patterns. Third, we analyzed the types of gestures other than hand gestures. Based on this, we designed a framework for sign language interaction. Results: We elicited 8 hand gesture patterns based on whether the gesture changes from its starting point to its ending point. We then analyzed hand movement in terms of 3 elements: movement pattern, direction, and whether the movement repeats. Moreover, we defined 11 movements of gestures other than hand gestures and classified 8 types of interaction. The framework for sign language interaction designed on this basis applies to more than 700 individual sign language gestures, each of which can be classified as an individual gesture even within a sequence of continuous gestures. Conclusion: This study structured sign language interaction in 3 aspects: the transformational patterns of hand shape from starting point to ending point, hand movement, and gestures other than hand gestures. Based on this, we designed a framework that can recognize individual gestures and interpret their meaning more accurately when a meaningful individual gesture occurs within a sequence of continuous gestures. Application: The interaction framework can be applied when developing a sign language recognition system. The structured gestures can also be used for building sign language databases, developing automatic recognition systems, and studying action gestures in other areas.
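The framework indexes each sign by its hand-shape transformation pattern, its movement (pattern, direction, repetition), and its non-hand gesture. A minimal sketch of such a descriptor is given below; all concrete labels (`"line"`, `"head_nod"`, the example signs) are illustrative placeholders, not the paper's actual categories.

```python
# Hedged sketch of the kind of gesture descriptor the framework implies:
# (i) one of 8 hand-shape transformation patterns, (ii) movement pattern /
# direction / repetition, and (iii) one of 11 non-hand gestures. Every
# concrete label here is an invented placeholder.

from typing import NamedTuple

class SignDescriptor(NamedTuple):
    shape_pattern: int      # 1..8: hand-shape change from start to end point
    movement_pattern: str   # e.g. "line", "arc" (placeholder labels)
    direction: str          # e.g. "up", "down", "left", "right"
    repeating: bool         # whether the hand movement repeats
    non_hand: str           # one of 11 non-hand gestures, e.g. "head_nod"

def segment_key(d: SignDescriptor) -> tuple:
    """Key used to match an individual sign inside a continuous stream."""
    return (d.shape_pattern, d.movement_pattern, d.direction, d.repeating)

# A two-entry placeholder lexicon mapping descriptor keys to glosses.
lexicon = {
    segment_key(SignDescriptor(1, "line", "right", False, "none")): "HELLO",
    segment_key(SignDescriptor(3, "arc", "down", True, "head_nod")): "THANKS",
}

observed = SignDescriptor(3, "arc", "down", True, "none")
print(lexicon.get(segment_key(observed), "unknown"))  # -> THANKS
```

In a real system the 700-word lexicon would be populated from the structured analysis, and the key lookup would run on each segmented gesture in the stream.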

On-line dynamic hand gesture recognition system for virtual reality using elementary component classifiers (기본 요소분류기를 이용한 가상현실용 실시간 동적 손 제스처 인식 시스템의 구현에 관한 연구)

  • 김종성;이찬수
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.9
    • /
    • pp.68-76
    • /
    • 1997
  • This paper presents a system which recognizes dynamic hand gestures for virtual reality (VR). A dynamic hand gesture is a means of communication between a human and a computer, using gestures, especially of the hands and fingers. Since human hands and fingers differ in physical dimensions, the same form of a gesture produced by two persons with their hands may not have the same numerical values when obtained through electronic sensors. In this paper, we apply a fuzzy min-max neural network and a feature analysis method using fuzzy logic for on-line pattern recognition.


On-line Korean Sign Language (KSL) Recognition using Fuzzy Min-Max Neural Network and Feature Analysis

  • Bien, Zeungnam;Kim, Jong-Sung
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1995.10b
    • /
    • pp.85-91
    • /
    • 1995
  • This paper presents a system which recognizes the Korean Sign Language (KSL) and translates it into normal Korean speech. A sign language is a method of communication for deaf people, who use gestures, especially of the hands and fingers. Since human hands and fingers differ in physical dimensions, the same form of a gesture produced by two signers with their hands may not produce the same numerical values when obtained through electronic sensors. In this paper, we propose a dynamic gesture recognition method based on feature analysis for efficient classification of hand motions, and on a fuzzy min-max neural network for on-line pattern recognition.

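Several of the entries above classify hand motions with a fuzzy min-max neural network. The core of that classifier family is a hyperbox membership function; a minimal Simpson-style sketch is shown below, where `gamma` and the example hyperboxes are illustrative, and the papers' actual network layouts are not reproduced.

```python
# Hedged sketch of the hyperbox membership used in fuzzy min-max neural
# networks: a point fully inside the box [v, w] has membership 1, and
# membership falls off linearly (slope gamma) outside each face.

def ramp(d, gamma):
    """Clipped linear penalty for falling outside a hyperbox face."""
    return max(0.0, min(1.0, d * gamma))

def membership(x, v, w, gamma=4.0):
    """Degree to which point x belongs to the hyperbox [v, w], in [0, 1]."""
    n = len(x)
    return sum(
        1.0 - ramp(x[i] - w[i], gamma) - ramp(v[i] - x[i], gamma)
        for i in range(n)
    ) / n

# Two hyperboxes standing in for two learned hand-shape classes
# (normalised glove/sensor features; boxes are invented for illustration).
boxes = {
    "fist": ([0.1, 0.1], [0.3, 0.3]),
    "open": ([0.6, 0.6], [0.9, 0.9]),
}

x = [0.25, 0.2]                       # a normalised sensor reading
best = max(boxes, key=lambda c: membership(x, *boxes[c]))
print(best)  # -> fist
```

Training in a full fuzzy min-max network would grow and contract these hyperboxes on-line as labelled samples arrive, which is what makes the approach attractive for on-line gesture recognition.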

Automatic Gesture Recognition for Human-Machine Interaction: An Overview

  • Nataliia, Konkina
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.1
    • /
    • pp.129-138
    • /
    • 2022
  • With the increasing reliance on computing systems in our everyday life, there is a constant need to improve the ways users interact with such systems in a more natural, effective, and convenient way. In the initial computing revolution, interaction between humans and machines was limited; machines were not necessarily meant to be intelligent. This created the need for systems that can automatically identify and interpret our actions. Automatic gesture recognition is one popular method by which users can control systems with their gestures, and it covers various kinds of tracking, including the whole body, hands, head, and face. We also touch upon a different line of work, including Brain-Computer Interfaces (BCI) and Electromyography (EMG), as potential additions to the gesture recognition regime. In this work, we present an overview of several applications of automated gesture recognition systems and a brief look at the popular methods employed.

Vision-Based Two-Arm Gesture Recognition by Using Longest Common Subsequence (최대 공통 부열을 이용한 비전 기반의 양팔 제스처 인식)

  • Choi, Cheol-Min;Ahn, Jung-Ho;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.5C
    • /
    • pp.371-377
    • /
    • 2008
  • In this paper, we present a framework for vision-based two-arm gesture recognition. To capture the motion information of the hands, we perform a color-based tracking algorithm using an adaptive kernel for each frame, and a feature selection algorithm classifies the motion information into four different gesture phases. Using the gesture phase information, we build a gesture model which consists of symbol probabilities and a symbol sequence learned from the longest common subsequence. Finally, we present a similarity measure for two-arm gesture recognition using the proposed gesture models. The experimental results show the efficiency of the proposed feature selection method, and the simplicity and robustness of the recognition algorithm.
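The matching step above scores an observed symbol sequence against a learned model sequence with the longest common subsequence (LCS). A minimal sketch follows; the symbol alphabet and the normalisation by the longer sequence are illustrative assumptions, not the paper's exact measure.

```python
# Hedged sketch: LCS-based similarity between a quantised gesture-symbol
# stream and a model sequence. Symbols here are invented placeholders for
# quantised arm-motion phases.

def lcs_length(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, sa in enumerate(a, 1):
        for j, sb in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if sa == sb else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def similarity(observed, model):
    """LCS length normalised by the longer sequence, in [0, 1]."""
    return lcs_length(observed, model) / max(len(observed), len(model))

model_raise = "UUUUHHHH"   # raise arms, then hold (placeholder alphabet)
model_wave  = "LRLRLRLR"   # wave left-right
observed    = "UUUHUHHH"   # noisy observation of a raise

print(similarity(observed, model_raise))  # -> 0.875
print(similarity(observed, model_wave))   # -> 0.0
```

LCS tolerates insertions and deletions in the observed stream, which is why it suits noisy, variably-timed gesture symbol sequences better than exact string matching.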

A Compensation Algorithm for the Position of User Hands Based on Moving Mean-Shift for Gesture Recognition in HRI System (HRI 시스템에서 제스처 인식을 위한 Moving Mean-Shift 기반 사용자 손 위치 보정 알고리즘)

  • Kim, Tae-Wan;Kwon, Soon-Ryang;Lee, Dong Myung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.5
    • /
    • pp.863-870
    • /
    • 2015
  • A Compensation Algorithm for the Position of the User Hands based on the Moving Mean-Shift ($CAPUH_{MMS}$) is proposed in this paper for a Human Robot Interface (HRI) system running the Kinect sensor, in order to improve gesture recognition performance. The average error improvement ratio of the trajectories ($AEIR_{TJ}$) in left-right hand movements for the $CAPUH_{MMS}$ is compared with that of other compensation algorithms, namely the Compensation Algorithm based on the Kalman Filter ($CA_{KF}$) and the Compensation Algorithm based on the Least-Squares Method ($CA_{LSM}$), using the developed real-time performance simulator. As a result, the $AEIR_{TJ}$ in up-down hand movements of the $CAPUH_{MMS}$ is measured as 19.35%, a higher value than those of the $CA_{KF}$ and the $CA_{LSM}$, at 13.88% and 16.68%, respectively.
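The compensation idea above smooths noisy Kinect hand positions before gesture recognition. The sketch below takes the simplest reading of "moving mean-shift", a sliding-window mean over recent positions; the window size, class name, and sample data are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: compensating noisy skeleton hand positions with a moving
# mean over a sliding window. A spike from a bad depth reading is pulled
# back toward the recent trajectory.

from collections import deque

class MovingMeanCompensator:
    def __init__(self, window=5):
        self.buf = deque(maxlen=window)   # last `window` raw (x, y) samples

    def compensate(self, pos):
        """Return the mean of the most recent hand positions."""
        self.buf.append(pos)
        n = len(self.buf)
        return (sum(p[0] for p in self.buf) / n,
                sum(p[1] for p in self.buf) / n)

comp = MovingMeanCompensator(window=3)
raw = [(100, 200), (104, 198), (131, 230), (108, 202)]  # third sample is a spike
smoothed = [comp.compensate(p) for p in raw]
print(smoothed[-1])  # mean of the last three samples
```

A Kalman filter or least-squares fit (the paper's comparison baselines) would weight samples by a motion model instead of averaging them uniformly, at the cost of tuning noise parameters.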

Analysis of Face Direction and Hand Gestures for Recognition of Human Motion (인간의 행동 인식을 위한 얼굴 방향과 손 동작 해석)

  • Kim, Seong-Eun;Jo, Gang-Hyeon;Jeon, Hui-Seong;Choe, Won-Ho;Park, Gyeong-Seop
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.4
    • /
    • pp.309-318
    • /
    • 2001
  • In this paper, we describe methods for analyzing human gestures. A human interface (HI) system for analyzing gestures extracts the head and hand regions after capturing image sequences of an operator's continuous behavior with CCD cameras. Since gestures are performed with motions of the operator's head and hands, we extract the head and hand regions to analyze gestures and calculate geometrical information about the extracted skin regions. Head motion is analyzed by obtaining the face direction. We assume the head is an ellipsoid in 3D coordinates and locate the facial features, such as the eyes, nose, and mouth, on its surface. If we know the center of these feature points, the angle of that center within the ellipsoid gives the direction of the face. The hand region obtained from preprocessing may include the arms as well as the hands. To extract only the hand region, we find the wrist line that divides the hand and arm regions. After separating the hand region at the wrist line, we model the hand region as an ellipse for the analysis of hand data; the finger part is represented as a long and narrow shape. We extract hand information such as size, position, and shape.

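The face-direction idea above reduces to simple geometry: if the feature centre lies on the head's surface, its horizontal offset from the head centre determines the yaw angle. A minimal sketch under a spherical simplification follows; the function name and pixel values are illustrative assumptions.

```python
# Hedged sketch: approximate face yaw from the horizontal offset of the
# facial-feature centre (eyes, nose, mouth) relative to the head centre,
# treating the head as a sphere of known image half-width.

import math

def face_yaw_deg(feature_cx, head_cx, head_halfwidth):
    """Yaw angle in degrees; positive means the face is turned right."""
    ratio = (feature_cx - head_cx) / head_halfwidth
    ratio = max(-1.0, min(1.0, ratio))   # clamp against segmentation noise
    return math.degrees(math.asin(ratio))

# Feature centre 30 px right of the head centre, head half-width 60 px:
print(round(face_yaw_deg(330.0, 300.0, 60.0), 1))  # -> 30.0
```

An ellipsoidal head model, as in the abstract, would replace the single half-width with separate axes and also yield pitch from the vertical offset.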

On-line dynamic hand gesture recognition system for the Korean Sign Language (KSL) (한글 수화용 동적 손 제스처의 실시간 인식 시스템의 구현에 관한 연구)

  • Kim, Jong-Sung;Lee, Chan-Su;Jang, Won;Bien, Zeungnam
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.2
    • /
    • pp.61-70
    • /
    • 1997
  • Human hand gestures have long been used as a means of communication among people, interpreted as streams of tokens of a language. Signed language is a method of communication for hearing-impaired people; articulated gestures and postures of the hands and fingers are commonly used for it. This paper presents a system which recognizes the Korean Sign Language (KSL) and translates the recognition results into normal Korean text and sound. A pair of data gloves is used as the sensing device for detecting motions of the hands and fingers. In this paper, we propose a dynamic gesture recognition method that employs a fuzzy feature analysis method for efficient classification of hand motions and applies a fuzzy min-max neural network to on-line pattern recognition.


Virtual Environment Interfacing based on State Automata and Elementary Classifiers (상태 오토마타와 기본 요소분류기를 이용한 가상현실용 실시간 인터페이싱)

  • Kim, Jong-Sung;Lee, Chan-Su;Song, Kyung-Joon;Min, Byung-Eui;Park, Chee-Hang
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3033-3044
    • /
    • 1997
  • This paper presents a system which recognizes dynamic hand gestures for virtual reality (VR). A dynamic hand gesture is a means of communication between a human and a computer, using gestures, especially of the hands and fingers. Since human hands and fingers differ in physical dimensions, the same form of a gesture produced by two persons with their hands may not have the same numerical values when obtained through electronic sensors. To recognize meaningful gestures within continuous gestures that have no token of beginning and end, this system segments the current motion states using state automata. In this paper, we apply a fuzzy min-max neural network and a feature analysis method using fuzzy logic for on-line pattern recognition.

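The segmentation step above, finding gesture boundaries in a stream with no explicit begin/end token, can be sketched as a two-state automaton over motion speed. The speed threshold and the single STILL/MOVING pair of states are illustrative assumptions; the paper's automaton would track richer motion states.

```python
# Hedged sketch: a minimal state automaton that segments a continuous motion
# stream into gestures by watching transitions between "still" and "moving".

def segment_gestures(speeds, threshold=0.5):
    """Return (start, end) index pairs of moving segments in a speed stream."""
    state, start, segments = "STILL", None, []
    for i, s in enumerate(speeds):
        if state == "STILL" and s >= threshold:
            state, start = "MOVING", i       # gesture begins
        elif state == "MOVING" and s < threshold:
            segments.append((start, i))      # gesture ends
            state = "STILL"
    if state == "MOVING":                    # stream ended mid-gesture
        segments.append((start, len(speeds)))
    return segments

# Per-frame hand speeds: two bursts of motion separated by rest.
speeds = [0.1, 0.2, 0.9, 1.1, 0.8, 0.1, 0.1, 0.7, 0.9, 0.2]
print(segment_gestures(speeds))  # -> [(2, 5), (7, 9)]
```

Each extracted segment would then be passed to the elementary classifiers (fuzzy min-max network plus feature analysis) for recognition.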