• Title/Summary/Keyword: Meaningful gestures

A Structure and Framework for Sign Language Interaction

  • Kim, Soyoung; Pan, Younghwan
    • Journal of the Ergonomics Society of Korea / v.34 no.5 / pp.411-426 / 2015
  • Objective: The goal of this thesis is to design an interaction structure and framework for a sign language recognition system. Background: In sign language, meaningful individual gestures are combined to construct a sentence, so it is difficult for a system to interpret and recognize the meaning of a hand gesture within a sequence of continuous gestures. To interpret the meaning of each individual gesture correctly, an interaction structure and framework are needed that can segment the individual gestures. Method: We analyzed 700 sign language words to structure the sign language gesture interaction. First, we analyzed the transformational patterns of the hand gestures. Second, we analyzed the movement of these transformational patterns. Third, we analyzed the types of gestures other than hand gestures. Based on this, we designed a framework for sign language interaction. Results: We elicited 8 hand gesture patterns based on whether the gesture changes between its starting point and ending point. We then analyzed hand movement in terms of 3 elements: the pattern of movement, its direction, and whether the movement repeats. Moreover, we defined 11 movements for gestures other than hand gestures and classified 8 types of interaction. The framework designed on this basis applies to more than 700 individual gestures of the sign language and can isolate an individual gesture even within a sequence of continuous gestures. Conclusion: This study structured sign language interaction along 3 aspects: the transformational pattern of the hand shape between the starting point and the ending point, the hand movement, and the gestures other than hand gestures. Based on this, we designed a framework that can recognize individual gestures and interpret their meaning more accurately when a meaningful individual gesture occurs inside a sequence of continuous gestures. Application: The interaction framework can be applied when developing a sign language recognition system. The structured gestures can also be used for building sign language databases, developing automatic recognition systems, and studying action gestures in other areas.
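
The three structural aspects named in this abstract (hand-shape transformation between starting and ending point, hand movement with its three elements, and non-hand gestures) suggest a simple record type. A minimal illustrative sketch in Python; the enumeration values and field names are assumptions, since the abstract does not name the paper's actual categories:

```python
from dataclasses import dataclass
from enum import Enum, auto

class HandShapePattern(Enum):
    """Stand-ins for the paper's 8 start/end hand-shape transformation patterns."""
    P1 = auto(); P2 = auto(); P3 = auto(); P4 = auto()
    P5 = auto(); P6 = auto(); P7 = auto(); P8 = auto()

@dataclass
class HandMovement:
    """The three movement elements named in the abstract."""
    pattern: str       # e.g. "straight", "circular" (hypothetical values)
    direction: str     # e.g. "upward", "forward" (hypothetical values)
    repeating: bool    # whether the movement repeats

@dataclass
class SignGesture:
    """One individual gesture, structured along the paper's three aspects."""
    hand_shape: HandShapePattern
    movement: HandMovement
    non_hand_gesture: str | None = None  # one of the 11 non-hand movements
```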

An Extraction Method of Meaningful Hand Gesture for a Robot Control (로봇 제어를 위한 의미 있는 손동작 추출 방법)

  • Kim, Aram; Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.27 no.2 / pp.126-131 / 2017
  • In this paper, we propose a method to extract the meaningful motion from the various hand gestures that occur when giving commands to a robot. When giving a command to a robot, a person's hand gestures can be divided into a preparation motion, a main motion, and a finishing motion. The main motion is the meaningful one that transmits the command; the other motions are meaningless auxiliary movements that support the main motion. Therefore, it is necessary to extract only the main motion from the continuous hand gestures. In addition, people can move their hands unconsciously, and the robot must judge these actions as meaningless as well. In this study, we extract human skeleton data from a depth image obtained with a Kinect v2 sensor and extract hand location data from it. Using a Kalman filter, we track the location of the hand and distinguish whether a hand motion is meaningful or meaningless, and we recognize the hand gesture with a hidden Markov model.
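
The tracking step in this pipeline (Kinect v2 skeleton → hand position → Kalman filter → HMM) can be sketched with a constant-velocity Kalman filter over the 2D hand position. A minimal numpy sketch; the noise parameters and frame rate are illustrative assumptions, not the paper's values:

```python
import numpy as np

class HandKalman:
    """Constant-velocity Kalman filter for a 2D hand position.
    State: [x, y, vx, vy]; measurement: [x, y]."""

    def __init__(self, dt=1 / 30, q=1e-2, r=1e-1):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]])
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]])
        self.Q = q * np.eye(4)   # process noise (assumed)
        self.R = r * np.eye(2)   # measurement noise (assumed)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def update(self, z):
        # Predict forward one frame
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the measured hand position z = [x, y]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]  # smoothed position
```

The smoothed velocity components (`self.x[2:]`) are what a meaningful/meaningless decision could plausibly draw on, e.g. low sustained speed suggesting an auxiliary movement.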

Hybrid HMM for Transitional Gesture Classification in Thai Sign Language Translation

  • Jaruwanawat, Arunee; Chotikakamthorn, Nopporn; Werapan, Worawit
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2004.08a / pp.1106-1110 / 2004
  • A human sign language is generally composed of both static and dynamic gestures. Each gesture is represented by a hand shape, its position, and, for a dynamic gesture, hand movement. One of the problems in automated sign language translation is segmenting a hand movement that is part of a transitional movement from one hand gesture to another. This transitional gesture conveys no meaning, but serves as a connecting period between two consecutive gestures. Based on the observation that many dynamic gestures in the Thai sign language dictionary are of a quasi-periodic nature, a method was developed to differentiate between a (meaningful) dynamic gesture and a transitional movement. However, some meaningful dynamic gestures are non-periodic, and these cannot be distinguished from a transitional movement by signal quasi-periodicity alone. This paper proposes a hybrid method combining the periodicity-based gesture segmentation method with an HMM-based gesture classifier. The HMM classifier is used to detect dynamic signs of a non-periodic nature. Combined with the periodicity-based gesture segmentation method, this hybrid scheme can identify segments of a transitional movement. In addition, because the quasi-periodic nature of many dynamic sign gestures is exploited, the dimensionality of the HMM part of the proposed method is significantly reduced, yielding computational savings compared with a standard HMM-based method. The method's recognition performance is reported through experiments with real measurements.
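
The periodicity-based segmentation rests on detecting quasi-periodic hand motion. One standard way to test this, shown here as an assumption rather than the paper's exact method, is normalized autocorrelation of a hand-trajectory signal with a peak test:

```python
import numpy as np

def is_quasi_periodic(signal, min_lag=5, threshold=0.6):
    """Return True if the 1-D motion signal (e.g. a hand coordinate over
    frames) shows a strong autocorrelation peak at some nonzero lag."""
    s = np.asarray(signal, dtype=float)
    s = s - s.mean()
    if np.allclose(s, 0):
        return False  # no motion at all
    ac = np.correlate(s, s, mode="full")[len(s) - 1:]  # lags 0..N-1
    ac = ac / ac[0]                                    # normalize so ac[0] == 1
    peak = ac[min_lag:len(s) // 2]
    return peak.size > 0 and peak.max() > threshold
```

Segments that fail this test would then fall to the HMM classifier, matching the hybrid scheme's division of labor between periodic and non-periodic gestures.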

Correlation Analysis between Cognitive function and Praxis tasks in the Elderly

  • Shin, Su-Jung
    • Journal of the Korea Society of Computer and Information / v.22 no.5 / pp.51-56 / 2017
  • The purpose of this study was to identify differences in cognitive function according to the presence or absence of apraxia, and to determine which of the various task types in the apraxia test are most relevant to cognitive function. The subjects were 42 community residents who participated in a dementia-related cognitive rehabilitation program in the Chungbuk area. The MMSE-K and the BCoS (Birmingham Cognitive Screen) apraxia test were administered to all subjects. The apraxia test includes three types of tasks: gesture production tasks, in which subjects make meaningful movements according to verbal instructions; gesture recognition tasks, in which subjects demonstrate a behavior after making sense of its meaning; and a meaningless imitation task. The apraxia group (n=30, MMSE-K mean score: 25) showed lower cognitive function than the group without apraxia (n=12, MMSE-K mean score: 28). All tasks in the apraxia test showed a significant correlation with cognitive function, but the meaningless imitation task had a negligible correlation. The apraxia test is thus a good way to assess cognitive function, and it may be more effective to use meaningful behaviors as a substitute for cognitive testing.
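
The analysis itself is a straightforward correlation between cognitive scores and apraxia-task scores. A short sketch with scipy, on made-up arrays standing in for the MMSE-K and a task score (not the study's data):

```python
from scipy.stats import pearsonr

# Hypothetical score vectors for a handful of subjects (illustrative only)
mmse_k = [25, 28, 24, 27, 26, 29, 23, 30]
gesture_production = [18, 22, 17, 20, 19, 23, 15, 24]

r, p = pearsonr(mmse_k, gesture_production)
print(f"r = {r:.2f}, p = {p:.3f}")  # a significant r supports the task's relevance
```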

CNN-based Gesture Recognition using Motion History Image

  • Koh, Youjin; Kim, Taewon; Hong, Min; Choi, Yoo-Joo
    • Journal of Internet Computing and Services / v.21 no.5 / pp.67-73 / 2020
  • In this paper, we present a CNN-based gesture recognition approach that reduces the memory burden of the input data. Most neural network-based gesture recognition methods use a sequence of frame images as input data, which causes a memory burden problem. We instead use a motion history image to define a meaningful gesture. The motion history image is a grayscale image into which the temporal motion information is collapsed by synthesizing silhouette images of the user over the period of one meaningful gesture. We first summarize previous traditional and neural network-based approaches to gesture recognition. We then explain the data preprocessing procedure for making the motion history image and the neural network architecture, with three convolution layers, for recognizing the meaningful gestures. In the experiments, we trained five types of gestures: charging power, shooting left, shooting right, kicking left, and kicking right. The accuracy of gesture recognition was measured while adjusting the number of filters in each layer of the proposed network. Using a grayscale image with 240 × 320 resolution to define one meaningful gesture, we achieved a gesture recognition accuracy of 98.24%.
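
A motion history image can be built by stamping each new silhouette at full intensity and decaying older pixels, and the result fed to a small three-convolution-layer network. A sketch assuming binary silhouette frames and PyTorch; the decay rate and layer sizes are illustrative, not the paper's:

```python
import numpy as np
import torch
import torch.nn as nn

def motion_history_image(silhouettes, tau=255, decay=8):
    """Collapse a list of binary silhouette frames (H x W, values 0/1)
    into one grayscale motion history image: current motion is brightest,
    older motion fades by `decay` per frame."""
    mhi = np.zeros_like(silhouettes[0], dtype=np.float32)
    for sil in silhouettes:
        mhi = np.where(sil > 0, tau, np.maximum(mhi - decay, 0))
    return mhi / tau  # normalized to [0, 1]

class MHIGestureNet(nn.Module):
    """Three conv layers over a 240x320 single-channel MHI, five gesture classes."""

    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 30 * 40, n_classes)  # 240x320 pooled x8

    def forward(self, x):  # x: (N, 1, 240, 320)
        return self.classifier(self.features(x).flatten(1))
```

Because the whole gesture collapses into one 240 × 320 image, the network's input is a single channel rather than a stack of frames, which is exactly where the memory saving comes from.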

Recognition-Based Gesture Spotting for Video Game Interface (비디오 게임 인터페이스를 위한 인식 기반 제스처 분할)

  • Han, Eun-Jung; Kang, Hyun; Jung, Kee-Chul
    • Journal of Korea Multimedia Society / v.8 no.9 / pp.1177-1186 / 2005
  • In vision-based interfaces for video games, gestures are used as game commands instead of keyboard or mouse presses. To give the user a more natural interface, these interfaces must tolerate unintentional movements and continuous gestures. To address this problem, this paper proposes a novel gesture spotting method that combines spotting with recognition: it recognizes the meaningful movements while concurrently separating out unintentional movements from a given image sequence. We applied our method to the recognition of upper-body gestures for interfacing between a video game (Quake II) and its user. Experimental results show that the proposed method spots gestures from continuous gestures with an average accuracy of 93.36%, confirming its potential as a gesture-based interface for computer games.
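
Recognition-based spotting can be sketched as classifying every sliding window and rejecting windows whose best class score falls below a threshold, so unintentional movement is never forced into a gesture class. An illustrative sketch; the classifier interface, window size, and threshold are assumptions, not the paper's model:

```python
import numpy as np

def spot_gestures(frames, classify, window=30, stride=10, threshold=0.8):
    """Slide over a frame sequence; `classify(window_frames)` is any model
    returning per-class probabilities. Windows below `threshold` are
    treated as unintentional movement and skipped."""
    detections = []
    for start in range(0, len(frames) - window + 1, stride):
        probs = classify(frames[start:start + window])
        label = int(np.argmax(probs))
        if probs[label] >= threshold:
            detections.append((start, start + window, label))
    return detections
```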

A Study on Gesture Recognition Using Principal Factor Analysis (주 인자 분석을 이용한 제스처 인식에 관한 연구)

  • Lee, Yong-Jae; Lee, Chil-Woo
    • Journal of Korea Multimedia Society / v.10 no.8 / pp.981-996 / 2007
  • In this paper, we describe a method that recognizes gestures by obtaining motion feature information from sequential gesture images using principal factor analysis. In the algorithm, a two-dimensional silhouette region containing the human gesture is first segmented, and geometric features are extracted from it. Global feature information, selected by principal factor analysis as the meaningful key features that most effectively express the gestures, is used. The motion history information representing the temporal variation of the gestures, obtained from the extracted features, constructs a gesture subspace. Finally, the model feature values projected into the gesture space are transformed into specific state symbols by a grouping algorithm, to be used as input symbols for an HMM, and the input gesture is recognized as the model gesture with the highest probability. The proposed method achieves a higher recognition rate than methods that use only the shape information of the human body, as in appearance-based methods, or that extract features intuitively from complicated gestures, because the algorithm constructs gesture models from the feature factors with the highest contribution rates found by principal factor analysis.
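
The pipeline described here (extract geometric features, keep the high-contribution factors, quantize projections into discrete symbols for an HMM) can be approximated with scikit-learn. In this sketch, PCA stands in for principal factor analysis, and the feature dimensions and cluster count are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 12))  # 200 frames x 12 geometric features (dummy)

# Keep the factors with the highest contribution rate
pca = PCA(n_components=3).fit(features)
projected = pca.transform(features)    # per-frame points in the gesture subspace

# Group projected values into discrete state symbols for an HMM
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(projected)
symbols = kmeans.labels_               # the HMM's observation sequence
print(pca.explained_variance_ratio_, symbols[:20])
```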

An Experimental Research on the Usability of Indirect Control using Finger Gesture Interaction in Three Dimensional Space (3차원 공간에서 손가락 제스쳐 인터랙션을 이용한 간접제어의 사용성에 관한 실험연구)

  • Ham, Kyung Sun; Lee, Dahye; Hong, Hee Jung; Park, Sungjae; Kim, Jinwoo
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.519-532 / 2014
  • Emerging technologies for natural computer interaction can give manufacturers new opportunities for product innovation. This paper studies a method of human communication through finger gesture interaction. As technological advances have been rapid over the last few decades, products and services using such interaction will soon be popular. The questions of this experiment are as follows: What is the usefulness of gesture interaction, and what is its cognitive impact on users? The finger gesture interactions consist of poking, picking, and grasping. By measuring the usability of each in 2D and 3D space, this study shows the effect of finger gesture interaction. The 2D and 3D experimental tools were developed using LeapMotion technology. The experiments, involving 48 subjects, show that there is no difference in usability between the gestures in 2D space, but that in 3D space a meaningful difference exists. In addition, all gestures show better usability in 2D space than in 3D space. An especially interesting finding is that using a single finger works better than using multiple fingers.
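
The reported pattern (no usability difference between gestures in 2D, a significant one in 3D) is the kind of comparison a one-way test across the three gestures captures. A sketch with scipy on made-up scores; the real study used 48 subjects and its own usability measures:

```python
from scipy.stats import f_oneway

# Hypothetical per-subject usability scores in 3D space (not the study's data)
poking = [4.1, 3.8, 4.3, 3.9, 4.0]
picking = [3.2, 3.0, 3.5, 3.1, 3.3]
grasping = [2.8, 3.1, 2.9, 3.0, 2.7]

stat, p = f_oneway(poking, picking, grasping)
print(f"F = {stat:.2f}, p = {p:.3f}")  # p < 0.05 would mirror the 3D finding
```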

Gesture Spotting by Web-Camera in Arbitrary Two Positions and Fuzzy Garbage Model (임의 두 지점의 웹 카메라와 퍼지 가비지 모델을 이용한 사용자의 의미 있는 동작 검출)

  • Yang, Seung-Eun
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.127-136 / 2012
  • Much research on vision-based hand gesture recognition has been conducted to let users operate various electronic devices more easily. To recognize hand gestures accurately, 3D position calculation and classification of meaningful gestures from similar ones must be performed. This paper describes a simple and cost-effective method for 3D position calculation and gesture spotting (the task of recognizing a meaningful gesture among other, similar meaningless gestures). The 3D position is obtained by calculating the relative position of two cameras through a pan/tilt module and a marker, regardless of where the cameras are placed. A fuzzy garbage model is proposed to provide a variable reference value for deciding whether a user gesture is a command gesture or not. The reference is derived from a fuzzy command gesture model and a fuzzy garbage model, which return scores indicating the degree of membership in the command gesture and garbage gesture classes, respectively. Two-stage user adaptation is proposed to enhance performance: off-line (batch) adaptation for inter-personal differences and on-line (incremental) adaptation for intra-personal differences. An experiment was conducted with 5 different users. The command recognition rate is more than 95% when only one command-like meaningless gesture exists, and more than 85% when the command is mixed with many other similar gestures.
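
The spotting decision described here compares a command-model score against a garbage-model score, the garbage score acting as a variable reference value rather than a fixed threshold. A heavily simplified sketch, with Gaussian membership functions standing in for the paper's fuzzy models:

```python
import numpy as np

def gaussian_membership(x, mean, sigma):
    """Fuzzy membership of feature vector x in a Gaussian fuzzy set."""
    return float(np.exp(-np.sum((x - mean) ** 2) / (2 * sigma ** 2)))

def is_command(x, command_mean, garbage_mean, sigma=1.0):
    """Accept x as a command gesture only if it fits the command model
    better than the garbage model (the variable reference value)."""
    command_score = gaussian_membership(x, command_mean, sigma)
    garbage_score = gaussian_membership(x, garbage_mean, sigma)
    return command_score > garbage_score

x = np.array([0.9, 1.1])
print(is_command(x, command_mean=np.array([1.0, 1.0]),
                 garbage_mean=np.array([0.0, 0.0])))  # True
```

In this framing, the two-stage user adaptation would amount to re-estimating the model parameters per user offline, then nudging them incrementally online.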

Generating Activity-based Diary from PC Usage Logs

  • Sadita, Lia; Kim, Hyoung-Nyoun; Park, Ji-Hyung
    • Proceedings of the Korean Information Science Society Conference / 2012.06b / pp.339-341 / 2012
  • This paper presents a method for automatically generating an activity-based diary in an environment that includes a personal computer (PC). To record the user's various tasks in front of the PC, we consider contextual information such as the current time, the open programs, and user interactions. As one modality of user interaction, a motion sensor is applied to recognize the user's hand gestures when an activity is conducted without interaction between the user and the PC. Moreover, we propose a temporal clustering method to recapitulate sequential, meaningful activities in the stream of extracted PC usage logs. By combining these two processes, we summarize the user's activities in the PC environment.
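
The temporal clustering step can be sketched as grouping consecutive log entries into one activity episode whenever the gap between timestamps stays under a threshold; the entry format and gap value here are assumptions, not the paper's:

```python
from datetime import datetime, timedelta

def cluster_logs(entries, max_gap=timedelta(minutes=5)):
    """Group (timestamp, event) log entries into activity episodes:
    a new episode starts when the idle gap exceeds `max_gap`."""
    episodes, current = [], []
    for ts, event in sorted(entries):
        if current and ts - current[-1][0] > max_gap:
            episodes.append(current)
            current = []
        current.append((ts, event))
    if current:
        episodes.append(current)
    return episodes

logs = [(datetime(2012, 6, 1, 9, 0), "open editor"),
        (datetime(2012, 6, 1, 9, 2), "typing"),
        (datetime(2012, 6, 1, 10, 30), "open browser")]
print(len(cluster_logs(logs)))  # 2 episodes
```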