• Title/Summary/Keyword: Gestures

Generation of Robot Facial Gestures based on Facial Actions and Animation Principles (Facial Actions 과 애니메이션 원리에 기반한 로봇의 얼굴 제스처 생성)

  • Park, Jeong Woo;Kim, Woo Hyun;Lee, Won Hyong;Lee, Hui Sung;Chung, Myung Jin
    • Journal of Institute of Control, Robotics and Systems / v.20 no.5 / pp.495-502 / 2014
  • This paper proposes a method for generating diverse robot facial expressions and facial gestures to support long-term HRI. First, nine basic dynamics for diverse robot facial expressions are determined based on the dynamics of human facial expressions and on animation principles, so that even identical emotions can be expressed in varied ways. In the second stage, facial actions are added to express facial gestures such as sniffling or wailing loudly for sadness, and laughing aloud or smiling for happiness. To evaluate the effectiveness of our approach, we compared the facial expressions of the developed robot with and without the proposed method. Survey results showed that the proposed method helps robots generate more realistic facial expressions.
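
A minimal sketch of the timing idea behind the first stage, assuming slow-in/slow-out (a classic animation principle) as one of the expression dynamics; the actuator name, value range, and timing curve are illustrative assumptions, not the paper's implementation:

```python
import math

def ease_in_out(t):
    """Slow-in/slow-out timing curve, a classic animation principle."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

def expression_trajectory(start, target, steps, timing=ease_in_out):
    """Interpolate one facial-actuator value from start to target.

    Swapping in a different timing curve yields different dynamics
    for the same target expression, which is the core idea behind
    generating diverse expressions for an identical emotion.
    """
    return [start + (target - start) * timing(i / (steps - 1))
            for i in range(steps)]

# Example: ramp a hypothetical 'mouth corner' actuator toward a smile.
print(expression_trajectory(0.0, 1.0, steps=6))
```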

A Joystick-driven Mouse Controlling Method using Hand Gestures (손 제스쳐를 이용한 조이스틱 방식의 마우스제어 방법)

  • Jung, Jin-Young;Kim, Jung-In
    • Journal of Korea Multimedia Society / v.19 no.1 / pp.60-67 / 2016
  • PC users have long controlled their computers with input devices such as the mouse and keyboard. To address the inconveniences of these devices, screen-touching has become widespread, and devices that recognize human gestures are being developed one after another. For example, Kinect, developed and distributed by Microsoft, is a non-contact input device that recognizes human gestures through motion-recognizing sensors and can thus replace the mouse as an input device. However, when controlling the mouse on a large screen, it suffers from the problem of requiring large motions to move the mouse pointer to the edges of the screen. In this paper, we propose a joystick-driven mouse-controlling method that enables the user to move the mouse pointer to the corners of the screen with small motions. The experimental results show that movements of the user's palm within a range of 30 cm are sufficient to move the mouse pointer to the edges of the screen.
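
A rough sketch of the joystick idea the abstract describes: instead of mapping palm position to pointer position directly, the pointer's velocity is made proportional to the palm's offset from a neutral rest point, so a small hand range can keep pushing the pointer to any screen edge. The gain, dead zone, and units below are assumed values, not the paper's parameters:

```python
def joystick_pointer_step(pointer, palm, neutral, gain=8.0, dead_zone=0.02,
                          screen=(1920, 1080)):
    """One update of a joystick-style pointer.

    The pointer moves with a velocity proportional to the palm's
    offset (in meters) from a neutral rest position; inside the
    dead zone the pointer holds still.
    """
    x, y = pointer
    dx = palm[0] - neutral[0]
    dy = palm[1] - neutral[1]
    if abs(dx) > dead_zone:
        x += gain * dx
    if abs(dy) > dead_zone:
        y += gain * dy
    # Clamp to the screen so the pointer stops at the edges.
    x = min(max(x, 0), screen[0] - 1)
    y = min(max(y, 0), screen[1] - 1)
    return (x, y)

# Example: a palm held 10 cm right of the rest position keeps pushing
# the pointer toward the right edge on every update.
p = (960, 540)
for _ in range(5):
    p = joystick_pointer_step(p, palm=(0.10, 0.0), neutral=(0.0, 0.0))
    print(p)
```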

Dynamic Association and Natural Interaction for Multi-Displays Using Smart Devices (다수의 스마트 디바이스를 활용한 멀티 디스플레이 동적 생성 및 인터랙션)

  • Kim, Minseok;Lee, Jae Yeol
    • Korean Journal of Computational Design and Engineering / v.20 no.4 / pp.337-347 / 2015
  • This paper presents a dynamic association and natural interaction method for multi-displays composed of smart devices. Users can intuitively associate relations among smart devices with shake gestures, flexibly modify the layout of the display with tilt gestures, and naturally interact with the multi-display through multi-touch interactions. First, users shake their smart devices to create and bind a group for a multi-display with a matrix configuration in an ad-hoc, collaborative situation. After the group is created, its display layout can, if needed, be flexibly changed by tilt gestures that move the tilted device to the nearest vacant cell in the matrix configuration. During tilt gestures, the system automatically modifies the relation, view, and configuration of the multi-display. Finally, users can interact with the multi-display through multi-touch interactions just as they would with a single large display. Furthermore, depending on the context or role, a synchronous or asynchronous mode is available to provide a split view or a separate UI. We show the effectiveness and advantages of the proposed approach by demonstrating implementation results and through a usability study.
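
The group-binding step above starts from detecting a shake on each device. A crude sketch of how such a shake might be detected from accelerometer samples, under assumed threshold and peak-count values (the paper's detector may differ):

```python
import math

def detect_shake(samples, gravity=9.8, threshold=12.0, min_peaks=4):
    """Rough shake detector over a window of (ax, ay, az) samples.

    Counts how often the acceleration magnitude swings past a
    threshold well above gravity; enough peaks in one window is
    treated as a shake gesture.
    """
    peaks = 0
    above = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above:
            peaks += 1
            above = True
        elif mag < gravity:
            above = False
    return peaks >= min_peaks

# A still device (magnitude ~ gravity) vs. a vigorously shaken one.
still = [(0.0, 0.0, 9.8)] * 50
shaken = [(0.0, 0.0, 9.8), (12.0, 5.0, 9.8)] * 25
print(detect_shake(still), detect_shake(shaken))
```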

A Framework for Designing Closed-loop Hand Gesture Interface Incorporating Compatibility between Human and Monocular Device

  • Lee, Hyun-Soo;Kim, Sang-Ho
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.533-540 / 2012
  • Objective: This paper presents a framework for hand-gesture-based interface design. Background: While the modeling of contact-based interfaces has focused on ergonomic interface design and real-time technologies, implementing a contactless interface requires error-free classification as an essential precondition. These trends have led much research to concentrate on the design and testing of feature vectors and learning models. Even though there have been remarkable advances in this field, neglecting ergonomics and user cognition leads to several problems, including behaviors that are uncomfortable for the user. Method: To incorporate compatibility, considering users' comfortable behaviors and the device's classification abilities simultaneously, classification-oriented gestures are extracted using the suggested human-hand model and closed-loop classification procedures. From the extracted gestures, compatibility-oriented gestures are acquired through ergonomic and cognitive experiments. The obtained hand gestures are then converted into a series of hand behaviors, called Handycon, which is mapped onto several functions in a mobile device. Results: The Handycon model ensures comfortable user behavior and supports fast comprehension as well as a high classification rate. Conclusion and Application: The suggested framework contributes to the development of hand-gesture-based contactless interface models that consider compatibility between human and device. The suggested procedures can be applied effectively to other contactless interface designs.
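
The abstract does not specify Handycon's actual mappings, so the following is only a hypothetical illustration of the general shape of the idea: short sequences of comfortable hand behaviors mapped onto device functions, with a closed-loop check that rejects low-confidence classifications instead of guessing. All gesture names, functions, and thresholds here are invented for illustration:

```python
# Hypothetical gesture-sequence-to-function table, in the spirit of
# "Handycon" (the paper's real mappings are not given in the abstract).
COMMANDS = {
    ("open", "swipe_left"): "previous_page",
    ("open", "swipe_right"): "next_page",
    ("fist", "open"): "select",
}

def dispatch(behaviors, confidence, min_conf=0.8):
    """Closed-loop dispatch: act only on confident classifications;
    otherwise ask the user to repeat rather than misfire."""
    if confidence < min_conf:
        return "please repeat the gesture"
    return COMMANDS.get(tuple(behaviors), "unknown gesture")

print(dispatch(["open", "swipe_left"], confidence=0.93))  # previous_page
print(dispatch(["fist", "open"], confidence=0.55))        # asks to repeat
```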

The Effect of Visual Feedback on One-hand Gesture Performance in Vision-based Gesture Recognition System

  • Kim, Jun-Ho;Lim, Ji-Hyoun;Moon, Sung-Hyun
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.551-556 / 2012
  • Objective: This study presents the effect of visual feedback on one-hand gesture performance in a vision-based gesture recognition system when people use gestures to control a screen device remotely. Background: Gesture interaction is receiving growing attention because it builds on advanced sensor technology and allows users natural interaction through their own body motion. In generating motion, visual feedback has been considered a critical factor affecting speed and accuracy. Method: Three types of visual feedback (arrow, star, and animation) were selected and 20 gestures were listed. Twelve participants performed each of the 20 gestures while being given the three types of visual feedback in turn. Results: Participants produced longer hand traces and took more time to make a gesture when given the arrow-shaped feedback than the star-shaped feedback. The animation-type feedback was most preferred. Conclusion: The type of visual feedback had a statistically significant effect on the length of the hand trace, the elapsed time, and the speed of motion in performing a gesture. Application: This study can be applied to any device that needs visual feedback for device control. Large feedback produces shorter motion traces, less time, and faster motion than small feedback when people perform gestures to control a device, so large visual feedback is recommended for situations requiring fast actions, while smaller visual feedback is recommended for situations requiring elaborate actions.
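
The study's three dependent measures (trace length, elapsed time, speed) are straightforward to compute from sampled hand positions. A small sketch, assuming a fixed 30 Hz sampling rate and 2-D coordinates (both assumptions, not stated in the abstract):

```python
import math

def trace_metrics(points, dt=1 / 30):
    """Length of the hand trace, elapsed time, and average speed,
    from hand positions sampled at a fixed rate (30 Hz assumed)."""
    length = sum(math.dist(points[i], points[i + 1])
                 for i in range(len(points) - 1))
    elapsed = (len(points) - 1) * dt
    speed = length / elapsed if elapsed > 0 else 0.0
    return length, elapsed, speed

# A short, direct motion vs. a longer, wandering one.
direct = [(0, 0), (5, 0), (10, 0)]
wander = [(0, 0), (4, 3), (7, -2), (10, 0)]
print(trace_metrics(direct))
print(trace_metrics(wander))
```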

The Effect of Gesture-Command Pairing Condition on Learnability when Interacting with TV

  • Jo, Chun-Ik;Lim, Ji-Hyoun;Park, Jun
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.525-531 / 2012
  • Objective: The aim of this study is to investigate the learnability of gesture-command pairs when people use gestures to control a device. Background: In a vision-based gesture recognition system, selecting gesture-command pairings is critical for learnability and usability. Subjective preference and agreement scores from a previous study (Lim et al., 2012) were used to group four gesture-command pairings. To quantify learnability, two learning models, an average time model and a marginal time model, were used. Method: Two sets of eight gestures, sixteen gestures in total, were listed by agreement score and preference data. Fourteen participants, divided into two groups, each memorized one set of gesture-command pairs and then performed the gestures. For each given command, the time to recall the paired gesture was recorded. Results: The average recall time in initial trials differed by preference and agreement score, as did the learning rate R derived from the two learning models. Conclusion: Preference and agreement scores influenced the learning of gesture-command pairs. Application: This study can be applied to any device adopting a gesture interaction system for device control.
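
The abstract does not spell out the two learning models, but such models are commonly fit as power-law learning curves. A hedged sketch of estimating a learning rate from per-trial recall times under that standard form, T_n = T_1 * n^(-r); whether this matches the paper's exact average-time and marginal-time models is an assumption:

```python
import math

def fit_learning_rate(times):
    """Fit the power-law learning curve T_n = T_1 * n**(-r) by least
    squares in log space; returns (T_1, r). A larger r means faster
    learning of the gesture-command pair."""
    xs = [math.log(n) for n in range(1, len(times) + 1)]
    ys = [math.log(t) for t in times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

# Recall times (s) shrinking over trials -> positive learning rate.
print(fit_learning_rate([4.0, 3.1, 2.7, 2.4, 2.2]))
```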

Dynamic Training Algorithm for Hand Gesture Recognition System (손동작 인식 시스템을 위한 동적 학습 알고리즘)

  • Kim, Moon-Hwan;Hwang, Suen-Ki;Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.2 / pp.51-56 / 2009
  • We developed a new augmented reality tool for vision-based hand gesture recognition in a camera-projector system. Our recognition method uses modified Fourier descriptors for the classification of static hand gestures. Hand segmentation is based on a background subtraction method, which is improved to handle background changes. Most recognition methods are trained and tested by the same person, and the training phase occurs only before the interaction. However, there are numerous situations in which several untrained users would like to use gestures for interaction. In our new, practical approach, faultily detected gestures are corrected during recognition itself. Our main result is quick online adaptation to a new user's gestures, achieving user-independent gesture recognition.
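
Fourier descriptors classify a static hand shape from its boundary. A textbook sketch of the plain (unmodified) variant, matching a contour against stored templates; the paper's "modified" descriptors may differ in detail:

```python
import numpy as np

def fourier_descriptors(contour, k=10):
    """Translation/rotation/scale-invariant shape signature from the
    DFT of a closed contour sampled as complex boundary points."""
    z = contour[:, 0] + 1j * contour[:, 1]  # boundary as complex numbers
    F = np.fft.fft(z)
    F[0] = 0              # drop DC term -> translation invariance
    mag = np.abs(F)       # drop phase -> rotation/start-point invariance
    mag /= mag[1]         # normalize -> scale invariance
    return mag[1:k + 1]

def classify(contour, templates):
    """Nearest-template classification of a static hand gesture."""
    d = fourier_descriptors(contour)
    return min(templates, key=lambda name: np.linalg.norm(d - templates[name]))

# Example: a circular contour matched against a stored template,
# even after scaling and shifting.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
templates = {"circle": fourier_descriptors(circle)}
print(classify(circle * 2.0 + 5.0, templates))  # -> "circle"
```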

Hand Gesture Recognition Using an Infrared Proximity Sensor Array

  • Batchuluun, Ganbayar;Odgerel, Bayanmunkh;Lee, Chang Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.15 no.3 / pp.186-191 / 2015
  • The hand gesture is the most common tool used to interact with and control various electronic devices. In this paper, we propose a novel hand gesture recognition method that uses fuzzy-logic-based classification with a new type of sensor array. In some cases, feature patterns of hand gesture signals cannot be uniquely distinguished and recognized when people perform the same gesture in different ways. Moreover, differences in hand shape and in the skeletal articulation of the arm influence the process. Manifold features were extracted, and efficient features, which make gestures distinguishable, were selected. However, similar feature patterns exist across different hand gestures, and fuzzy logic is applied to classify them. Fuzzy rules are defined based on the many feature patterns of the input signal. An adaptive neuro-fuzzy inference system was used to generate fuzzy rules automatically for classifying hand gestures using a small number of feature patterns as input. In addition, emotion expression was performed after hand gesture recognition for the resulting human-robot interaction. Our proposed method was tested on many hand gesture datasets and validated with different evaluation metrics. Experimental results show that our method detects more hand gestures than other existing methods, with robust hand gesture recognition and corresponding emotion expression, in real time.
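
Fuzzy classification handles exactly the overlap the abstract mentions: the same feature value can partially belong to several gesture classes. A tiny hand-written sketch over one made-up feature; real systems like the paper's ANFIS learn such rules and membership functions from data rather than fixing them by hand:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rule base over one feature (gesture duration, ms);
# gesture names and breakpoints are invented for illustration.
RULES = {
    "flick": lambda d: tri(d, 0, 120, 250),
    "swipe": lambda d: tri(d, 150, 350, 600),
    "hold":  lambda d: tri(d, 450, 900, 1500),
}

def fuzzy_classify(duration_ms):
    """Pick the gesture with the highest membership degree; overlapping
    memberships are the ambiguity fuzzy logic is meant to resolve."""
    scores = {g: f(duration_ms) for g, f in RULES.items()}
    return max(scores, key=scores.get), scores

print(fuzzy_classify(200))  # falls between "flick" and "swipe"
```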

Dynamic Training Algorithm for Hand Gesture Recognition System (손동작 인식 시스템을 위한 동적 학습 알고리즘)

  • Bae, Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.7 / pp.1348-1353 / 2007
  • We developed a new augmented reality tool for vision-based hand gesture recognition in a camera-projector system. Our recognition method uses modified Fourier descriptors for the classification of static hand gestures. Hand segmentation is based on a background subtraction method, which is improved to handle background changes. Most recognition methods are trained and tested by the same person, and the training phase occurs only before the interaction. However, there are numerous situations in which several untrained users would like to use gestures for interaction. In our new, practical approach, faultily detected gestures are corrected during recognition itself. Our main result is quick online adaptation to a new user's gestures, achieving user-independent gesture recognition.
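
The segmentation step this abstract mentions is background subtraction improved to handle background changes. A common way to get that robustness is a running-average background model that slowly absorbs each frame; a minimal sketch under that assumption (the adaptation rate alpha and threshold are illustrative):

```python
import numpy as np

def update_background(bg, frame, alpha=0.02):
    """Running-average background model: slowly absorbing each frame
    lets the model track gradual background changes, the usual fix
    for plain frame differencing."""
    return (1 - alpha) * bg + alpha * frame

def segment_hand(bg, frame, threshold=25.0):
    """Foreground mask = pixels differing enough from the background."""
    return np.abs(frame.astype(float) - bg) > threshold

# Toy 4x4 grayscale example: a static scene with a 'hand' entering.
bg = np.full((4, 4), 100.0)
frame = bg.copy()
frame[1:3, 1:3] = 200.0            # bright hand region
print(segment_hand(bg, frame).astype(int))
bg = update_background(bg, frame)  # background slowly adapts
```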

Dynamic Bayesian Network based Two-Hand Gesture Recognition (동적 베이스망 기반의 양손 제스처 인식)

  • Suk, Heung-Il;Sin, Bong-Kee
    • Journal of KIISE: Software and Applications / v.35 no.4 / pp.265-279 / 2008
  • The idea of using hand gestures for human-computer interaction is not new and has been studied intensively during the last decade, with a significant amount of qualitative progress that has nevertheless fallen short of our expectations. This paper describes a dynamic Bayesian network (DBN) based approach to both one-hand and two-hand gestures. Unlike wired, glove-based approaches, the success of camera-based methods depends greatly on the image processing and feature extraction results, so the proposed DBN-based inference is preceded by fail-safe steps of skin extraction and modeling, and motion tracking. A new recognition model for a set of both one-hand and two-hand gestures is then proposed based on the dynamic Bayesian network framework, which makes it easy to represent relationships among features and to incorporate new information into a model. In an experiment with ten isolated gestures, we obtained a recognition rate of 99.59% with cross-validation. The proposed model and the related approach are believed to have strong potential for successful application to other related problems such as sign language recognition.
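
The hidden Markov model is the simplest special case of a dynamic Bayesian network (one hidden chain), so a scaled forward pass gives the flavor of the inference: score an observation sequence under each gesture's model and pick the best. A sketch with toy, invented models, not the paper's network structure:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under an HMM, the
    one-chain special case of a dynamic Bayesian network."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        log_p += np.log(alpha.sum())    # accumulate scale factors
        alpha /= alpha.sum()            # rescale to avoid underflow
    return log_p

# Two toy gesture models over 3 observation symbols.
pi = np.array([1.0, 0.0])
A1 = np.array([[0.8, 0.2], [0.0, 1.0]])  # "swipe": drifts between states
B1 = np.array([[0.9, 0.1, 0.0], [0.0, 0.1, 0.9]])
A2 = np.eye(2)                           # "hold": stays put
B2 = np.array([[0.4, 0.4, 0.2], [0.2, 0.4, 0.4]])

obs = [0, 0, 1, 2, 2]                    # a swipe-like symbol sequence
scores = {"swipe": forward_loglik(pi, A1, B1, obs),
          "hold": forward_loglik(pi, A2, B2, obs)}
print(max(scores, key=scores.get), scores)  # -> "swipe"
```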