Title/Summary/Keyword: Gesture-based Interaction

Hand gesture recognition for player control

  • Shi, Lan Yan;Kim, Jin-Gyu;Yeom, Dong-Hae;Joo, Young-Hoon
    • Proceedings of the KIEE Conference / 2011.07a / pp.1908-1909 / 2011
  • Hand gesture recognition is widely used in virtual reality and human-computer interaction (HCI) systems, and it remains a challenging and interesting subject in the vision-based area. Existing approaches to vision-driven interactive user interfaces resort to technologies such as head tracking, face and facial expression recognition, eye tracking, and gesture recognition. The purpose of this paper is to combine a finite state machine (FSM) with a gesture recognition method in order to control Windows Media Player: play/pause, next, previous, and volume up/down.
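The abstract describes a two-part design: a recognizer that labels hand gestures, and an FSM that turns labels into player commands. A minimal sketch of such a control loop is below; the gesture labels, state names, and engagement rule are illustrative assumptions, not the paper's actual implementation.

    # Sketch: routing recognized hand gestures through a small FSM to
    # media-player commands. All gesture/command names are hypothetical.
    PLAYER_COMMANDS = {
        "swipe_right": "next",
        "swipe_left": "previous",
        "palm_open": "play_pause",
        "point_up": "volume_up",
        "point_down": "volume_down",
    }

    class PlayerFSM:
        """Two-state FSM: gestures are acted on only while engaged."""
        def __init__(self):
            self.state = "idle"

        def on_gesture(self, gesture):
            if self.state == "idle":
                if gesture == "fist":        # engagement gesture
                    self.state = "engaged"
                return None
            if gesture == "fist":            # disengagement gesture
                self.state = "idle"
                return None
            return PLAYER_COMMANDS.get(gesture)

    fsm = PlayerFSM()
    for g in ["fist", "swipe_right", "point_up", "fist", "swipe_left"]:
        cmd = fsm.on_gesture(g)
        if cmd:
            print("send to player:", cmd)    # e.g. inject a media-key event

Gating commands behind an engagement state is one common reason to put an FSM in front of a recognizer: it suppresses spurious detections while the user is not addressing the system.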

Human-Object Interaction Framework Using RGB-D Camera (RGB-D 카메라를 사용한 사용자-실사물 상호작용 프레임워크)

  • Baek, Yong-Hwan;Lim, Changmin;Park, Jong-Il
    • Journal of Broadcast Engineering / v.21 no.1 / pp.11-23 / 2016
  • These days, the touch interface is the most widely used interface for communicating with digital devices. Because of its usability, touch technology is applied almost everywhere, from watches to advertising boards, and its use keeps growing. However, this technology has a critical weakness: a touch input device normally needs a contact surface with touch sensors embedded in it, so touch interaction through general objects like books or documents is still unavailable. In this paper, a human-object interaction framework based on an RGB-D camera is proposed to overcome this limitation. The proposed framework can deal with occluded situations, such as a hand hovering on top of the object, and with objects being moved by hand; in such situations, object recognition and hand gesture recognition algorithms may fail. Our framework, however, handles these complicated circumstances without performance loss. It determines the status of each foreground region with a fast and robust object recognition algorithm, deciding whether it is an object or a human hand; the hand gesture recognition algorithm then controls the context of each object by gestures almost simultaneously.
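The core routing decision the framework must make each frame is whether a foreground region is the tracked object or an occluding hand. A rough sketch of that decision is below; the depth margin and the crude skin-color test are assumptions for illustration, not the paper's algorithm.

    import numpy as np

    # Sketch: label a foreground patch as 'hand' or 'object' so the right
    # recognizer (gesture vs. object) is invoked. Thresholds are made up.
    def classify_blob(depth_patch, color_patch, object_depth_mm):
        # A hand hovering over the object sits closer to the camera.
        closer = np.median(depth_patch) < object_depth_mm - 30.0
        # Crude skin cue: fraction of red-dominant pixels in the patch.
        r = color_patch[..., 0].astype(float)
        g = color_patch[..., 1].astype(float)
        b = color_patch[..., 2].astype(float)
        skin_ratio = np.mean((r > g) & (r > b))
        if closer and skin_ratio > 0.5:
            return "hand"    # route the frame to the gesture recognizer
        return "object"      # route the frame to the object recognizer

    depth = np.full((32, 32), 760.0)                         # mm, synthetic
    color = np.zeros((32, 32, 3), np.uint8); color[..., 0] = 200
    print(classify_blob(depth, color, object_depth_mm=800.0))  # -> hand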

Emotion Recognition Based on Human Gesture (인간의 제스쳐에 의한 감정 인식)

  • Song, Min-Kook;Park, Jin-Bae;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.1 / pp.46-51 / 2007
  • This paper presents gesture analysis for human-robot interaction. Understanding human emotions through gesture is one of the skills computers need in order to interact intelligently with their human counterparts. Gesture analysis consists of several processes, such as hand detection, feature extraction, and emotion recognition. For efficient operation, we recognize gestures with a Hidden Markov Model (HMM). We constructed a large gesture database, with which we verified our method. As a result, our method was successfully integrated and operated in a mobile system.
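The classification step the abstract describes, scoring a gesture feature sequence under per-emotion HMMs and picking the best, can be illustrated with a pure-NumPy forward algorithm. The randomly initialized toy models below stand in for the paper's trained ones; the emotion labels and feature quantization are assumptions.

    import numpy as np

    # Sketch: one discrete HMM per emotion; classify a sequence by
    # maximum log-likelihood. Toy parameters, not the paper's models.
    def forward_log_likelihood(obs, pi, A, B):
        """log P(obs | HMM) via the scaled forward algorithm."""
        alpha = pi * B[:, obs[0]]
        ll = np.log(alpha.sum()); alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            ll += np.log(alpha.sum()); alpha /= alpha.sum()
        return ll

    rng = np.random.default_rng(0)
    def random_hmm(n_states=3, n_symbols=8):
        pi = rng.dirichlet(np.ones(n_states))
        A = rng.dirichlet(np.ones(n_states), size=n_states)   # transitions
        B = rng.dirichlet(np.ones(n_symbols), size=n_states)  # emissions
        return pi, A, B

    models = {e: random_hmm() for e in ["happy", "angry", "sad"]}
    obs = np.array([0, 3, 3, 5, 1])   # quantized hand-motion features
    best = max(models, key=lambda e: forward_log_likelihood(obs, *models[e]))
    print("recognized emotion:", best)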

Multi-Modal Interface Design for Non-Touch Gesture Based 3D Sculpting Task (비접촉식 제스처 기반 3D 조형 태스크를 위한 다중 모달리티 인터페이스 디자인 연구)

  • Son, Minji;Yoo, Seung Hun
    • Design Convergence Study / v.16 no.5 / pp.177-190 / 2017
  • This research aims to suggest a multimodal non-touch gesture interface design to improve the usability of 3D sculpting tasks. The tasks and procedures of design sculpting were analyzed across multiple settings, from physical sculpting to computer software. The optimal body posture, design process, work environment, gesture-task relationships, and combinations of designers' natural hand gestures and arm movements were defined. Existing non-touch 3D software was also observed, and natural gesture interactions, visual UI metaphors, and affordances for behavior guidance were designed. A prototype of the gesture-based 3D sculpting system was developed to validate its intuitiveness and learnability in comparison with current software. The suggested gestures proved to perform better in terms of understandability, memorability, and error rate. The results show that gesture interface design for productivity systems should reflect users' natural experience in the previous work domain and provide appropriate visual-behavioral metaphors.
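As a toy illustration of the kind of pose-plus-motion mapping such an interface defines (the pose and motion vocabularies and tool names below are invented, not those of the study):

    # Sketch: a static hand pose combined with a dynamic arm movement
    # selects one sculpting operation. All names are hypothetical.
    SCULPT_MAP = {
        ("pinch", "drag"): "pull_surface",
        ("open_palm", "sweep"): "smooth",
        ("fist", "push"): "flatten",
        ("point", "circle"): "carve",
    }

    def sculpt_command(hand_pose, arm_motion):
        return SCULPT_MAP.get((hand_pose, arm_motion), "no_op")

    print(sculpt_command("pinch", "drag"))   # -> pull_surface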

Technology Trends of Range Image based Gesture Recognition (거리영상 기반 동작인식 기술동향)

  • Chang, J.Y.;Ryu, M.W.;Park, S.C.
    • Electronics and Telecommunications Trends / v.29 no.1 / pp.11-20 / 2014
  • Gesture recognition technology recognizes the actions of the people in an input video and has a wide range of applications, including visual surveillance, human-computer interaction, and intelligent robots. In particular, with the recent advent of low-cost range sensors and efficient 3D pose estimation techniques, gesture recognition has overcome its long-standing difficulties and matured to the point of being applicable in diverse industrial fields. This article reviews recent research trends in such range-image-based gesture recognition technology.

Automatic Generation of Script-Based Robot Gesture and its Application to Steward Robot (스크립트 기반의 로봇 제스처 자동생성 방법 및 집사로봇에의 적용)

  • Kim, Heon-Hui;Lee, Hyong-Euk;Kim, Yong-Hwi;Park, Kwang-Hyun;Bien, Zeung-Nam
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.688-693 / 2007
  • This paper addresses the automatic generation of robot gestures for effective human-robot interaction. The technique takes only text as input and automatically produces specific gesture patterns corresponding to meaningful words; as a preliminary step, the words uttered at the moments when gestures appear first had to be collected. For this analysis, the paper proposes a gesture model that can effectively represent two or more consecutive gesture patterns. It also proposes a method for automatically generating robot gestures using a gesture database built on the proposed model together with a scripting technique. The gesture generation system consists of a rule-based gesture selection module and a script-based motion planning module, and its effectiveness is verified through a simulation of the guidance function of a steward robot.
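A minimal sketch of the rule-based selection step, in which keywords in the input text pull gesture scripts from a database, is shown below; the word-gesture rules and motion primitive names are invented placeholders, not the paper's database.

    # Sketch: keyword-triggered gesture scripts for a text-driven robot.
    # The database contents below are hypothetical examples.
    GESTURE_DB = {
        "hello": ["raise_right_arm", "wave_hand", "lower_right_arm"],
        "welcome": ["open_both_arms", "bow"],
        "there": ["extend_left_arm", "point_left"],
    }

    def plan_gestures(utterance):
        """Return a motion script: gesture primitives aligned to keywords."""
        script = []
        for word in utterance.lower().replace(",", "").split():
            if word in GESTURE_DB:
                script.extend(GESTURE_DB[word])
        return script or ["idle_posture"]

    print(plan_gestures("Hello, welcome to the lab"))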

Cognitive and Emotional Structure of a Robotic Game Player in Turn-based Interaction

  • Yang, Jeong-Yean
    • International Journal of Advanced Smart Convergence / v.4 no.2 / pp.154-162 / 2015
  • This paper focuses on how cognitive and emotional structures affect humans during long-term interaction. We design an interaction around a turn-based game, the Chopstick Game, in which two agents play with numbers using their fingers. While the human and the robot agent alternate turns, the human user applies herself to playing the game and learning new winning skills from the robot agent. The conventional valence-arousal space is applied to design the emotional interaction. For the robotic system, we implement finger gesture recognition and emotional behaviors designed for a three-dimensional virtual robot. Experimental tests verify the properness of the proposed schemes, and the effect of the emotional interaction is discussed.
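As a rough sketch of the interaction loop described above, the following encodes one tap of the chopsticks game plus a toy valence-arousal reaction for the robot player; the mod-5 hand rule (a common variant) and the emotion thresholds are assumptions for illustration, not the paper's design.

    # Sketch: one turn of the Chopstick Game and a toy emotion readout.
    def tap(state, attacker, a_hand, defender, d_hand):
        """Attacker taps defender: defender's hand gains attacker's count."""
        hands = [list(h) for h in state]              # [[L, R], [L, R]]
        hands[defender][d_hand] = (hands[defender][d_hand]
                                   + hands[attacker][a_hand]) % 5  # 5 -> out
        return tuple(tuple(h) for h in hands)

    def robot_emotion(state, robot=1):
        """Map the game situation to a (valence, arousal) pair."""
        alive = sum(1 for fingers in state[robot] if fingers > 0)
        valence = 1.0 if alive == 2 else -1.0         # doing well vs. badly
        arousal = 0.9 if 0 in state[robot] else 0.4   # danger raises arousal
        return valence, arousal

    state = ((1, 1), (1, 1))                          # both players start 1-1
    state = tap(state, attacker=0, a_hand=0, defender=1, d_hand=1)
    print(state, robot_emotion(state))                # ((1,1),(1,2)) (1.0, 0.4)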

A Unit Touch Gesture Model of Performance Time Prediction for Mobile Devices

  • Kim, Damee;Myung, Rohae
    • Journal of the Ergonomics Society of Korea / v.35 no.4 / pp.277-291 / 2016
  • Objective: The aim of this study is to propose a unit touch gesture model for predicting performance time on mobile devices. Background: When estimating usability with Model-based Evaluation (MBE), the GOMS model measures 'operators' to predict execution time in the desktop environment. This study applies the operator concept from GOMS to touch gestures: since touch gestures are composed of unit touch gestures, those units can be used to predict performance time on mobile devices. Method: To extract unit touch gestures, subjects' manual movements were recorded at 120 fps with pixel coordinates, and touch gestures were segmented into 'out of range', 'registration', 'continuation', and 'termination' phases. Results: Six unit touch gestures were extracted: Hold down (H), Release (R), Slip (S), Curved-stroke (Cs), Path-stroke (Ps), and Out of range (Or). The movement time predicted by the unit touch gesture model did not differ significantly from the participants' execution time, and the six units can predict the movement time of undefined touch gestures such as user-defined gestures. Conclusion: Touch gestures can be subdivided into six unit touch gestures that explain almost all current touch gestures, including user-defined ones, so the model has high predictive power and can be used to predict the performance time of touch gestures. Application: The unit touch gesture times can simply be added up to predict performance time without measuring the performance time of a new gesture.
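The additive structure of the model, in the spirit of a KLM/GOMS operator sum, can be illustrated as follows; the per-unit times are made-up placeholders, not the values measured in the study.

    # Sketch: predict a compound gesture's time as the sum of its unit
    # touch gestures. Times (ms) are hypothetical, not the paper's data.
    UNIT_TIME_MS = {
        "H": 120,   # Hold down
        "R": 80,    # Release
        "S": 150,   # Slip
        "Cs": 300,  # Curved-stroke
        "Ps": 450,  # Path-stroke
        "Or": 200,  # Out of range
    }

    def predict_time(units):
        """Predicted execution time (ms) for a sequence of unit gestures."""
        return sum(UNIT_TIME_MS[u] for u in units)

    # e.g. a drag might decompose as hold down -> slip -> release
    print(predict_time(["H", "S", "R"]), "ms")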

Morphological Hand-Gesture Recognition Algorithm (형태론적 손짓 인식 알고리즘)

  • Choi Jong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.8 / pp.1725-1731 / 2004
  • The use of gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction, which has motivated a very active research area concerned with computer-vision-based analysis and interpretation of hand gestures. The most important issues in gesture recognition are simplifying the algorithm and reducing processing time. Mathematical morphology, based on geometrical set theory, is well suited to this processing. The key idea of the algorithm proposed in this paper is to apply morphological shape decomposition: the primitive elements extracted from a hand gesture carry very important information about the directivity of the gesture. Based on this characteristic, we propose a morphological gesture recognition algorithm using feature vectors computed from the lines connecting the center points of the main primitive element and the sub-primitive elements. Experiments demonstrate the efficiency of the proposed algorithm. Coupling natural interactions such as hand gestures with an appropriately designed interface is a valuable and powerful component in building TV channel switching and video content browsing systems.
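One way to realize the decomposition step, finding a main primitive and sub-primitives in a binary hand mask and taking the directions between their centers as features, is sketched below with SciPy; the disk-shaped primitives and the single-level decomposition are illustrative assumptions, not the paper's exact procedure.

    import numpy as np
    from scipy import ndimage

    # Sketch: morphological shape decomposition of a binary hand mask.
    # The largest inscribed disk is the main primitive; remaining blobs
    # are sub-primitives; center-to-center vectors encode directivity.
    def decompose(mask):
        dist = ndimage.distance_transform_edt(mask)
        main_center = np.unravel_index(np.argmax(dist), dist.shape)
        r = int(dist[main_center])
        yy, xx = np.ogrid[:mask.shape[0], :mask.shape[1]]
        disk = (yy - main_center[0])**2 + (xx - main_center[1])**2 <= r * r
        residual = mask & ~disk                   # strip the main primitive
        labels, n = ndimage.label(residual)       # remaining sub-primitives
        subs = ndimage.center_of_mass(residual, labels, range(1, n + 1))
        return [np.subtract(c, main_center) for c in subs]  # feature vectors

    # Toy mask: a square "palm" with a protruding "finger" pointing up.
    mask = np.zeros((40, 40), bool)
    mask[10:30, 10:30] = True     # palm
    mask[2:10, 18:22] = True      # finger
    print(decompose(mask))        # vectors with negative row -> upward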

A Measurement System for 3D Hand-Drawn Gesture with a PHANToM™ Device

  • Ko, Seong-Young;Bang, Won-Chul;Kim, Sang-Youn
    • Journal of Information Processing Systems / v.6 no.3 / pp.347-358 / 2010
  • This paper presents a measurement system for 3D hand-drawn gesture motion. Many pen-type input devices with Inertial Measurement Units (IMUs) have been developed to estimate 3D hand-drawn gestures from the measured acceleration and/or angular velocity of the device. A crucial procedure in developing these devices is measuring and analyzing their motion or trajectory: to verify the trajectory estimated by an IMU-based input device, the estimate must be compared with the real trajectory. To measure the real trajectory of the pen-type device, a PHANToM™ haptic device is utilized, because it can measure the 3D motion of an object in real time. Even though the PHANToM™ measures the position of the hand gesture well, poor initialization may produce a large amount of error. Therefore, this paper proposes a calibration method that minimizes these measurement errors.
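One standard way to calibrate away a poor initialization before comparing trajectories is a rigid least-squares (Kabsch) alignment of the estimated path to the PHANToM reference; the sketch below shows that idea, and is an assumed method for illustration, not necessarily the paper's procedure.

    import numpy as np

    # Sketch: align an estimated trajectory to a reference trajectory
    # with a rigid transform, then report the residual error.
    def align(est, ref):
        """Find R, t minimizing ||R @ est_i + t - ref_i|| over Nx3 points."""
        ce, cr = est.mean(axis=0), ref.mean(axis=0)
        H = (est - ce).T @ (ref - cr)             # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cr - R @ ce
        return R, t

    # Synthetic demo: the "estimate" is the reference, rotated and shifted.
    ref = np.random.default_rng(1).normal(size=(100, 3))
    rot = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    est = ref @ rot.T + 0.5
    R, t = align(est, ref)
    err = np.linalg.norm((est @ R.T + t) - ref, axis=1).mean()
    print(f"mean residual after alignment: {err:.2e}")  # ~ numerical zero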