• Title/Summary/Keyword: Gesture-based interaction

151 results

A Study of Pattern-based Gesture Interaction in Tabletop Environments (테이블탑 환경에서 패턴 기반의 제스처 인터렉션 방법 연구)

  • Kim, Gun-Hee;Cho, Hyun-Chul;Pei, Wen-Hua;Ha, Sung-Do;Park, Ji-Hyung
    • 한국HCI학회:학술대회논문집 / 2009.02a / pp.696-700 / 2009
  • In this paper, we present a framework that enables users to interact naturally with hand gestures on a digital table. In typical tabletop applications, each gesture is mapped to one function or command, so users must know these mappings and produce predefined gestures as input. In contrast, our system lets users make gestures without this cognitive load. Instead of burdening users, the system holds knowledge about gesture interaction and proactively infers users' gestures and intentions. When a user makes a gesture on the digital surface, the system analyzes it and generates a response according to the inferred intention.
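
The matching idea here, comparing an input stroke against stored gesture knowledge and ranking candidate intentions rather than demanding one exact predefined gesture, can be illustrated with a minimal sketch. The templates, command names, and scoring below are invented for illustration, not taken from the paper:

```python
import numpy as np

def resample(stroke, n=32):
    """Resample a 2D stroke (k x 2 array) to n evenly spaced points."""
    d = np.cumsum(np.r_[0, np.linalg.norm(np.diff(stroke, axis=0), axis=1)])
    t = np.linspace(0, d[-1], n)
    return np.c_[np.interp(t, d, stroke[:, 0]), np.interp(t, d, stroke[:, 1])]

def normalize(stroke):
    """Translate to the centroid and scale to unit size for position/size invariance."""
    p = resample(np.asarray(stroke, float))
    p -= p.mean(axis=0)
    return p / (np.abs(p).max() + 1e-9)

# Hypothetical gesture knowledge base: template stroke -> intended command.
TEMPLATES = {
    "select": normalize([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]]),
    "delete": normalize([[0, 0], [1, 1], [0.5, 0.5], [1, 0], [0, 1]]),
}

def infer_intent(stroke):
    """Rank intents by template similarity instead of requiring an exact gesture."""
    s = normalize(stroke)
    scores = {cmd: -np.mean(np.linalg.norm(s - t, axis=1))
              for cmd, t in TEMPLATES.items()}
    return max(scores, key=scores.get), scores
```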

Conditions of Applications, Situations and Functions Applicable to Gesture Interface

  • Ryu, Tae-Beum;Lee, Jae-Hong;Song, Joo-Bong;Yun, Myung-Hwan
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.507-513 / 2012
  • Objective: This study developed a hierarchy of conditions of applications (devices), situations, and functions that are applicable to gesture interfaces. Background: The gesture interface is one of the most promising interfaces for natural and intuitive interaction with intelligent machines and environments. Although many studies have developed new gesture-based devices and gesture interfaces, little was known about which applications, situations, and functions are suitable for gesture interfaces. Method: This study surveyed about 120 papers on designing and applying gesture interfaces and vocabularies to find the conditions under which applications, situations, and functions are amenable to gesture control. The conditions extracted from 16 closely related papers were rearranged, and a hierarchy was developed to evaluate the applicability of applications, situations, and functions to gesture interfaces. Results: This study summarized 10, 10, and 6 conditions of applications, situations, and functions, respectively. In addition, hierarchies of gesture-applicable conditions for applications, situations, and functions were developed based on their semantic similarity, ordering, and serial or parallel relationships. Conclusion: This study collected gesture-applicable conditions of applications, situations, and functions, and developed a hierarchy of them to evaluate the applicability of gesture interfaces. Application: The conditions and hierarchy can be used to develop a framework and detailed criteria for evaluating applicability, and they can help designers of gesture interfaces and vocabularies determine which applications, situations, and functions suit a gesture interface.

HSFE Network and Fusion Model based Dynamic Hand Gesture Recognition

  • Tai, Do Nhu;Na, In Seop;Kim, Soo Hyung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.9 / pp.3924-3940 / 2020
  • Dynamic hand gesture recognition (d-HGR) plays an important role in human-computer interaction (HCI) systems. With the growth of hand-pose estimation and 3D depth sensors, depth and hand-skeleton datasets have been proposed, prompting much research on depth-based and 3D hand-skeleton approaches. However, the task remains challenging due to low resolution, high complexity, and self-occlusion. In this paper, we propose a hand-shape feature extraction (HSFE) network to produce robust hand-shape features. We build LSTM-based models over the hand shape and the hand skeleton to exploit temporal information from hand-shape and motion changes. Fusing the two models yields the best accuracy on the dynamic hand gesture (DHG) dataset.
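
As described, the pipeline runs one temporal model over extracted hand-shape features and another over the 3D hand skeleton, then fuses the two. A minimal PyTorch sketch of that two-stream, late-fusion structure follows; the layer sizes, the 14-class DHG-style setup, and the probability averaging are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class StreamLSTM(nn.Module):
    """One temporal stream: per-frame features -> LSTM -> class scores."""
    def __init__(self, feat_dim, n_classes, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, frames, feat_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # score from the last time step

class TwoStreamFusion(nn.Module):
    """Late fusion of a hand-shape stream and a hand-skeleton stream."""
    def __init__(self, shape_dim=256, skel_dim=66, n_classes=14):
        super().__init__()
        self.shape_stream = StreamLSTM(shape_dim, n_classes)
        self.skel_stream = StreamLSTM(skel_dim, n_classes)

    def forward(self, shape_feats, skel_feats):
        p1 = self.shape_stream(shape_feats).softmax(-1)
        p2 = self.skel_stream(skel_feats).softmax(-1)
        return (p1 + p2) / 2  # average the two streams' class probabilities

# Example shapes: 30 frames, 256-dim shape features, 22 joints x 3 coords = 66.
model = TwoStreamFusion()
probs = model(torch.randn(2, 30, 256), torch.randn(2, 30, 66))
```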

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision- and voice-based technologies are commonly used for human-robot interaction, but it is widely recognized that their performance deteriorates by a large margin in real-world situations due to environmental and user variance. Users must be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction. As a result, touch screens remain the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision and voice. In this paper, we propose an accelerometer-based gesture interface as one such alternative: accelerometers effectively detect the movements of the human body, their performance is not limited by environmental context such as lighting conditions or a camera's field of view, and they are now widely available in mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, an essential repertoire for robot-based education services. Recognizing 26 handwriting patterns from accelerometers is very difficult because of the large number of pattern classes and the complexity of each pattern; the most difficult comparable problem previously undertaken was recognizing the acceleration patterns of the 10 handwritten digits, and most earlier studies dealt with sets of 8-10 simple, easily distinguishable gestures for controlling home appliances, computer applications, robots, etc. Good features are essential for successful pattern recognition. To strengthen discrimination among complex alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature; investigative experiments showed that trajectory-based classifiers performed 3-5% better than those using raw features such as the acceleration signal itself or statistical summaries. To minimize trajectory distortion, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly across performers, so online incremental learning is applied to make the system adapt to each user's distinctive motion properties. The system uses instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which both slows classification and degrades recall: as the number of reference patterns grows, some of them contribute increasingly to false-positive classifications. We therefore devised an algorithm that optimizes the reference pattern set based on each pattern's positive and negative contribution, periodically removing patterns with a very low positive contribution or a high negative contribution (see the sketch after this entry).
    Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30 to 50. Each letter was performed 5 times per participant using a Nintendo Wii remote, and the acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall and exhibited high pairwise confusion; the major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Although W was recalled perfectly, it contributed heavily to false-positive classifications of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), our system performs well given its larger number of pattern classes and their complexity. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services, implemented on various robot platforms and mobile devices including the iPhone. The participating children showed improved concentration and active reactions to the service with our gesture interface. To measure its effectiveness, the children were tested after experiencing an English teaching service: those who used the gesture-interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for real-world robot-based services and content, complementing the limits of today's conventional interfaces such as touch screens, vision, and voice.
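
The pruning idea, scoring each stored reference pattern by how often it helps or misleads classification and periodically dropping the harmful ones, can be sketched as follows. The distance measure, thresholds, and bookkeeping are illustrative assumptions, not the paper's algorithm, and trajectories are assumed to be resampled to a fixed length:

```python
import numpy as np

class PrunedIBL:
    """Instance-based 1-NN classifier that prunes low-value reference patterns."""
    def __init__(self):
        self.refs = []  # each entry: [trajectory, label, pos_hits, neg_hits]

    def add(self, traj, label):
        self.refs.append([np.asarray(traj, float), label, 0, 0])

    def classify(self, traj, true_label=None):
        """Nearest-reference prediction; with a true label, do online learning."""
        traj = np.asarray(traj, float)
        dists = [np.linalg.norm(traj - r[0]) for r in self.refs]
        best = int(np.argmin(dists))
        pred = self.refs[best][1]
        if true_label is not None:
            if pred == true_label:
                self.refs[best][2] += 1      # positive contribution
            else:
                self.refs[best][3] += 1      # negative (false-positive) contribution
                self.add(traj, true_label)   # incremental learning on errors
        return pred

    def prune(self, min_pos=1, max_neg=3):
        """Periodically drop references with very low positive or high negative
        contribution; references never matched yet are kept (grace period)."""
        self.refs = [r for r in self.refs
                     if r[3] <= max_neg and (r[2] >= min_pos or r[2] + r[3] == 0)]
```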

Hand Gesture Recognition Using an Infrared Proximity Sensor Array

  • Batchuluun, Ganbayar;Odgerel, Bayanmunkh;Lee, Chang Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.15 no.3 / pp.186-191 / 2015
  • Hand gestures are the most common tool used to interact with and control various electronic devices. In this paper, we propose a novel hand gesture recognition method using fuzzy-logic-based classification with a new type of sensor array. In some cases, the feature patterns of hand gesture signals cannot be uniquely distinguished when people perform the same gesture in different ways; differences in hand shape and in the skeletal articulation of the arm also influence the process. Numerous features were extracted, and efficient features that make gestures distinguishable were selected. Because similar feature patterns still occur across different hand gestures, fuzzy logic is applied to classify them. Fuzzy rules are defined over the many feature patterns of the input signal, and an adaptive neuro-fuzzy inference system (ANFIS) is used to generate the rules automatically for classifying hand gestures from a small number of feature patterns. In addition, emotion expression is performed after gesture recognition for the resulting human-robot interaction. Our proposed method was tested on many hand gesture datasets and validated with different evaluation metrics. Experimental results show that, compared with existing methods, our method detects more hand gestures in real time, with robust recognition and corresponding emotion expressions.
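
As a rough illustration of fuzzy-rule gesture classification (the paper generates its rules automatically with ANFIS, whereas the features, membership functions, and rules below are invented for the example):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return float(np.clip(min((x - a) / (b - a + 1e-9),
                             (c - x) / (c - b + 1e-9)), 0, 1))

def memberships(duration, spread):
    """Fuzzify two hypothetical features derived from the proximity-sensor array."""
    return {
        ("duration", "short"): tri(duration, 0.0, 0.2, 0.5),
        ("duration", "long"):  tri(duration, 0.3, 0.8, 1.5),
        ("spread", "narrow"):  tri(spread, 0.0, 0.1, 0.4),
        ("spread", "wide"):    tri(spread, 0.2, 0.6, 1.0),
    }

# Each rule: IF duration is X AND spread is Y THEN gesture class.
RULES = [
    (("duration", "short"), ("spread", "narrow"), "tap"),
    (("duration", "short"), ("spread", "wide"),   "swipe"),
    (("duration", "long"),  ("spread", "wide"),   "hold-open"),
]

def classify(duration, spread):
    m = memberships(duration, spread)
    # Firing strength of a rule = min of its antecedent memberships (fuzzy AND).
    strengths = {cls: min(m[a], m[b]) for a, b, cls in RULES}
    return max(strengths, key=strengths.get), strengths
```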

Dynamic Training Algorithm for Hand Gesture Recognition System (손동작 인식 시스템을 위한 동적 학습 알고리즘)

  • Kim, Moon-Hwan;Hwang, Suen-Ki;Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.2 / pp.51-56 / 2009
  • We developed an augmented reality tool for vision-based hand gesture recognition in a camera-projector system. Our recognition method uses modified Fourier descriptors to classify static hand gestures. Hand segmentation is based on a background subtraction method, improved to handle background changes. Most recognition methods are trained and tested by the same person, with the training phase occurring only before the interaction. However, there are numerous situations in which several untrained users would like to interact through gestures. In our new, practical approach, misrecognized gestures are corrected during recognition itself. Our main result is quick online adaptation to a new user's gestures, achieving user-independent gesture recognition.
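
Classic Fourier descriptors for a closed hand contour can be computed as below; this is a generic sketch of the standard technique, and the paper's "modified" descriptors may differ in detail:

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """Translation/scale/rotation-invariant descriptors of a closed 2D contour.

    contour: (k, 2) array of boundary points from hand segmentation, k > 2*n_coeffs.
    """
    z = contour[:, 0] + 1j * contour[:, 1]  # complex boundary representation
    coeffs = np.fft.fft(z)
    coeffs[0] = 0                           # drop DC term -> translation invariance
    mags = np.abs(coeffs)                   # magnitudes -> rotation/start-point invariance
    mags /= mags[1] + 1e-12                 # divide by first harmonic -> scale invariance
    return np.r_[mags[1:1 + n_coeffs], mags[-n_coeffs:]]  # low-frequency descriptors

def match(desc, references):
    """Nearest-neighbor gesture classification over stored (label, descriptor) pairs."""
    labels, refs = zip(*references)
    dists = [np.linalg.norm(desc - r) for r in refs]
    return labels[int(np.argmin(dists))]
```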

Dynamic Training Algorithm for Hand Gesture Recognition System (손동작 인식 시스템을 위한 동적 학습 알고리즘)

  • Bae, Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.7 / pp.1348-1353 / 2007
  • We developed an augmented reality tool for vision-based hand gesture recognition in a camera-projector system. Our recognition method uses modified Fourier descriptors to classify static hand gestures. Hand segmentation is based on a background subtraction method, improved to handle background changes. Most recognition methods are trained and tested by the same person, with the training phase occurring only before the interaction. However, there are numerous situations in which several untrained users would like to interact through gestures. In our new, practical approach, misrecognized gestures are corrected during recognition itself. Our main result is quick online adaptation to a new user's gestures, achieving user-independent gesture recognition.

Tracking and Recognizing Hand Gestures using Kalman Filter and Continuous Dynamic Programming (연속DP와 칼만필터를 이용한 손동작의 추적 및 인식)

  • 문인혁;금영광
    • Proceedings of the IEEK Conference / 2002.06c / pp.13-16 / 2002
  • This paper proposes a method to track hand gestures and recognize gesture patterns using a Kalman filter and continuous dynamic programming (CDP). Hand positions are predicted by the Kalman filter, and the pixels belonging to the hands are extracted by a skin-color filter; the center of gravity of the hands serves as the input pattern vector. The input gesture is then recognized by matching it against reference gesture patterns with CDP. Experimental results on recognizing a circle-shaped gesture and intention gestures such as "come on" and "bye-bye" show that the proposed method is feasible for hand gesture-based human-computer interaction.
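
The tracking half of this pipeline is a standard Kalman filter over the hand position. A minimal constant-velocity sketch, with arbitrary placeholder noise values:

```python
import numpy as np

class HandKalman:
    """Constant-velocity Kalman filter over the state [x, y, vx, vy]."""
    def __init__(self, dt=1/30, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], float)  # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)   # we observe position only
        self.Q = q * np.eye(4)                     # process noise
        self.R = r * np.eye(2)                     # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]  # predicted position guides the skin-color search

    def update(self, z):
        """z: measured hand centroid from the skin-color filter."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```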

Development of Emotion-Based Human Interaction Method for Intelligent Robot (지능형 로봇을 위한 감성 기반 휴먼 인터액션 기법 개발)

  • Joo, Young-Hoon;So, Jea-Yun;Sim, Kee-Bo;Song, Min-Kook;Park, Jin-Bae
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.5 / pp.587-593 / 2006
  • This paper presents gesture analysis for human-robot interaction. Understanding human emotions through gesture is one of the skills computers need to interact intelligently with their human counterparts. Gesture analysis consists of several processes, such as hand detection, feature extraction, and emotion recognition. For efficient operation, gestures are recognized with a hidden Markov model (HMM). We constructed a large gesture database, with which we verified our method. As a result, our method was successfully integrated and operated in a mobile system.
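
HMM-based gesture recognition typically trains one model per gesture class and, at recognition time, picks the model that assigns the observed feature sequence the highest likelihood. A minimal sketch using the hmmlearn library; the feature pipeline and model sizes are assumptions, not the paper's setup:

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def train_gesture_hmms(dataset, n_states=4):
    """dataset: {label: list of (frames, feat_dim) feature sequences}."""
    models = {}
    for label, seqs in dataset.items():
        X = np.vstack(seqs)                 # concatenate this class's sequences
        lengths = [len(s) for s in seqs]    # so hmmlearn can split them back
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50)
        m.fit(X, lengths)                   # Baum-Welch training per gesture
        models[label] = m
    return models

def recognize(models, seq):
    """Return the gesture whose HMM gives the sequence the highest log-likelihood."""
    return max(models, key=lambda lbl: models[lbl].score(seq))
```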

Comparative Study on the Interface and Interaction for Manipulating 3D Virtual Objects in a Virtual Reality Environment (가상현실 환경에서 3D 가상객체 조작을 위한 인터페이스와 인터랙션 비교 연구)

  • Park, Kyeong-Beom;Lee, Jae Yeol
    • Korean Journal of Computational Design and Engineering / v.21 no.1 / pp.20-30 / 2016
  • Immersive virtual reality (VR) has recently become popular thanks to advances in I/O interfaces and the software used to construct VR environments. In particular, natural and intuitive manipulation of 3D virtual objects is still considered one of the most important user-interaction issues. This paper presents a comparative study, both quantitative and qualitative, of manipulating 3D virtual objects with different interfaces and interactions in three VR environments: 1) a typical desktop VR using mouse and keyboard, 2) a hand-gesture-supported desktop VR using a Leap Motion sensor, and 3) an immersive VR in which the user wears an HMD and interacts through hand gestures via a Leap Motion sensor. In the desktop VR with hand gestures, the Leap Motion sensor sits on the desk; in the immersive VR, it is mounted on the HMD so that the user can manipulate virtual objects in front of the HMD. For the quantitative analysis, task completion time and success rate were measured on tasks requiring complex 3D transformations, such as simultaneous 3D translation and 3D rotation. For the qualitative analysis, user-experience factors such as ease of use, naturalness of interaction, and stressfulness were evaluated. The analyses show that immersive VR with natural hand gestures provides more intuitive and natural interaction and supports fast, effective task completion, but causes more stressful conditions.
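
The hand-gesture manipulation being compared amounts to applying the tracked palm's frame-to-frame motion to the grabbed object, which is what allows translation and rotation to happen simultaneously. A generic sketch of that mapping, assuming palm position and orientation come from the tracking SDK (this is not code from the study):

```python
import numpy as np

def step_grabbed_object(obj_pos, obj_rot, prev_palm, palm):
    """Apply the palm's incremental rigid motion to the grabbed object.

    obj_pos: (3,) object position; obj_rot and palm rotations: 3x3 matrices.
    prev_palm, palm: (position, rotation) of the palm on consecutive frames.
    """
    prev_pos, prev_rot = prev_palm
    pos, rot = palm
    d_rot = rot @ prev_rot.T                  # incremental palm rotation
    # Rotate the object about the palm, then translate along with it:
    new_pos = pos + d_rot @ (obj_pos - prev_pos)
    new_rot = d_rot @ obj_rot
    return new_pos, new_rot
```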