• Title/Summary/Keyword: Gesture-based interface


RealBook: A Tangible Electronic Book Based on the Interface of TouchFace-V (RealBook: TouchFace-V 인터페이스 기반 실감형 전자책)

  • Song, Dae-Hyeon;Bae, Ki-Tae;Lee, Chil-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.12
    • /
    • pp.551-559
    • /
    • 2013
  • In this paper, we propose a tangible RealBook based on the TouchFace-V interface, which recognizes multi-touch input and hand gestures. TouchFace-V projects onto a flat surface such as a table without spatial constraints, and its configuration addresses the installation, calibration, and portability issues of most existing front-projected, vision-based tabletop displays. It provides hand touch and gesture input through computer-vision tracking, without extra sensors or traditional input devices. The RealBook combines the analog sensibility of printed text with the multimedia effects of an e-book, and it presents digitally created stories whose experiences and environments differ according to the choices users make while interacting with the book. We propose the RealBook as a new concept of electronic book, distinct from existing e-books, which together with the TouchFace-V interface provides more direct viewing and natural, intuitive interaction through hand touch and gestures.

Object Detection Using Predefined Gesture and Tracking (약속된 제스처를 이용한 객체 인식 및 추적)

  • Bae, Dae-Hee;Yi, Joon-Hwan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.10
    • /
    • pp.43-53
    • /
    • 2012
  • In this paper, a gesture-based user interface is proposed that detects an object by means of a predefined gesture and then tracks the detected object. For object detection, moving objects in a frame are found by comparing multiple previous frames, and the predefined gesture is used to pick out the target object among them; any object performing the predefined gesture can be used as a controller. We also propose an object tracking algorithm, a density-based mean-shift algorithm, that uses the color distribution of the target object. The proposed tracker follows a target object crossing a background of similar color more accurately than existing techniques. Experimental results show that the proposed detection and tracking algorithms achieve higher detection capability with less computational complexity.
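
    As a rough illustration of color-distribution tracking in the spirit of the abstract above, the sketch below uses OpenCV's standard histogram back-projection and mean-shift; it does not reproduce the paper's density-based variant, and the initial window coordinates and the assumption of a webcam stream are placeholders.

    ```python
    # Minimal color-histogram mean-shift tracking sketch (assumes a webcam and a
    # hand-picked initial window; not the paper's density-based variant).
    import cv2

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()

    # Placeholder initial window (x, y, w, h) around the target object.
    track_window = (200, 150, 80, 80)
    x, y, w, h = track_window
    roi = frame[y:y + h, x:x + w]

    # Model the target by its hue histogram, masking out dark/unsaturated pixels.
    hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
    roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Back-project the target histogram onto the frame and shift the window
        # toward the mode of the resulting probability image.
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, track_window = cv2.meanShift(back_proj, track_window, term_crit)
        x, y, w, h = track_window
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()
    ```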

A Comparison of the Characteristics between Single and Double Finger Gestures for Web Browsers

  • Park, Jae-Kyu;Lim, Young-Jae;Jung, Eui-S.
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.5
    • /
    • pp.629-636
    • /
    • 2012
  • Objective: The purpose of this study is to compare the characteristics of single- and double-finger gestures for web browsing and to extract appropriate finger gestures. Background: As electronic equipment is miniaturized to improve portability, various interfaces are being developed as input devices. Because devices are getting smaller, gesture recognition using touch-based interfaces is favored for easy editing. In addition, users focus primarily on the simplicity of intuitive interfaces, which motivates further research on gesture-based interfaces. Finger gestures in particular are simple, fast, and user friendly in such intuitive interfaces. Single- and double-finger gestures are becoming more popular, so more applications for these gestures are being developed. However, systems and software that employ such finger gestures lack consistency, and clear standards and guidelines are missing. Method: To learn how these gestures are applied, we used the sketch-map method, a memory-elicitation technique, and the MIMA (Meaning in Mediated Action) method to evaluate the gesture interface. Results: This study created gestures appropriate for intuitive judgment and conducted a usability test covering single- and double-finger gestures. Double-finger gestures showed shorter performance times than single-finger gestures. Single-finger gestures showed a wide satisfaction gap between similar and different types: they can be judged intuitively for similar types, but it is difficult to associate them with functions for different types. Conclusion: Double-finger gestures were found to be effective for associating functions with web navigation, especially for complex forms such as curve-shaped gestures. Application: This study aims to facilitate the design of products that use finger and hand gestures.

HMM-based Upper-body Gesture Recognition for Virtual Playing Ground Interface (가상 놀이 공간 인터페이스를 위한 HMM 기반 상반신 제스처 인식)

  • Park, Jae-Wan;Oh, Chi-Min;Lee, Chil-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.8
    • /
    • pp.11-17
    • /
    • 2010
  • In this paper, we propose HMM-based upper-body gesture recognition. To recognize gestures in space, the poses that compose a gesture must first be classified. To classify the poses used by the interface, we use two IR cameras installed at the front and at the side, so that for each pose we acquire both a frontal view and a side view. The acquired IR pose images are classified with an SVM using a non-linear RBF kernel, which reduces misclassification among poses that are not linearly separable. Sequences of classified poses are then recognized as gestures using the state transition matrices of HMMs. A recognized gesture can be applied to existing applications by mapping it to an OS-level value.
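
    A toy sketch of the general idea follows: classify a sequence of discrete pose labels by evaluating its likelihood under per-gesture HMMs with the forward algorithm. The pose alphabet, the number of states, and every parameter value below are invented for illustration; the paper's actual transition matrices are not reproduced.

    ```python
    # Toy HMM gesture classifier: each gesture has its own discrete HMM, and a
    # pose-label sequence is assigned to the gesture whose HMM gives the highest
    # forward-algorithm likelihood. All parameters below are illustrative only.
    import numpy as np

    def forward_log_likelihood(obs, start_p, trans_p, emit_p):
        """log P(obs | HMM) for a discrete observation sequence, with scaling."""
        alpha = start_p * emit_p[:, obs[0]]
        s = alpha.sum()
        log_lik = np.log(s)
        alpha = alpha / s
        for o in obs[1:]:
            alpha = (alpha @ trans_p) * emit_p[:, o]
            s = alpha.sum()
            log_lik += np.log(s)
            alpha = alpha / s
        return log_lik

    # Hypothetical pose alphabet: 0 = arms down, 1 = right arm up, 2 = both arms up.
    gesture_models = {
        "raise_right": dict(
            start_p=np.array([0.90, 0.05, 0.05]),
            trans_p=np.array([[0.60, 0.35, 0.05],
                              [0.05, 0.70, 0.25],
                              [0.05, 0.05, 0.90]]),
            emit_p=np.array([[0.80, 0.15, 0.05],
                             [0.10, 0.80, 0.10],
                             [0.05, 0.15, 0.80]]),
        ),
        "raise_both": dict(
            start_p=np.array([0.90, 0.05, 0.05]),
            trans_p=np.array([[0.50, 0.10, 0.40],
                              [0.10, 0.50, 0.40],
                              [0.02, 0.03, 0.95]]),
            emit_p=np.array([[0.80, 0.10, 0.10],
                             [0.10, 0.60, 0.30],
                             [0.05, 0.10, 0.85]]),
        ),
    }

    pose_sequence = [0, 0, 1, 1, 1, 2, 2]  # labels produced by the pose classifier
    best = max(gesture_models,
               key=lambda g: forward_log_likelihood(pose_sequence, **gesture_models[g]))
    print("recognized gesture:", best)
    ```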

Morphological Hand-Gesture Recognition Algorithm (형태론적 손짓 인식 알고리즘)

  • Choi, Jong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.8
    • /
    • pp.1725-1731
    • /
    • 2004
  • The use of gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction, which has motivated a very active research area concerned with computer-vision-based analysis and interpretation of hand gestures. The most important issues in gesture recognition are simplifying the algorithm and reducing processing time. Mathematical morphology, based on geometric set theory, is well suited to this processing. The key idea of the algorithm proposed in this paper is to apply morphological shape decomposition: the primitive elements extracted from a hand gesture carry important information about its directivity. Based on this characteristic, we propose a morphological gesture recognition algorithm using feature vectors computed from the lines connecting the center points of the main primitive element and the sub-primitive elements. Experiments demonstrate the efficiency of the proposed algorithm. Coupling natural interaction such as hand gestures with an appropriately designed interface is a valuable and powerful component in building TV channel navigation and video content browsing systems.
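
    A loose sketch of the decomposition idea: repeatedly open a binary hand mask with the largest disk that still fits to obtain a "main" primitive, subtract it, and take the remaining blobs as sub-primitives; the feature is then the set of vectors from the main centroid to the sub centroids. The structuring-element choice and the decomposition rule here are simplified placeholders, not the paper's exact procedure.

    ```python
    # Simplified morphological shape decomposition of a binary (uint8, 0/255,
    # non-empty) hand mask. Placeholder logic, not the paper's exact procedure.
    import cv2
    import numpy as np

    def largest_primitive(mask):
        """Approximate the largest disk-shaped primitive contained in `mask`."""
        best, r = None, 1
        while True:
            se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * r + 1, 2 * r + 1))
            core = cv2.erode(mask, se)
            if cv2.countNonZero(core) == 0:
                break
            best = cv2.dilate(core, se)  # opening by the largest fitting disk so far
            r += 1
        return best

    def centroid(mask):
        m = cv2.moments(mask, binaryImage=True)
        return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    def direction_features(hand_mask, max_subs=4):
        """Vectors from the main primitive's centroid to sub-primitive centroids."""
        main = largest_primitive(hand_mask)
        main_c = centroid(main)
        residue = cv2.subtract(hand_mask, main)
        feats = []
        for _ in range(max_subs):
            if cv2.countNonZero(residue) == 0:
                break
            sub = largest_primitive(residue)
            feats.append(centroid(sub) - main_c)  # directivity of a finger-like part
            residue = cv2.subtract(residue, sub)
        return np.array(feats)
    ```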

Design of Multi-Finger Flick Interface for Fast File Management on Capacitive-Touch-Sensor Device (정전기식 입력 장치에서의 빠른 파일 관리를 위한 다중 손가락 튕김 인터페이스 설계)

  • Park, Se-Hyun;Park, Tae-Jin;Choy, Yoon-Chul
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.8
    • /
    • pp.1235-1244
    • /
    • 2010
  • Most emerging smartphones support capacitive touch sensors, which makes existing gesture-based interfaces unsuitable because they were developed for resistive touch sensors and pen-based input. Unlike the flick gestures of existing gesture interfaces, the finger-flick gesture used in this paper roughly halves the workload by selecting both the target and the command to perform on it in a single touch input. Combined with multi-touch input, it supports various menu commands without requiring users to learn complex gestures, and it suits touch-based devices because it minimizes input error. This research designs and implements a multi-touch and flick interface to provide an effective file management system on smartphones with capacitive touch input. The evaluation shows that the suggested interface outperforms existing methods on capacitive touch input devices, as illustrated by the sketch below.
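
    To make the interaction concrete, here is a minimal, hypothetical classifier for a single-finger flick from raw touch samples: if the finger travels far enough and fast enough between touch-down and touch-up, the event is treated as a flick in one of four directions. The thresholds are invented and would need per-device tuning; this is not the paper's implementation.

    ```python
    # Hypothetical flick classifier from touch samples (timestamps in seconds,
    # positions in pixels). Thresholds are illustrative, not from the paper.
    import math

    FLICK_MIN_DIST = 60.0    # px travelled between touch-down and touch-up
    FLICK_MIN_SPEED = 400.0  # px/s average speed

    def classify_flick(samples):
        """samples: list of (t, x, y) from touch-down to touch-up."""
        (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
        dx, dy, dt = x1 - x0, y1 - y0, max(t1 - t0, 1e-6)
        dist = math.hypot(dx, dy)
        if dist < FLICK_MIN_DIST or dist / dt < FLICK_MIN_SPEED:
            return None  # too short or too slow: treat as a tap or drag instead
        if abs(dx) > abs(dy):
            return "flick_right" if dx > 0 else "flick_left"
        return "flick_down" if dy > 0 else "flick_up"

    # Example: a quick rightward swipe lasting 120 ms.
    print(classify_flick([(0.00, 100, 300), (0.06, 160, 302), (0.12, 210, 305)]))
    ```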

A Study on Tangible Gesture Interface Prototype Development of the Quiz Game (퀴즈게임의 체감형 제스처 인터페이스 프로토타입 개발)

  • Ahn, Jung-Ho;Ko, Jae-Pil
    • Journal of Digital Contents Society
    • /
    • v.13 no.2
    • /
    • pp.235-245
    • /
    • 2012
  • This paper introduces quiz-game content based on a gesture interface. We analyzed offline quiz games, extracted their presiding components, and digitized them so that the proposed content can substitute for the offline games. We used a Kinect camera to obtain depth images and performed preprocessing including vertical human segmentation, head detection and tracking, and hand detection, followed by gesture recognition for hand-up, vertical hand movement, fist shape, pass, and fist-and-attraction. In particular, we designed the interface gestures as metaphors for natural gestures in the real world, so that users can tangibly experience the abstract concepts of movement, selection, and confirmation. Compared with our previous work, we added a card compensation process for completeness, improved the vertical hand movement and fist shape recognition methods for answer selection, and present an organized test to measure recognition performance. The implemented quiz application was tested in real time and showed very satisfactory gesture recognition results.
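
    A rough sketch of the kind of depth-image preprocessing described above, assuming a Kinect depth frame is already available as a NumPy array in millimetres: segment the player by a depth band, locate the head as the topmost foreground band, and flag a naive "hand-up" cue when foreground appears well to the side of the head at head height. All thresholds and heuristics are placeholders, not the paper's method.

    ```python
    # Placeholder depth-image preprocessing for a gesture quiz interface.
    # `depth` is assumed to be an (H, W) uint16 array of millimetres from a Kinect.
    import numpy as np

    def segment_player(depth, near_mm=800, far_mm=2500):
        """Binary mask of pixels inside the expected player depth band."""
        return (depth > near_mm) & (depth < far_mm)

    def head_position(mask, head_band=40):
        """Topmost foreground row and the centroid column of that top band."""
        rows = np.where(mask.any(axis=1))[0]
        if len(rows) == 0:
            return None
        top = int(rows[0])
        cols = np.where(mask[top:top + head_band].any(axis=0))[0]
        return top, int(cols.mean())

    def hand_raised(mask, head, head_band=40, side_offset=60):
        """Naive 'hand up' cue: foreground shows up well beside the head, at
        head height (a raised hand appears next to the head in the depth mask)."""
        if head is None:
            return False
        top, head_col = head
        band = mask[top:top + head_band, :]
        cols = np.where(band.any(axis=0))[0]
        return bool(len(cols)) and (cols.max() - head_col > side_offset
                                    or head_col - cols.min() > side_offset)
    ```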

Hand Gesture Interface for Manipulating 3D Objects in Augmented Reality (증강현실에서 3D 객체 조작을 위한 손동작 인터페이스)

  • Park, Keon-Hee;Lee, Guee-Sang
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.5
    • /
    • pp.20-28
    • /
    • 2010
  • In this paper, we propose a hand gesture interface for manipulating augmented objects in 3D space using a camera. Generally, a marker is used to detect 3D movement in 2D images; however, marker-based systems have obvious drawbacks, since the marker must always appear in the image, or additional equipment is needed to control objects, which reduces immersion. To overcome this problem, we replace the marker with the planar hand shape by estimating the hand pose, and a Kalman filter is used for robust tracking of the hand shape. The experimental results indicate the feasibility of the proposed algorithm for hand-based AR interfaces.
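
    As a minimal illustration of the Kalman-filter tracking step, the sketch below runs a constant-velocity Kalman filter on noisy 2D hand-center measurements. The motion and noise parameters are assumptions, and the paper's hand-pose estimation itself is not shown; only the generic filtering step is.

    ```python
    # Constant-velocity Kalman filter over 2D hand-center measurements
    # (illustrative parameters; the hand detection step is assumed to exist).
    import numpy as np

    dt = 1.0 / 30.0                   # frame interval at 30 fps (assumption)
    F = np.array([[1, 0, dt, 0],      # state: [x, y, vx, vy]
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]])
    H = np.array([[1, 0, 0, 0],       # we only measure position
                  [0, 1, 0, 0]])
    Q = np.eye(4) * 1e-2              # process noise (assumed)
    R = np.eye(2) * 4.0               # measurement noise (assumed, px^2)

    x = np.zeros(4)                   # initial state
    P = np.eye(4) * 100.0             # initial uncertainty

    def kalman_step(z):
        """One predict/update cycle; z is the measured (x, y) hand center."""
        global x, P
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = z - H @ x                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        return x[:2]                           # filtered hand position

    for z in [np.array([120.0, 80.0]), np.array([125.0, 83.0]), np.array([131.0, 85.0])]:
        print(kalman_step(z))
    ```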

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.53-69
    • /
    • 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction. However, it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction for real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the weaknesses of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, and their performance is not limited by environmental context such as lighting conditions or a camera's field of view. Moreover, accelerometers are now widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 letters of the English alphabet, an essential repertoire for robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with sets of 8~10 simple, easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for successful pattern recognition. To improve discriminative power on the complex alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Preliminary experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features, e.g. the acceleration signal itself or statistical figures. To minimize the distortion of the trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that acceleration patterns for the same gesture vary greatly across performers. To tackle this problem, online incremental learning is applied so that the system adapts to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter, we observed that as the number of reference patterns grows, some of them contribute increasingly to false-positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; the algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30~50. Each letter was performed 5 times per participant using a Nintendo Wii remote, and the acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall rates and exhibited very high pairwise confusion rates; major confusion pairs are D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Although W was recalled perfectly, it contributed heavily to the false-positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior given the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted two case studies of robot-based edutainment services, implemented on various robot platforms and mobile devices including the iPhone. The participating children showed improved concentration and active reaction to the services with our gesture interface. To prove the effectiveness of the gesture interface, the children took a test after experiencing an English teaching service; those who used the gesture-interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for real-world robot-based services and content, complementing the limits of today's conventional interfaces, e.g. touch screens, vision, and voice.
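
    A schematic sketch of the pipeline described above: smooth the 3-axis acceleration, integrate it into a motion trajectory, resample to a fixed length, and classify with nearest-neighbour instance-based learning over memorized reference patterns. The filter settings, resampling length, and distance measure are assumptions, and the paper's periodic reference-set pruning is omitted.

    ```python
    # Schematic accelerometer-gesture pipeline: trajectory feature + 1-NN IBL.
    # Filter/resampling choices are assumptions, not the paper's exact settings.
    import numpy as np

    def smooth(acc, k=5):
        """Moving-average smoothing of an (N, 3) acceleration signal."""
        kernel = np.ones(k) / k
        return np.column_stack([np.convolve(acc[:, i], kernel, mode="same")
                                for i in range(acc.shape[1])])

    def trajectory(acc, dt=0.01):
        """Double-integrate acceleration into a rough motion trajectory."""
        a = smooth(acc)
        a = a - a.mean(axis=0)                # crude gravity/bias removal (assumption)
        vel = np.cumsum(a * dt, axis=0)
        return np.cumsum(vel * dt, axis=0)

    def resample(traj, n=32):
        """Resample a trajectory to n points and flatten it into a feature vector."""
        idx = np.linspace(0, len(traj) - 1, n)
        pts = np.column_stack([np.interp(idx, np.arange(len(traj)), traj[:, i])
                               for i in range(traj.shape[1])])
        pts = pts - pts.mean(axis=0)          # translation invariance
        norm = np.linalg.norm(pts)
        return (pts / norm if norm > 0 else pts).ravel()

    class IBLClassifier:
        """Instance-based learner: memorize reference patterns, classify by 1-NN."""
        def __init__(self):
            self.refs = []                    # list of (feature_vector, label)

        def learn(self, acc, label):
            self.refs.append((resample(trajectory(acc)), label))

        def classify(self, acc):
            f = resample(trajectory(acc))
            dists = [np.linalg.norm(f - r) for r, _ in self.refs]
            return self.refs[int(np.argmin(dists))][1]
    ```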

Gesture Recognition and Motion Evaluation Using Appearance Information of Pose in Parametric Gesture Space (파라메트릭 제스처 공간에서 포즈의 외관 정보를 이용한 제스처 인식과 동작 평가)

  • Lee, Chil-Woo;Lee, Yong-Jae
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.8
    • /
    • pp.1035-1045
    • /
    • 2004
  • In this paper, we describe a method that recognizes gestures and evaluates their degree from sequential gesture images using a gesture feature space. Popular previous methods based on HMMs and neural networks can classify a gesture into categories but have difficulty recognizing its degree. The proposed method recognizes not only the posture but also degree information of the gesture, such as speed and magnitude, by computing distances among the position vectors obtained by projecting input and model images into a parametric eigenspace. The method is a simple and robust recognition algorithm that can be applied in various applications such as intelligent interface systems and surveillance systems.
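
    A compact sketch of the eigenspace idea: stack flattened frames of a model gesture sequence, compute a low-dimensional eigenspace with PCA (via SVD), project both model and input sequences into it, and compare the resulting point trajectories by their distances. The image size, dimensionality, and trajectory distance are illustrative choices, not the paper's exact parametrisation.

    ```python
    # Eigenspace (PCA) projection of gesture image sequences and a simple
    # trajectory distance, in the spirit of the parametric gesture space above.
    import numpy as np

    def fit_eigenspace(frames, dim=8):
        """frames: (N, H*W) flattened model images. Returns mean and top components."""
        mean = frames.mean(axis=0)
        _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
        return mean, vt[:dim]                 # principal axes spanning the gesture space

    def project(frames, mean, components):
        """Project a sequence of flattened frames into the eigenspace."""
        return (frames - mean) @ components.T  # (N, dim) trajectory of points

    def trajectory_distance(a, b, n=20):
        """Mean point-wise distance after resampling both trajectories to length n."""
        def resample(t):
            idx = np.linspace(0, len(t) - 1, n)
            return np.column_stack([np.interp(idx, np.arange(len(t)), t[:, i])
                                    for i in range(t.shape[1])])
        return float(np.mean(np.linalg.norm(resample(a) - resample(b), axis=1)))

    # Usage with random stand-ins for real 64x64 image sequences.
    rng = np.random.default_rng(0)
    model_seq = rng.random((30, 64 * 64))
    input_seq = rng.random((25, 64 * 64))
    mean, comps = fit_eigenspace(model_seq)
    print("distance in gesture space:",
          trajectory_distance(project(model_seq, mean, comps),
                              project(input_seq, mean, comps)))
    ```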
