• Title/Summary/Keyword: Gesture-based interaction


Study on Gesture and Voice-based Interaction in Perspective of a Presentation Support Tool

  • Ha, Sang-Ho; Park, So-Young; Hong, Hye-Soo; Kim, Nam-Hun
    • Journal of the Ergonomics Society of Korea, v.31 no.4, pp.593-599, 2012
  • Objective: This study aims to implement a non-contact gesture-based interface for presentations and to analyze its effect as an information-transfer aid. Background: With rapid technological growth in the UI/UX area and the appearance of smart service products that require new human-machine interfaces, research on control devices using gesture or speech recognition is active. However, while system implementations abound, relatively few quantitative studies have examined the practical effects of this new interface type. Method: The system presented in this study is implemented with the Kinect® sensor from Microsoft. To investigate whether the proposed system is effective as a presentation support tool, we conducted experiments in which 40 participants attended several lectures in both a traditional lecture room (keyboard-based presentation control) and a non-contact gesture-based lecture room (Kinect-based presentation control); we evaluated their interest and immersion with respect to the lecture content and delivery method, and analyzed their understanding of the content. Result: Using ANOVA, we tested whether the gesture-based presentation system plays an effective role as a presentation support tool depending on the difficulty of the content. Conclusion: A non-contact gesture-based interface is a meaningful supportive device when delivering easy, simple information; however, its effect can vary with the content and the difficulty of the information provided. Application: The results presented in this paper may help in designing new human-machine (computer) interfaces for communication support tools.
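The statistical check described under Result can be illustrated in a few lines of Python. A minimal sketch, assuming hypothetical comprehension scores; the paper's real data and factor levels are not given in the abstract:

```python
# Hypothetical sketch of the ANOVA described in the abstract.
# The scores below are made up for illustration; the paper's real
# data, conditions, and difficulty levels are not reproduced here.
from scipy import stats

# Comprehension scores (0-100) for the same easy-content lecture,
# delivered with two presentation-control methods.
keyboard_scores = [72, 68, 75, 80, 71, 77, 69, 74]
kinect_scores   = [78, 81, 76, 84, 79, 82, 75, 80]

f_stat, p_value = stats.f_oneway(keyboard_scores, kinect_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value would suggest the presentation method affects
# comprehension at this content-difficulty level.
```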

Augmented Reality Authoring Tool with Marker & Gesture Interactive Features (마커 및 제스처 상호작용이 가능한 증강현실 저작도구)

  • Shim, Jinwook; Kong, Minje; Kim, Hayoung; Chae, Seungho; Jeong, Kyungho; Seo, Jonghoon; Han, Tack-Don
    • Journal of Korea Multimedia Society, v.16 no.6, pp.720-734, 2013
  • In this paper, we present an augmented reality authoring tool with which users can easily create augmented reality content using hand-gesture and marker-based interaction methods. Previous augmented reality authoring tools focused on augmenting virtual objects, and to interact with such content users had to rely on markers or sensors. We address this limited interaction problem by combining a marker-based interaction method with a gesture interaction method that uses a depth-sensing camera, the Kinect. In the proposed system, users can easily develop simple marker-based augmented reality content through the interface. Moreover, rather than providing only static content, the system offers ways for users to actively interact with the augmented reality content. Two marker interaction methods are provided: one uses two markers, and the other uses marker occlusion. In addition, by recognizing and tracking the user's bare hand, the system supports gesture interactions that can zoom in, zoom out, move, and rotate an object. A heuristic evaluation of the authoring tool and a usability comparison between marker and gesture interaction both yielded positive results.
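The marker-occlusion interaction can be sketched with off-the-shelf tools. A minimal illustration using OpenCV's ArUco module (an assumption made here; the paper does not name its marker library), where occlusion counts as intentional only after it persists for several frames:

```python
# Illustrative sketch of marker-occlusion interaction using OpenCV's
# ArUco module (requires opencv-contrib-python, version 4.7+ for the
# ArucoDetector API). The marker IDs and frame threshold are assumptions.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

EXPECTED_IDS = {0, 1}          # the two markers the content is authored with
missing_frames = {0: 0, 1: 0}  # consecutive frames each marker was unseen
TRIGGER_FRAMES = 10            # occlusion must persist to count as intentional

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    visible = set(ids.flatten()) if ids is not None else set()

    for marker_id in EXPECTED_IDS:
        if marker_id in visible:
            missing_frames[marker_id] = 0
        else:
            missing_frames[marker_id] += 1
            if missing_frames[marker_id] == TRIGGER_FRAMES:
                # Marker covered by the hand: fire the authored interaction.
                print(f"marker {marker_id} occluded -> trigger event")

    cv2.imshow("AR authoring sketch", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```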

Design and Implementation of a Stereoscopic Image Control System based on User Hand Gesture Recognition (사용자 손 제스처 인식 기반 입체 영상 제어 시스템 설계 및 구현)

  • Song, Bok Deuk; Lee, Seung-Hwan; Choi, HongKyw; Kim, Sung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.3, pp.396-402, 2022
  • User interactions are being developed in various forms, and interactions using human gestures in particular are being actively studied. Among them, hand gesture recognition is used as a human interface in the field of realistic media based on the 3D hand model. Interfaces based on hand gesture recognition help users access media more easily and conveniently. Such interaction should let users view images through fast and accurate hand gesture recognition, without restrictions imposed by the computing environment. This paper develops a fast and accurate hand gesture recognition algorithm using the open-source MediaPipe framework and the k-NN (k-nearest neighbors) machine learning method. In addition, to minimize dependence on the computing environment, a stereoscopic image control system based on user hand gesture recognition was designed and implemented using a web service environment for Internet-based delivery and a Docker container as a virtualized runtime.
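The pipeline named in the abstract (MediaPipe hand landmarks fed to a k-NN classifier) maps naturally onto Python. A minimal sketch, assuming a pre-collected training set stored in hypothetical gesture_features.npy / gesture_labels.npy files; the paper's actual gesture classes and feature encoding are not specified in the abstract:

```python
# Sketch: MediaPipe Hands for landmark extraction, scikit-learn k-NN
# for classification. Training data is assumed to be pre-collected.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def landmarks_to_features(hand_landmarks):
    """Flatten the 21 (x, y, z) hand landmarks into a 63-dim vector."""
    return np.array([[lm.x, lm.y, lm.z] for lm in hand_landmarks.landmark]).ravel()

# Hypothetical pre-collected training set: features and gesture labels.
X_train = np.load("gesture_features.npy")
y_train = np.load("gesture_labels.npy")
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        feats = landmarks_to_features(results.multi_hand_landmarks[0])
        gesture = knn.predict(feats.reshape(1, -1))[0]
        print("recognized gesture:", gesture)  # e.g., map to image controls
    if cv2.waitKey(1) & 0xFF == 27:
        break
cap.release()
```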

Development for Multi-modal Realistic Experience I/O Interaction System (멀티모달 실감 경험 I/O 인터랙션 시스템 개발)

  • Park, Jae-Un; Whang, Min-Cheol; Lee, Jung-Nyun; Heo, Hwan; Jeong, Yong-Mu
    • Science of Emotion and Sensibility, v.14 no.4, pp.627-636, 2011
  • The purpose of this study is to develop a multi-modal interaction system that provides a realistic and immersive experience through multi-modal interaction. The system recognizes user behavior, intention, and attention, overcoming the limitations of uni-modal interaction. It is based on gesture interaction methods, intuitive gesture interaction, and attention evaluation technology. The gesture interaction methods are built on sensors selected through a meta-analysis of the accuracy of 3-D gesture recognition technologies, and the elements of intuitive gesture interaction were refined through experiments. The attention evaluation technology was developed through physiological signal analysis. The system is divided into three modules: a motion recognition system, an eye-gaze detection system, and a bio-reaction sensing system. The first module uses an accelerometer and flex sensors to recognize the user's hand and finger movements. The second detects pupil movements and reactions. The third, a bio-reaction sensing (attention evaluation) system, tracks cardiovascular and skin-temperature responses. This study will inform the development of realistic digital entertainment technology.
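The three-module architecture can be expressed as a structural sketch. Everything below (class names, thresholds, the fusion rule) is an illustrative assumption, not the paper's implementation:

```python
# Hypothetical structural sketch of the three-module fusion described
# above; all names and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class MotionSample:      # accelerometer + flex-sensor module
    hand_moving: bool
    fingers_bent: int

@dataclass
class GazeSample:        # eye-gaze detection module
    on_screen: bool

@dataclass
class BioSample:         # cardiovascular / skin-temperature module
    heart_rate: float
    skin_temp: float

def estimate_attention(motion: MotionSample, gaze: GazeSample, bio: BioSample) -> str:
    """Combine the three modalities into a coarse attention label."""
    engaged = gaze.on_screen and (motion.hand_moving or motion.fingers_bent > 0)
    aroused = bio.heart_rate > 80.0  # illustrative threshold, not the paper's
    if engaged and aroused:
        return "high attention"
    if engaged:
        return "attending"
    return "distracted"

print(estimate_attention(MotionSample(True, 2), GazeSample(True), BioSample(85.0, 33.1)))
```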


Investigating Key User Experience Factors for Virtual Reality Interactions

  • Ahn, Junyoung; Choi, Seungho; Lee, Minjae; Kim, Kyungdoh
    • Journal of the Ergonomics Society of Korea, v.36 no.4, pp.267-280, 2017
  • Objective: The aim of this study is to investigate key user experience factors of interactions with Head Mounted Display (HMD) devices in the Virtual Reality Environment (VRE). Background: Virtual reality interaction research has progressed steadily as interaction methods and virtual reality devices have improved. Recently, virtual reality devices have converged on head-mounted displays, and HMD-based interaction types include Remote Controller, Head Tracking, and Hand Gesture. However, there are few studies on the usability evaluation of virtual reality, and the usability of HMD-based virtual reality in particular has not been investigated; such study is therefore needed. Method: HMD-based VR devices released recently offer only three interaction types: 'Remote Controller', 'Head Tracking', and 'Hand Gesture'. We surveyed 113 studies to identify the user experience factors or evaluation scales used for each interaction type, then summarized the key factors and scales by their frequency of use across the studies. Results: Key user experience factors differ by interaction type. For the remote controller they are 'Ease of learning', 'Ease of use', 'Satisfaction', 'Effectiveness', and 'Efficiency'. For head tracking they are 'Sickness', 'Immersion', 'Intuitiveness', 'Stress', 'Fatigue', and 'Ease of learning'. For hand gestures they are 'Ease of learning', 'Ease of use', 'Feedback', 'Consistent', 'Simple', 'Natural', 'Efficiency', 'Responsiveness', 'Usefulness', 'Intuitiveness', and 'Adaptability'. Conclusion: We identified key user experience factors for each interaction type through a literature review; objective measures were not considered because each study adopted different performance factors. Application: The results of this study can be used when evaluating the usability of HMD-based interactions in virtual reality.

A Decision Tree based Real-time Hand Gesture Recognition Method using Kinect

  • Chang, Guochao; Park, Jaewan; Oh, Chimin; Lee, Chilwoo
    • Journal of Korea Multimedia Society, v.16 no.12, pp.1393-1402, 2013
  • Hand gestures are one of the most popular communication methods in everyday life. In human-computer interaction applications, hand gesture recognition provides a natural way of communicating between humans and computers. There are two main approaches to hand gesture recognition: glove-based and vision-based methods. In this paper, we propose a vision-based hand gesture recognition method using Kinect. Using depth information makes the hand detection process efficient and robust. Finger labeling enables pose classification according to finger identity and the relationships between fingers, making classification more effective and accurate. Our system recognizes two gesture sets: in our experiments, the average accuracy on the American Sign Language (ASL) number gesture set is 94.33%, and on the general gesture set it is 95.01%. Since the system runs in real time with a high recognition rate, it can be embedded in various applications.
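Decision-tree pose classification over finger features can be sketched briefly. The feature encoding below (one extension flag per finger) is an assumption; the paper derives richer features from Kinect depth data and finger labeling:

```python
# Illustrative decision-tree classification of hand poses. The
# five-flag encoding and tiny training set are assumptions made
# for the example, not the paper's feature set.
from sklearn.tree import DecisionTreeClassifier

# Each row: [thumb, index, middle, ring, pinky] extension flags (1 = extended).
X = [
    [0, 1, 0, 0, 0],  # ASL "1"
    [0, 1, 1, 0, 0],  # ASL "2"
    [0, 1, 1, 1, 0],  # ASL "3"
    [0, 1, 1, 1, 1],  # ASL "4"
    [1, 1, 1, 1, 1],  # ASL "5"
]
y = ["1", "2", "3", "4", "5"]

tree = DecisionTreeClassifier().fit(X, y)
print(tree.predict([[0, 1, 1, 0, 0]]))  # -> ['2']
```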

Motion-Understanding Cell Phones for Intelligent User Interaction and Entertainment (지능형 UI와 Entertainment를 위한 동작 이해 휴대기기)

  • Cho, Sung-Jung; Choi, Eun-Seok; Bang, Won-Chul; Yang, Jing; Cho, Joon-Kee; Ki, Eun-Kwang; Sohn, Jun-Il; Kim, Dong-Yoon; Kim, Sang-Ryong
    • Proceedings of the HCI Society of Korea Conference, 2006.02a, pp.684-691, 2006
  • As many functionalities such as cameras and MP3 players converge onto mobile phones, more intuitive and engaging interaction methods are essential. In this paper, we present applications, and their enabling technologies, for gesture-interactive cell phones. They employ a gesture recognition algorithm and a real-time shake detection algorithm to support motion-based user interfaces and entertainment applications, respectively. The gesture recognition algorithm classifies a user's movement into one of the predefined gestures by modeling the basic components of the acceleration signals and their relationships; recognition performance is further enhanced by discriminating frequently confused classes with support vector machines. The shake detection algorithm detects, in real time, the exact moment the phone is shaken significantly, using the variance and mean of the acceleration signals. The gesture interaction algorithms perform reliably enough for commercialization: with 100 novice users, the average recognition rate was 96.9% on 11 gestures (digits 1-9, O, X), and users' movements were detected in real time. We have applied these motion-understanding technologies to Samsung cell phones in the Korean, American, Chinese, and European markets since May 2005.
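The shake detector described here (variance and mean of acceleration over a short window) admits a compact sketch; the window size and thresholds below are illustrative assumptions, not the paper's tuned values:

```python
# Sketch of variance-based shake detection over accelerometer samples.
import numpy as np

def detect_shake(accel: np.ndarray, window: int = 20, var_threshold: float = 4.0):
    """accel: (n, 3) accelerometer samples; returns indices where a shake starts."""
    magnitude = np.linalg.norm(accel, axis=1)
    shakes = []
    for i in range(len(magnitude) - window):
        seg = magnitude[i:i + window]
        # High variance around the resting mean signals vigorous shaking.
        if seg.var() > var_threshold and abs(seg.mean() - magnitude.mean()) < 2.0:
            shakes.append(i)
    return shakes

# Synthetic example: quiet signal with a burst of shaking in the middle.
rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.1, (100, 3)) + [0, 0, 9.8]   # resting, gravity on z
shake = rng.normal(0, 3.0, (40, 3)) + [0, 0, 9.8]    # vigorous movement
signal = np.vstack([quiet, shake, quiet])
print("shake detected near samples:", detect_shake(signal)[:3])
```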


Hand Gesture Recognition Suitable for Wearable Devices using Flexible Epidermal Tactile Sensor Array

  • Byun, Sung-Woo; Lee, Seok-Pil
    • Journal of Electrical Engineering and Technology, v.13 no.4, pp.1732-1739, 2018
  • With the explosion of digital devices, interaction technologies between humans and devices are needed more than ever. Hand gesture recognition is especially advantageous because it is easy to use. Approaches divide into two groups: contact sensors and non-contact sensors. Compared with non-contact gesture recognition, contact gesture recognition can classify gestures that move out of a sensor's line of sight; moreover, since the sensor is in direct contact with the user, relatively accurate information can be acquired. Electromyography (EMG) and force-sensitive resistors (FSRs) are the typical methods for contact gesture recognition based on muscle activity; these sensors, however, are generally too sensitive to environmental disturbances such as electrical noise and electromagnetic interference. In this paper, we propose a novel contact gesture recognition method based on a Flexible Epidermal Tactile Sensor Array (FETSA), which measures electrical signals produced by movements of the wrist. To recognize gestures using FETSA, we extract feature sets and then classify the gestures with a support vector machine. The performance of the proposed gesture recognition method is very promising in comparison with two previous non-contact and contact gesture recognition studies.
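The classification stage (features extracted from the tactile array, then an SVM) can be sketched as follows. The 16-channel array size, the per-channel mean/variance features, the synthetic data, and the gesture labels are all assumptions for illustration:

```python
# Sketch of feature extraction + SVM classification for a tactile
# sensor array. Data below is synthetic; the paper's real signals,
# feature sets, and gestures are not reproduced here.
import numpy as np
from sklearn.svm import SVC

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 16) raw sensor readings -> mean/var per channel."""
    return np.concatenate([window.mean(axis=0), window.var(axis=0)])

rng = np.random.default_rng(1)
# Synthetic training windows for two wrist gestures (stand-ins for real data).
flexion   = [extract_features(rng.normal(0.8, 0.1, (50, 16))) for _ in range(20)]
extension = [extract_features(rng.normal(0.2, 0.1, (50, 16))) for _ in range(20)]
X = np.vstack([flexion, extension])
y = ["flexion"] * 20 + ["extension"] * 20

clf = SVC(kernel="rbf").fit(X, y)
test = extract_features(rng.normal(0.8, 0.1, (50, 16)))
print(clf.predict(test.reshape(1, -1)))  # -> ['flexion']
```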

A method for image-based shadow interaction with virtual objects

  • Ha, Hyunwoo; Ko, Kwanghee
    • Journal of Computational Design and Engineering, v.2 no.1, pp.26-37, 2015
  • Many researchers have investigated interactive portable projection systems such as mini-projectors. In exhibition halls and museums, there is also a trend toward interactive projection systems that make viewing more exciting and impressive, and they can be applied in the arts, for example in creating shadow plays. The key idea of an interactive projection system is to recognize the user's gestures in real time. In this paper, a vision-based shadow gesture recognition method is proposed for interactive projection systems. The recognition method operates on the screen image captured by a single web camera. It separates the shadow area by combining a binary image with the input image, using a learning algorithm that isolates the background. Regions of interest are identified by labeling the separated shadow regions, and hand shadows are then isolated using the convexity defects, convex hull, and moments of each region. Hand gestures are distinguished with Hu's invariant moments, and an optical flow algorithm tracks the fingertip. Several interactive applications developed with this method are presented in the paper.
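The pipeline names standard building blocks (learned background subtraction, region labeling, convex hull, Hu moments) that exist in OpenCV; a minimal sketch under that assumption, with illustrative thresholds and camera index:

```python
# Sketch of a shadow-gesture pipeline built from the OpenCV primitives
# the abstract names. MOG2's shadow flag stands in for the paper's
# learned background/shadow separation; thresholds are assumptions.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
cap = cv2.VideoCapture(0)  # web camera viewing the projection screen

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # MOG2 marks shadow pixels with value 127; keep only those.
    shadow = cv2.inRange(mask, 120, 135)
    contours, _ = cv2.findContours(shadow, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 2000:  # ignore small noise regions
            continue
        hull = cv2.convexHull(c)
        hu = cv2.HuMoments(cv2.moments(c)).ravel()
        # hu would be matched against stored gesture templates here.
        cv2.drawContours(frame, [hull], -1, (0, 255, 0), 2)
    cv2.imshow("shadow interaction sketch", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
cap.release()
```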