• Title/Summary/Keyword: Gesture based interface


A Study on Developmental Direction of Interface Design for Gesture Recognition Technology

  • Lee, Dong-Min;Lee, Jeong-Ju
    • 대한인간공학회지 / Volume 31, Issue 4 / pp.499-505 / 2012
  • Objective: Research on the transformation of interaction between mobile machines and users through analysis of current gesture interface technology development trends. Background: For smooth interaction between machines and users, interface technology has evolved from the "command line" to the "mouse", and now "touch" and "gesture recognition" are being researched and used. In the future, the technology is expected to evolve into "multi-modal" interfaces, fusing the visual and auditory senses, and "3D multi-modal" interfaces, which use three-dimensional virtual worlds and brain waves. Method: Following the development of computer interfaces, which tracks the evolution of mobile machines, trends in actively researched gesture interfaces and related technologies are studied comprehensively. Based on gesture-based information-gathering techniques, they are separated into four categories: sensor, touch, visual, and multi-modal gesture interfaces. Each category is examined through technology trends and existing real-world examples. Through these methods, the transformation of interaction between mobile machines and humans is studied. Conclusion: Gesture-based interface technology realizes intelligent communication in the interaction between formerly static machines and users. Thus, it is a key element technology that will make the interaction between humans and machines more dynamic. Application: The results of this study may help develop the gesture interface designs currently in use.

Conditions of Applications, Situations and Functions Applicable to Gesture Interface

  • Ryu, Tae-Beum;Lee, Jae-Hong;Song, Joo-Bong;Yun, Myung-Hwan
    • 대한인간공학회지 / Volume 31, Issue 4 / pp.507-513 / 2012
  • Objective: This study developed a hierarchy of conditions of applications(devices), situations and functions which are applicable to gesture interface. Background: Gesture interface is one of the promising interfaces for natural and intuitive interaction with intelligent machines and environments. Although there have been many studies on developing new gesture-based devices and gesture interfaces, little was known about which applications, situations and functions are applicable to gesture interface. Method: This study searched about 120 papers relevant to designing and applying gesture interfaces and vocabularies to find the gesture-applicable conditions of applications, situations and functions. The conditions extracted from 16 closely related papers were rearranged, and a hierarchy of them was developed to evaluate the applicability of applications, situations and functions to gesture interface. Results: This study summarized 10, 10 and 6 conditions of applications, situations and functions, respectively. In addition, the hierarchy of gesture-applicable conditions for applications, situations and functions was developed based on the semantic similarity, ordering, and serial or parallel relationships among them. Conclusion: This study collected gesture-applicable conditions of applications, situations and functions, and a hierarchy of them was developed to evaluate the applicability of gesture interface. Application: The gesture-applicable conditions and hierarchy can be used in developing a framework and detailed criteria to evaluate the applicability of applications, situations and functions. Moreover, they can enable designers of gesture interfaces and vocabularies to determine which applications, situations and functions are applicable to gesture interface.
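As a rough illustration of how such a condition hierarchy could be used to score applicability, the sketch below computes a weighted fraction of satisfied conditions. The condition names and weights are invented for the example and are not the paper's actual 10/10/6 conditions.

```python
# Hypothetical applicability scoring over a flat set of conditions.
# Condition names and weights are illustrative assumptions.

def applicability_score(conditions: dict, weights: dict) -> float:
    """Weighted fraction of satisfied conditions, in [0, 1]."""
    total = sum(weights.values())
    met = sum(weights[c] for c, ok in conditions.items() if ok)
    return met / total

# Example: evaluating a living-room TV as a gesture-interface candidate.
device_conditions = {
    "hands_free_needed": True,    # user's hands are otherwise occupied
    "distance_to_device": True,   # device is out of arm's reach
    "noisy_environment": False,   # speech would also work here
}
weights = {"hands_free_needed": 3, "distance_to_device": 2, "noisy_environment": 1}

score = applicability_score(device_conditions, weights)
print(round(score, 3))  # 5 of the 6 weight units are satisfied
```

A fuller implementation would walk the paper's hierarchy rather than a flat dictionary, aggregating scores per level.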

A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young;Bae, Ki-Tae
    • International Journal of Contents / Volume 7, Issue 1 / pp.29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface using spatial context information. The proposed gesture interface recognizes a system action (e.g. a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed using a smart environment scenario in which a user can interact with digital information embedded in physical objects using gestures.
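A minimal sketch of the kind of probabilistic fusion this abstract describes: gesture evidence and spatial-context evidence are combined naive-Bayes style and renormalized. The action names, likelihood values, and the specific fusion rule are assumptions for illustration, not the paper's actual model.

```python
# Illustrative fusion of gesture evidence with spatial context.
# All numbers and labels below are invented placeholders.

def fuse(gesture_lh: dict, context_lh: dict) -> dict:
    """Combine two likelihood tables over candidate actions
    (naive-Bayes style) and renormalize into a posterior."""
    scores = {a: gesture_lh[a] * context_lh.get(a, 0.0) for a in gesture_lh}
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()} if z else scores

# P(observed hand motion | action): the raw recognizer is uncertain.
gesture_lh = {"turn_on_lamp": 0.4, "open_curtain": 0.35, "volume_up": 0.25}
# P(spatial context | action): the gesture was performed near the lamp.
context_lh = {"turn_on_lamp": 0.7, "open_curtain": 0.2, "volume_up": 0.1}

posterior = fuse(gesture_lh, context_lh)
best = max(posterior, key=posterior.get)
print(best)  # spatial context disambiguates the ambiguous gesture
```

The point of the example is the disambiguation effect: an ambiguous gesture becomes a confident command once the gesture-target context is folded in.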

가상현실 학습환경에서 동작기반 인터페이스가 실재감 지각 및 수행에 미치는 효과 (The Effect of Gesture Based Interface on Presence Perception and Performance in the Virtual Reality Learning Environment)

  • 류지헌;유승범
    • 한국교육학연구 / Volume 23, Issue 1 / pp.35-56 / 2017
  • This study examined the effect of a gesture-based interface in a virtual reality learning environment. Because a gesture-based interface operates by recognizing the user's movements, it has the advantage, unlike conventional interfaces, of allowing physical movement to be expressed naturally. For this reason, this study was conducted to verify whether a gesture-based interface actually produces positive effects in an immersive learning environment such as virtual reality. In particular, it examined the effect of a gesture-based interface when used together with an immersive display in a virtual reality learning space. Forty-four university students participated, and the effects of display immersiveness (head-mounted vs. monitor) and interface type (gesture-based vs. joystick) were tested. According to the results, the gesture-based interface had a positive effect for learning content without a spatial structure, whereas for learning content with a spatial structure the joystick was more effective. In addition, using a gesture-based interface together with a highly immersive medium such as a head-mounted display did not further increase perceived presence. The advantages of gesture-based interfaces and how they can be used in virtual reality learning spaces are discussed.

Study on Gesture and Voice-based Interaction in Perspective of a Presentation Support Tool

  • Ha, Sang-Ho;Park, So-Young;Hong, Hye-Soo;Kim, Nam-Hun
    • 대한인간공학회지 / Volume 31, Issue 4 / pp.593-599 / 2012
  • Objective: This study aims to implement a non-contact gesture-based interface for presentation purposes and to analyze the effect of the proposed interface as an information transfer assistance device. Background: Recently, research on control devices using gesture recognition or speech recognition has been conducted alongside rapid technological growth in the UI/UX area and the appearance of smart service products that require a new human-machine interface. However, relatively few quantitative studies on the practical effects of the new interface type have been done, while work on system implementation is very popular. Method: The system presented in this study is implemented with the KINECT® sensor offered by Microsoft Corporation. To investigate whether the proposed system is effective as a presentation support tool, we conducted experiments by giving several lectures to 40 participants in both a traditional lecture room (keyboard-based presentation control) and a non-contact gesture-based lecture room (KINECT-based presentation control), evaluating their interest and immersion based on the contents of the lecture and the lecturing methods, and analyzing their understanding of the lecture contents. Result: Using ANOVA, we examined whether the gesture-based presentation system can play an effective role as a presentation support tool depending on the level of difficulty of the contents. Conclusion: A non-contact gesture-based interface is a meaningful supportive device when delivering easy and simple information. However, the effect can vary with the contents and the difficulty of the information provided. Application: The results presented in this paper might help to design a new human-machine(computer) interface for communication support tools.
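For readers unfamiliar with the analysis mentioned above, a one-way ANOVA between two lecture conditions can be sketched as below. The comprehension scores are fabricated placeholders, not the study's data, and the study's actual design may involve more factors.

```python
# Minimal one-way ANOVA F statistic, mirroring the kind of comparison
# the study runs between keyboard-based and KINECT-based lectures.

def f_oneway(*groups):
    """Return the one-way ANOVA F statistic for two or more groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

keyboard = [3.1, 3.4, 2.9, 3.2, 3.0]   # placeholder comprehension scores
kinect   = [3.8, 3.6, 4.0, 3.5, 3.9]   # placeholder comprehension scores
print(round(f_oneway(keyboard, kinect), 2))  # 25.6
```

A large F relative to the F(k-1, n-k) distribution would indicate a difference between the two lecture conditions; in practice one would use a library routine such as `scipy.stats.f_oneway` to obtain the p-value as well.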

A Framework for Designing Closed-loop Hand Gesture Interface Incorporating Compatibility between Human and Monocular Device

  • Lee, Hyun-Soo;Kim, Sang-Ho
    • 대한인간공학회지 / Volume 31, Issue 4 / pp.533-540 / 2012
  • Objective: This paper targets a framework for hand gesture based interface design. Background: While modeling of contact-based interfaces has focused on ergonomic interface designs and real-time technologies, implementation of a contactless interface needs error-free classification as an essential precondition. These trends have led many studies to concentrate on the design of feature vectors, learning models and their tests. Even though there have been remarkable advances in this field, ignoring ergonomics and users' cognition results in several problems, including uncomfortable user behaviors. Method: In order to incorporate compatibility, considering users' comfortable behaviors and the device's classification abilities simultaneously, classification-oriented gestures are extracted using the suggested human-hand model and closed-loop classification procedures. From the extracted gestures, compatibility-oriented gestures are acquired through ergonomic and cognitive experiments. The obtained hand gestures are then converted into a series of hand behaviors - Handycon - which is mapped onto several functions in a mobile device. Results: This Handycon model guarantees easy user behavior and supports fast understanding as well as a high classification rate. Conclusion and Application: The suggested framework contributes to developing hand gesture-based contactless interface models that consider compatibility between human and device. The suggested procedures can be applied effectively to other contactless interface designs.
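The "series of hand behaviors mapped onto device functions" idea can be sketched as a simple lookup from classified gesture sequences to commands. The gesture labels and function table here are hypothetical illustrations; the paper's actual Handycon vocabulary is not reproduced.

```python
# Hypothetical Handycon-style mapping: a short sequence of classified
# hand gestures is looked up as one device function. All labels are
# invented for illustration.

HANDYCON_MAP = {
    ("fist", "open_palm"): "unlock_screen",
    ("open_palm", "swipe_left"): "next_photo",
    ("open_palm", "swipe_right"): "previous_photo",
    ("fist", "fist"): "lock_screen",
}

def interpret(gesture_sequence: tuple) -> str:
    """Map a classified gesture sequence to a device function."""
    return HANDYCON_MAP.get(gesture_sequence, "no_action")

print(interpret(("fist", "open_palm")))  # unlock_screen
print(interpret(("swipe_up",)))          # no_action: unknown sequence
```

Sequences rather than single poses give the classifier more context, which is one way a closed-loop design can trade a slightly longer gesture for a lower error rate.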

주거 공간에서의 3차원 핸드 제스처 인터페이스에 대한 사용자 요구사항 (User Needs of Three Dimensional Hand Gesture Interfaces in Residential Environment Based on Diary Method)

  • 정동영;김희진;한성호;이동훈
    • 대한산업공학회지 / Volume 41, Issue 5 / pp.461-469 / 2015
  • The aim of this study is to find out users' needs for a 3D hand gesture interface in the smart home environment. To do so, we investigated which objects users want to control with a 3D hand gesture interface and why they want to use one. 3D hand gesture interfaces are being studied for application to various devices in the smart environment; they enable users to control their surroundings with natural and intuitive hand gestures. Given these advantages, identifying users' needs for a 3D hand gesture interface can improve the user experience of a product. This study was conducted using a diary method with 20 participants. For one week, they recorded their needs for a 3D hand gesture interface in diary forms covering who, when, where, what and how they would use the interface, each entry with a usefulness score. A total of 322 entries (209 valid and 113 erroneous) were collected. There were common objects the users wanted to control with a 3D hand gesture interface and common reasons for wanting one: users most wanted to control the light, and the most common reason was to overcome hand restrictions. The results of this study can support effective and efficient research on 3D hand gesture interfaces, giving valuable insights to researchers and designers. In addition, they could be used to create guidelines for 3D hand gesture interfaces.
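Aggregating that kind of diary data is straightforward to sketch: per target object, count mentions and average the usefulness score. The entries below are made up; the study's own form recorded who/when/where/what/how plus a usefulness rating.

```python
# Toy aggregation of diary-method entries (fabricated data).
from collections import defaultdict

entries = [
    {"object": "light", "usefulness": 6},
    {"object": "light", "usefulness": 7},
    {"object": "TV", "usefulness": 5},
    {"object": "light", "usefulness": 5},
    {"object": "curtain", "usefulness": 4},
]

scores = defaultdict(list)
for e in entries:
    scores[e["object"]].append(e["usefulness"])

# Per object: (number of mentions, mean usefulness score).
summary = {obj: (len(v), sum(v) / len(v)) for obj, v in scores.items()}
most_wanted = max(summary, key=lambda o: summary[o][0])
print(most_wanted, summary[most_wanted])  # the most-mentioned object
```

With real diary data, the same tally would surface the study's finding that the light was the most requested target.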

손 제스처 인식에 기반한 Virtual Block 게임 인터페이스 (Virtual Block Game Interface based on the Hand Gesture Recognition)

  • 윤민호;김윤제;김태영
    • 한국게임학회 논문지 / Volume 17, Issue 6 / pp.113-120 / 2017
  • With recent advances in virtual reality technology, research on user-friendly hand gesture interfaces that enable natural interaction with virtual 3D objects has been active. However, most studies support only a small number of simple hand gestures. This paper proposes a more intuitive hand gesture interface method for interacting with 3D objects in virtual environments. For gesture recognition, preprocessed hand data are first classified with a binary decision tree. The classified data are resampled, chain codes are generated, and feature data are constructed from histograms of the chain codes. Based on these features, a trained MCSVM performs a second-stage classification to recognize the gesture. To validate the method, we implemented a game called 'Virtual Block', in which 3D blocks are manipulated by hand gestures. Experiments showed a recognition rate of 99.2% for 16 gestures, and the interface was found to be more intuitive and user-friendly than existing interfaces.
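The chain-code histogram feature in this pipeline can be sketched concretely: a resampled contour is encoded as 8-direction Freeman chain codes, and their normalized histogram forms the feature vector fed to the SVM stage. The contour below is a toy square, not real hand data, and the preprocessing and MCSVM stages are omitted.

```python
# Sketch of the chain-code histogram feature (toy data).
import math

def chain_codes(points):
    """8-direction Freeman chain code between consecutive contour points."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int(round(angle / (math.pi / 4))) % 8)
    return codes

def histogram_feature(codes):
    """Normalized 8-bin histogram of the chain codes."""
    hist = [0.0] * 8
    for c in codes:
        hist[c] += 1.0
    return [h / len(codes) for h in hist]

# Toy "resampled contour": a unit square traced corner to corner.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
feature = histogram_feature(chain_codes(square))
print(feature)  # equal mass in directions 0, 2, 4 and 6
```

Because the histogram discards where on the contour each direction occurs, the feature is compact and tolerant of small shape variations, which suits it as SVM input.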

Gesture based Natural User Interface for e-Training

  • Lim, C.J.;Lee, Nam-Hee;Jeong, Yun-Guen;Heo, Seung-Il
    • 대한인간공학회지 / Volume 31, Issue 4 / pp.577-583 / 2012
  • Objective: This paper describes the process and results of developing a gesture recognition-based natural user interface(NUI) for a vehicle maintenance e-Training system. Background: E-Training refers to education and training that builds and improves the capabilities necessary to perform tasks by using information and communication technology(simulation, 3D virtual reality, and augmented reality), devices(PC, tablet, smartphone, and HMD), and environments(wired/wireless internet and cloud computing). Method: Palm movement from a depth camera is used as a pointing device, while finger movement, extracted using the OpenCV library, serves as the selection protocol. Results: The proposed NUI allows trainees to control objects, such as cars and engines, on a large screen through gesture recognition. In addition, it includes a learning environment for understanding the procedure of assembling or disassembling certain parts. Conclusion: Future work concerns the implementation of gesture recognition technology for multiple trainees. Application: The results of this interface can be applied not only in e-Training systems, but also in other systems, such as digital signage, tangible games, and 3D content control.
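The "palm as pointing device" step reduces to mapping the palm coordinate reported by the depth camera into screen pixels. The sketch below assumes a linear mapping with made-up camera ranges and screen resolution; the paper's actual calibration and OpenCV-based finger extraction are not reproduced.

```python
# Hypothetical palm-to-pointer mapping (assumed ranges and resolution).

def palm_to_screen(palm_xy, cam_range=((-0.5, 0.5), (-0.4, 0.4)),
                   screen=(1920, 1080)):
    """Linearly map a palm (x, y) position in metres to pixel
    coordinates, clamping to the screen edges."""
    (x_min, x_max), (y_min, y_max) = cam_range
    px = (palm_xy[0] - x_min) / (x_max - x_min) * screen[0]
    py = (palm_xy[1] - y_min) / (y_max - y_min) * screen[1]
    return (max(0, min(int(px), screen[0] - 1)),
            max(0, min(int(py), screen[1] - 1)))

print(palm_to_screen((0.0, 0.0)))  # palm at centre -> middle of screen
print(palm_to_screen((0.9, 0.9)))  # out of range -> clamped to a corner
```

In a real system the raw palm track would also be smoothed (e.g. with an exponential or Kalman filter) before mapping, to keep the on-screen pointer from jittering.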

Study on User Interface for a Capacitive-Sensor Based Smart Device

  • Jung, Sun-IL;Kim, Young-Chul
    • 스마트미디어저널 / Volume 8, Issue 3 / pp.47-52 / 2019
  • In this paper, we designed HW/SW interfaces for processing the signals of capacitive sensors such as the Electric Potential Sensor (EPS), which detect surrounding electric field disturbances as feature signals in motion recognition systems. We implemented a smart light control system with those interfaces. In the system, the on/off switch and brightness adjustment are controlled by hand gestures using the designed and fabricated interface circuits. PWM (Pulse Width Modulation) signals from the controller, with a driver IC, drive the LED and control brightness and on/off operation. Using the hand-gesture signals obtained through EPS sensors and the interface HW/SW, we can not only construct a gesture instruction system but also achieve faster recognition by developing dedicated interface hardware, including control circuitry. Finally, using the proposed hand-gesture recognition and signal processing methods, the light control module was also designed and implemented. The experimental results show that the smart light control system can control the LED module properly through accurate motion detection and gesture classification.
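The control logic described above (gesture toggles the light, gesture adjusts brightness, brightness becomes a PWM duty cycle) can be sketched as below. The gesture labels, step size, and 8-bit duty range are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical gesture-to-PWM control logic (assumed labels and ranges).

class SmartLight:
    def __init__(self):
        self.on = False
        self.brightness = 0.5  # fraction of full brightness

    def handle_gesture(self, gesture: str):
        if gesture == "wave":          # toggle on/off
            self.on = not self.on
        elif gesture == "raise_hand":  # step brighter
            self.brightness = min(1.0, self.brightness + 0.1)
        elif gesture == "lower_hand":  # step dimmer
            self.brightness = max(0.0, self.brightness - 0.1)

    def pwm_duty(self) -> int:
        """8-bit PWM duty cycle sent to the LED driver IC."""
        return round(255 * self.brightness) if self.on else 0

light = SmartLight()
light.handle_gesture("wave")        # turn on
light.handle_gesture("raise_hand")  # 0.5 -> 0.6 brightness
print(light.pwm_duty())  # 153
```

On a microcontroller, the duty value would be written to a PWM timer register; here it is simply returned so the mapping from classified gesture to LED drive level is visible.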