• Title/Summary/Keyword: Gesture-Based User Interface

A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young; Bae, Ki-Tae
    • International Journal of Contents / v.7 no.1 / pp.29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface using spatial context information. The proposed gesture interface recognizes a system action (e.g., a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial contexts are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed for a smart environment scenario in which a user can interact, through gestures, with digital information embedded in physical objects.
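
To make the probabilistic framework concrete: below is a minimal naive-Bayes-style sketch of fusing a gesture with the two spatial contexts (gesture volume, gesture target). The action labels and probability tables are hypothetical illustrations, not values from the paper.

```python
# Hypothetical naive-Bayes fusion of gesture and spatial context.
# All labels and probabilities are illustrative, not from the paper.
ACTIONS = ["turn_on_lamp", "open_curtain"]
prior = {"turn_on_lamp": 0.5, "open_curtain": 0.5}
p_gesture = {"turn_on_lamp": {"point": 0.7, "wave": 0.3},
             "open_curtain": {"point": 0.2, "wave": 0.8}}
p_volume = {"turn_on_lamp": {"small": 0.8, "large": 0.2},
            "open_curtain": {"small": 0.3, "large": 0.7}}
p_target = {"turn_on_lamp": {"lamp": 0.9, "window": 0.1},
            "open_curtain": {"lamp": 0.1, "window": 0.9}}

def recognize(gesture, volume, target):
    """Pick the action maximizing P(a) * P(gesture|a) * P(volume|a) * P(target|a)."""
    scores = {a: prior[a] * p_gesture[a][gesture] *
                 p_volume[a][volume] * p_target[a][target] for a in ACTIONS}
    return max(scores, key=scores.get)

print(recognize("wave", "large", "window"))  # -> open_curtain
```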

User Needs of Three Dimensional Hand Gesture Interfaces in Residential Environment Based on Diary Method (주거 공간에서의 3차원 핸드 제스처 인터페이스에 대한 사용자 요구사항)

  • Jeong, Dong Yeong; Kim, Heejin; Han, Sung H.; Lee, Donghun
    • Journal of Korean Institute of Industrial Engineers / v.41 no.5 / pp.461-469 / 2015
  • The aim of this study is to identify users' needs for a 3D hand gesture interface in the smart home environment. To do so, we investigated which objects the users want to control with a 3D hand gesture interface and why they want to use one. 3D hand gesture interfaces have been studied for application to various devices in the smart environment; they enable users to control the smart environment with natural and intuitive hand gestures. Given these advantages, identifying user needs for a 3D hand gesture interface can improve the user experience of a product. This study used a diary method with 20 participants, who recorded their needs for a 3D hand gesture interface over one week using diary forms. Each diary form captured who, when, where, what, and how a 3D hand gesture interface would be used, each entry with a usefulness score. A total of 322 entries (209 valid and 113 erroneous) were collected. There were common objects the users wanted to control with a 3D hand gesture interface, and common reasons for wanting to use one. Most often, the users wanted to control the lights, and the most common reason for wanting a 3D hand gesture interface was to overcome hand restrictions. The results offer valuable insights for researchers and designers, supporting effective and efficient studies of 3D hand gesture interfaces, and could also be used to create guidelines for such interfaces.

Gesture based Natural User Interface for e-Training

  • Lim, C.J.; Lee, Nam-Hee; Jeong, Yun-Guen; Heo, Seung-Il
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.577-583 / 2012
  • Objective: This paper describes the process and results of developing a gesture-recognition-based natural user interface (NUI) for a vehicle maintenance e-Training system. Background: E-Training refers to education and training that builds and improves the capabilities needed to perform tasks by using information and communication technologies (simulation, 3D virtual reality, and augmented reality), devices (PC, tablet, smartphone, and HMD), and environments (wired/wireless internet and cloud computing). Method: Palm movement captured by a depth camera is used as a pointing device, and finger movement, extracted with the OpenCV library, serves as the selection protocol. Results: The proposed NUI allows trainees to control objects, such as cars and engines, on a large screen through gesture recognition. In addition, it includes a learning environment for understanding the procedure for assembling or disassembling certain parts. Conclusion: Future work concerns gesture recognition for multiple trainees. Application: The results of this interface can be applied not only to e-Training systems but also to other systems such as digital signage, tangible games, and 3D content control.
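
As a rough illustration of the Method step (finger movement extracted with OpenCV as a selection protocol), the sketch below counts extended fingers from a binary hand mask using contour convexity defects, a common OpenCV heuristic. The thresholds, and the assumption that a segmented mask is already available from the depth camera, are ours rather than the paper's.

```python
import cv2
import numpy as np

def count_fingers(mask):
    """Estimate extended fingers from a binary hand mask (uint8, 0/255)
    using contour convexity defects."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)        # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    fingers = 0
    for s, e, f, depth in defects[:, 0]:
        start, end, far = hand[s][0], hand[e][0], hand[f][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(end - far)
        angle = np.arccos((b**2 + c**2 - a**2) / (2 * b * c + 1e-6))
        if angle < np.pi / 2 and depth > 10000:      # deep, narrow valley = finger gap
            fingers += 1
    return fingers + 1 if fingers else 0
```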

A Study on Gesture Interface through User Experience (사용자 경험을 통한 제스처 인터페이스에 관한 연구)

  • Yoon, Ki Tae; Cho, Eel Hea; Lee, Jooyoup
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.6 / pp.839-849 / 2017
  • Recently, the role of the kitchen has evolved from a space for mere survival to a space that reflects present-day life and culture. Along with these changes, the use of IoT technology is spreading, and new smart devices for the kitchen are being developed and adopted. The user experience of these smart devices is also becoming important. For natural interaction between a user and a computer, better interactions can be expected when they are based on context awareness. This paper examines a touchless Natural User Interface (NUI) built on the user interface (UI) of a smart device used in the kitchen. In this method, image processing is used to recognize the user's hand gestures through the camera attached to the device, and the recognized hand shape is applied to the interface. The gestures used in this study are proposed according to the user's context and situation, and five kinds of gestures are classified and used in the interface.
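
Applying a recognized hand shape "according to the user's context and situation", as the abstract puts it, amounts to a context-sensitive dispatch. The sketch below illustrates the idea with a hypothetical gesture vocabulary and kitchen contexts; none of the names come from the paper.

```python
# Hypothetical context-sensitive dispatch: the same gesture can mean
# different things depending on what the kitchen device is doing.
ACTION_MAP = {
    "recipe_view": {"swipe_left": "previous_step", "swipe_right": "next_step",
                    "open_palm": "pause_video", "fist": "play_video"},
    "timer":       {"open_palm": "stop_timer", "point": "add_minute"},
}

def dispatch(context: str, gesture: str) -> str:
    """Resolve a recognized gesture to an action for the current context."""
    return ACTION_MAP.get(context, {}).get(gesture, "ignore")

print(dispatch("recipe_view", "swipe_right"))  # -> next_step
print(dispatch("timer", "swipe_right"))        # -> ignore
```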

Implementation of Gesture Interface for Projected Surfaces

  • Park, Yong-Suk; Park, Se-Ho; Kim, Tae-Gon; Chung, Jong-Moon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.1 / pp.378-390 / 2015
  • Image projectors can turn any surface into a display, and integrating surface projection with a user interface transforms it into an interactive display with many possible applications. Hand gesture interfaces are often used with projector-camera systems, but hand detection through color image processing is affected by the surrounding environment: a lack of illumination and color detail strongly influences the detection process and lowers the recognition success rate. In addition, the projection system itself can interfere with detection because of the projected image. To overcome these problems, a gesture interface based on depth images is proposed for projected surfaces. In this paper, a depth camera is used for hand recognition and for effectively extracting the hand region from the scene. A hand detection and finger tracking method based on depth images is proposed, and based on it a touch interface for the projected surface is implemented and evaluated.
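
One simple way to realize depth-based touch detection on a projected surface is to compare each depth pixel against a background depth map of the surface captured once in advance. The sketch below follows that idea; the millimeter thresholds are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def detect_touch_mask(depth, surface_depth, near_mm=5, far_mm=25, min_area=40):
    """Mark pixels hovering within [near_mm, far_mm] of the projection surface.
    Both inputs are uint16 depth maps in millimeters; `surface_depth` is
    captured once with no hands in the scene."""
    height = surface_depth.astype(np.int32) - depth.astype(np.int32)
    mask = (height > near_mm) & (height < far_mm)
    # A full implementation would follow with connected-component analysis
    # and fingertip tracking across frames.
    return mask if mask.sum() >= min_area else np.zeros_like(mask)
```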

Virtual Block Game Interface based on the Hand Gesture Recognition (손 제스처 인식에 기반한 Virtual Block 게임 인터페이스)

  • Yoon, Min-Ho; Kim, Yoon-Jae; Kim, Tae-Young
    • Journal of Korea Game Society / v.17 no.6 / pp.113-120 / 2017
  • With the development of virtual reality technology, user-friendly hand gesture interfaces have recently been studied more intensively for natural interaction with virtual 3D objects. Most earlier studies on hand gesture interfaces use relatively simple hand gestures. In this paper, we suggest an intuitive hand gesture interface for interaction with 3D objects in virtual reality applications. For hand gesture recognition, we first preprocess the raw hand data and classify it with a binary decision tree. The classified data is re-sampled, converted to a chain code, and then built into hand feature data using histograms of the chain code. Finally, the input gesture is recognized from the feature data by MCSVM-based machine learning. To test the proposed hand gesture interface, we implemented a 'Virtual Block' game. Our experiments showed a recognition rate of about 99.2% for 16 kinds of command gestures, and the interface proved more intuitive and user-friendly than a conventional mouse interface.
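
The chain-code histogram feature described in the abstract can be sketched as below: an 8-direction Freeman chain code over a contour, normalized into an 8-bin histogram, with scikit-learn's multi-class SVC standing in for the paper's MCSVM. The re-sampling and decision-tree stages are omitted, and the feature details are our guess at a standard realization, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

# 8-direction Freeman chain-code offsets: E, NE, N, NW, W, SW, S, SE.
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_code_histogram(contour):
    """Encode a closed contour (list of (x, y) points) as a normalized
    8-bin histogram of chain-code directions."""
    hist = np.zeros(8)
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
        if step in DIRS:
            hist[DIRS.index(step)] += 1
    return hist / max(hist.sum(), 1)

# Training over labeled contours (hypothetical variables):
# X = np.stack([chain_code_histogram(c) for c in train_contours])
# clf = SVC(kernel="rbf").fit(X, train_labels)  # SVC is multi-class (one-vs-one) by default
```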

Recognition-Based Gesture Spotting for Video Game Interface (비디오 게임 인터페이스를 위한 인식 기반 제스처 분할)

  • Han, Eun-Jung; Kang, Hyun; Jung, Kee-Chul
    • Journal of Korea Multimedia Society / v.8 no.9 / pp.1177-1186 / 2005
  • In vision-based interfaces for video games, gestures are used as game commands in place of keyboard or mouse input. Such interfaces have to accommodate unintentional movements and continuous gestures to give the user a more natural experience. To address this problem, this paper proposes a novel gesture spotting method that combines spotting with recognition: it recognizes meaningful movements while concurrently separating unintentional movements from a given image sequence. We applied the method to recognizing upper-body gestures for interfacing between a video game (Quake II) and its user. Experimental results show that the proposed method spots gestures from continuous motion with an average accuracy of 93.36%, confirming its potential for gesture-based computer game interfaces.
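
Spotting combined with recognition is commonly realized by sliding a window over the motion stream and accepting a class only when its score clears a non-gesture threshold. The sketch below is a generic version of that idea, not the paper's exact model; `score_fn` is a hypothetical per-class scorer supplied by the caller.

```python
def spot_gestures(stream, score_fn, classes, threshold, win=30, hop=5):
    """Scan a feature stream and return (start, end, label) segments.
    Windows whose best class score falls below `threshold` are treated
    as unintentional movement and discarded."""
    hits = []
    for start in range(0, len(stream) - win + 1, hop):
        window = stream[start:start + win]
        scores = {c: score_fn(window, c) for c in classes}
        best = max(scores, key=scores.get)
        if scores[best] >= threshold:   # reject non-gesture windows
            hits.append((start, start + win, best))
    return hits
```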

Design and Implementation of e-Commerce User Authentication Interface using the Mouse Gesture (마우스 제스처를 이용한 전자상거래 사용자 인증 인터페이스)

  • 김은영; 정옥란; 조동섭
    • Journal of Korea Multimedia Society / v.6 no.3 / pp.469-480 / 2003
  • Accurate user authentication is emerging as one of the most important technologies in today's information society. Most authentication technologies identify users by their distinguishing characteristics. This paper builds an e-commerce shopping mall on a conventional e-commerce system and proposes and implements a user authentication interface that uses mouse gestures as a new form of authentication. The interface shows the recognition status directly on the screen by comparing the stored pattern values with the unique pattern values that the user enters. When users purchase products through the shopping mall and enter this additional signature information together with their payment information, security can be further increased. Experimental results show that the mouse gesture interface can provide additional security for an e-commerce server.
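
Comparing stored pattern values with entered ones could, for instance, quantize the mouse trajectory into eight direction symbols and accept the gesture when the edit distance to the stored pattern is small. The encoding and threshold below are our assumptions, not the paper's scheme.

```python
import math

def encode_gesture(points, step=8):
    """Quantize a mouse trajectory [(x, y), ...] into a string of
    8-direction symbols ('0'..'7'), one per sampled segment."""
    symbols = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if math.hypot(x1 - x0, y1 - y0) < step:
            continue                               # ignore small jitter
        angle = math.atan2(y1 - y0, x1 - x0)
        symbols.append(str(round(4 * angle / math.pi) % 8))
    return "".join(symbols)

def matches(stored: str, entered: str, max_dist=2) -> bool:
    """Accept if the edit distance between the two patterns is small."""
    m, n = len(stored), len(entered)
    d = [[i + j if i * j == 0 else 0 for j in range(n + 1)] for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (stored[i - 1] != entered[j - 1]))
    return d[m][n] <= max_dist
```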

A Framework for Designing Closed-loop Hand Gesture Interface Incorporating Compatibility between Human and Monocular Device

  • Lee, Hyun-Soo; Kim, Sang-Ho
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.533-540 / 2012
  • Objective: This paper targets a framework for hand-gesture-based interface design. Background: While modeling of contact-based interfaces has focused on ergonomic interface design and real-time technologies, implementing a contactless interface requires error-free classification as an essential precondition. These trends have led many studies to concentrate on the design and testing of feature vectors and learning models. Despite remarkable advances in this field, neglecting ergonomics and user cognition results in several problems, including behaviors that are uncomfortable for users. Method: To incorporate compatibility, considering users' comfortable behaviors and the device's classification ability simultaneously, classification-oriented gestures are extracted using the suggested human-hand model and closed-loop classification procedures. From the extracted gestures, compatibility-oriented gestures are obtained through ergonomic and cognitive experiments. The resulting hand gestures are then converted into a series of hand behaviors, called Handycon, which is mapped to several functions on a mobile device. Results: The Handycon model guarantees easy user behavior and supports fast understanding as well as a high classification rate. Conclusion and Application: The suggested framework contributes to developing a hand-gesture-based contactless interface model that considers compatibility between human and device, and the suggested procedures can be applied effectively to other contactless interface designs.
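
The Handycon idea, a series of hand behaviors mapped to device functions, can be illustrated with a toy lookup table; the behavior names and functions below are hypothetical, not taken from the paper.

```python
# Hypothetical mapping from a short sequence of hand behaviors
# (Handycon-style, per the abstract) to mobile-device functions.
HANDYCON_MAP = {
    ("fist", "open_palm"): "unlock_screen",
    ("point", "swipe_right"): "next_photo",
    ("open_palm", "fist"): "mute_call",
}

def resolve(behaviors: tuple) -> str:
    """Map a recognized behavior sequence to a device function."""
    return HANDYCON_MAP.get(behaviors, "no_op")

print(resolve(("fist", "open_palm")))  # -> unlock_screen
```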

Design of Hand Gestures for Smart Home Appliances based on a User Centered Approach (스마트홈 내 기기와의 상호작용을 위한 사용자 중심의 핸드 제스처 도출)

  • Choi, Eun-Jung; Kwon, Sung-Hyuk; Lee, Dong-Hun; Lee, Ho-Jin; Chung, Min-K.
    • Journal of Korean Institute of Industrial Engineers / v.38 no.3 / pp.182-190 / 2012
  • With the progress of both wired and wireless home networking technology, various smart home projects have been carried out around the world (Harper, 2003), and, at the same time, new approaches for interacting with smart home systems efficiently and effectively have been investigated. A gesture-based interface is one such approach. In particular, with advances in gesture recognition technology, a variety of studies on gesture interaction with the functions of IT devices have been conducted. However, few studies have suggested and investigated gestures for controlling smart home appliances. In this research, gestures for selected smart home appliances are derived using a user-centered approach. A total of thirty-eight functions were selected, and thirty participants generated gestures for each function. Following Nielsen (2004), Lee et al. (2010), and Kuhnel et al. (2011), the gesture with the highest frequency for each function (the top gesture) was suggested and investigated.
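
Selecting the top gesture per function from elicitation data is essentially a frequency count, as the short sketch below illustrates with made-up data.

```python
from collections import Counter

# Made-up elicitation data: (function, proposed_gesture) pairs from participants.
proposals = [("tv_on", "clap"), ("tv_on", "point"), ("tv_on", "clap"),
             ("volume_up", "raise_palm"), ("volume_up", "raise_palm"),
             ("volume_up", "thumbs_up")]

top_gesture = {}
for function in {f for f, _ in proposals}:
    counts = Counter(g for f, g in proposals if f == function)
    top_gesture[function] = counts.most_common(1)[0]   # (gesture, frequency)

print(top_gesture)  # e.g. {'tv_on': ('clap', 2), 'volume_up': ('raise_palm', 2)}
```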