Title/Summary/Keyword: Interaction Gesture


Road Traffic Control Gesture Recognition using Depth Images

  • Le, Quoc Khanh; Pham, Chinh Huu; Le, Thanh Ha
    • IEIE Transactions on Smart Processing and Computing / v.1 no.1 / pp.1-7 / 2012
  • This paper presents a system that automatically recognizes the road traffic control gestures of police officers. In this approach, the control gestures of traffic police officers are captured in the form of depth images. A human skeleton is then constructed using a kinematic model. The feature vector describing a traffic control gesture is built from the relative angles among the joints of the constructed human skeleton. We use Support Vector Machines (SVMs) to perform the gesture recognition. Experiments show that our proposed method is robust, efficient, and suitable for real-time application. We also present a testbed system based on the trained SVMs for real-time traffic gesture recognition.
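
A minimal sketch of the pipeline described above (skeleton joints from depth images -> relative joint angles -> SVM classification). The joint names, the choice of joint triples, and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical joint triples: each angle is measured at the middle joint.
    ANGLE_TRIPLES = [
        ("shoulder_l", "elbow_l", "wrist_l"),
        ("shoulder_r", "elbow_r", "wrist_r"),
        ("neck", "shoulder_l", "elbow_l"),
        ("neck", "shoulder_r", "elbow_r"),
    ]

    def joint_angle(a, b, c):
        """Angle (radians) at joint b formed by the segments b->a and b->c."""
        v1 = np.asarray(a) - np.asarray(b)
        v2 = np.asarray(c) - np.asarray(b)
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def skeleton_to_features(joints):
        """joints: dict mapping joint name -> 3D position from the depth-image skeleton."""
        return np.array([joint_angle(joints[a], joints[b], joints[c])
                         for a, b, c in ANGLE_TRIPLES])

    def train_gesture_svm(feature_vectors, labels):
        """feature_vectors: list of angle vectors; labels: traffic-gesture classes."""
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
        clf.fit(np.vstack(feature_vectors), labels)
        return clf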

Automatic Gesture Recognition for Human-Machine Interaction: An Overview

  • Nataliia, Konkina
    • International Journal of Computer Science & Network Security / v.22 no.1 / pp.129-138 / 2022
  • With the increasing reliance on computing systems in our everyday lives, there is a constant need to improve the ways users can interact with such systems in a more natural, effective, and convenient way. In the initial computing revolution, interaction between humans and machines was limited, and the machines were not necessarily meant to be intelligent. This created the need for systems that can automatically identify and interpret our actions. Automatic gesture recognition is one of the popular methods by which users can control systems with their gestures; it covers various kinds of tracking, including the whole body, hands, head, and face. We also touch upon a different line of work, including the Brain-Computer Interface (BCI) and Electromyography (EMG), as potential additions to the gesture recognition regime. In this work, we present an overview of several applications of automated gesture recognition systems and a brief look at the popular methods employed.

Interacting with Touchless Gestures: Taxonomy and Requirements

  • Kim, Huhn
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.475-481 / 2012
  • Objective: The aim of this study is to create a taxonomy for classifying diverse touchless gestures and to establish the design requirements that should be considered when determining suitable gestures in gesture-based interaction design. Background: The applicability of touchless gestures is increasing as the relevant technologies advance. However, before touchless gestures are widely applied to various devices or systems, an understanding of the nature of human gestures and their standardization is a prerequisite. Method: In this study, diverse gesture types were collected from the literature and, based on these, a new taxonomy for classifying touchless gestures was proposed. Many gesture-based interaction design cases and studies were then analyzed. Results: The proposed taxonomy consists of two dimensions: shape (deictic, manipulative, semantic, or descriptive) and motion (static or dynamic). The case analysis based on the taxonomy showed that manipulative and dynamic gestures are widely applied. Conclusion: The four core requirements for valuable touchless gestures were intuitiveness, learnability, convenience, and discriminability. Application: The gesture taxonomy can be used to generate alternatives for applicable touchless gestures, and the four design requirements can serve as criteria for evaluating those alternatives.
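
The two-dimensional taxonomy (shape x motion) lends itself to a small data structure. The sketch below encodes the four shape classes and two motion classes named in the abstract; the concrete example gestures are illustrative assumptions, not taken from the paper's case analysis.

    from dataclasses import dataclass
    from enum import Enum

    class Shape(Enum):
        DEICTIC = "deictic"            # pointing at a target
        MANIPULATIVE = "manipulative"  # directly manipulating an object
        SEMANTIC = "semantic"          # conventional meaning (e.g. thumbs-up)
        DESCRIPTIVE = "descriptive"    # describing a property (e.g. a size or path)

    class Motion(Enum):
        STATIC = "static"
        DYNAMIC = "dynamic"

    @dataclass(frozen=True)
    class TouchlessGesture:
        name: str
        shape: Shape
        motion: Motion

    # Illustrative placements in the taxonomy.
    catalogue = [
        TouchlessGesture("point-at-icon", Shape.DEICTIC, Motion.STATIC),
        TouchlessGesture("grab-and-drag", Shape.MANIPULATIVE, Motion.DYNAMIC),
        TouchlessGesture("thumbs-up", Shape.SEMANTIC, Motion.STATIC),
        TouchlessGesture("draw-a-circle", Shape.DESCRIPTIVE, Motion.DYNAMIC),
    ]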

The Effect of Gesture-Command Pairing Condition on Learnability when Interacting with TV

  • Jo, Chun-Ik; Lim, Ji-Hyoun; Park, Jun
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.525-531 / 2012
  • Objective: The aim of this study is to investigate the learnability of gesture-command pairs when people use gestures to control a device. Background: In a vision-based gesture recognition system, selecting the gesture-command pairing is critical for learnability and usability. Subjective preference and agreement scores from a previous study (Lim et al., 2012) were used to group four gesture-command pairings. To quantify learnability, two learning models, an average time model and a marginal time model, were used. Method: Two sets of eight gestures, sixteen gestures in total, were listed by agreement score and preference data. Fourteen participants, divided into two groups, memorized one set of gesture-command pairs and performed the gestures. For each given command, the time to recall the paired gesture was collected. Results: The average recall time for the initial trials differed by preference and agreement score, as did the learning rate R derived from the two learning models. Conclusion: Preference and agreement score influenced the learning of gesture-command pairs. Application: This study can be applied to any device that adopts a gesture interaction system for device control.
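
The abstract names two learning models (an average time model and a marginal time model) without giving their equations. A common way to quantify learnability from trial times is a power-law learning curve, T_n = T_1 * n^(-r); the sketch below fits that form to recall-time data as an assumed stand-in for the paper's models, with made-up numbers.

    import numpy as np

    def fit_power_law_learning_curve(recall_times):
        """Fit T_n = T_1 * n**(-r) by linear regression in log-log space.

        recall_times: recall time for trials 1..N of one gesture-command pair.
        Returns (T_1, r); a larger r means faster learning.
        """
        n = np.arange(1, len(recall_times) + 1)
        slope, intercept = np.polyfit(np.log(n), np.log(recall_times), deg=1)
        return float(np.exp(intercept)), float(-slope)

    # Example with hypothetical recall times (seconds) over five trials.
    T1, r = fit_power_law_learning_curve([4.1, 3.0, 2.4, 2.1, 1.9])
    print(f"initial time ~ {T1:.2f}s, learning rate r ~ {r:.2f}")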

A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young; Bae, Ki-Tae
    • International Journal of Contents / v.7 no.1 / pp.29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface that uses spatial context information. The proposed gesture interface recognizes a system action (e.g., a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed for a smart environment scenario in which a user can interact with digital information embedded in physical objects using gestures.
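
The abstract states that a system action is inferred by integrating gesture evidence with the two spatial-context ontologies (gesture volume and gesture target) in a probabilistic framework. A naive-Bayes-style fusion is one simple way to realize such integration; the sketch below assumes that form and conditionally independent evidence, which may differ from the authors' actual model.

    def infer_action(p_action, p_gesture, p_volume, p_target, gesture, volume, target):
        """Posterior over system actions, assuming gesture, gesture volume and gesture
        target are conditionally independent given the action (naive-Bayes fusion).

        p_action: {action: prior}; p_gesture/p_volume/p_target: {action: {value: prob}}.
        """
        scores = {}
        for action, prior in p_action.items():
            scores[action] = (prior
                              * p_gesture[action].get(gesture, 1e-6)
                              * p_volume[action].get(volume, 1e-6)
                              * p_target[action].get(target, 1e-6))
        total = sum(scores.values()) or 1.0
        return {action: s / total for action, s in scores.items()}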

An Outlook for Interaction Experience in Next-generation Television

  • Kim, Sung-Woo
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.557-565 / 2012
  • Objective: This paper focuses on the new trend of applying NUI (natural user interface) techniques such as gesture interaction to television and investigates the design improvements needed in their application. The intention is to find a better design direction for NUI in the television context, which will help make the new features and behavioral changes of next-generation television practically usable and meaningful user experience elements. Background: Traditional television is rapidly evolving into next-generation television under the influence of "smartness" from the mobile domain. A number of new features and behavioral changes arising from this evolution are on their way to being characterized as the new experience elements of next-generation television. Method: A series of expert reviews by television UX professionals, based on the AHP (Analytic Hierarchy Process), was conducted to assess the "relative appropriateness" of applying gesture interaction to a number of selected television user experience scenarios. Conclusion: It is critical not to apply new interaction techniques such as gesture to television indiscriminately. Doing so may be effective in demonstrating new technology but generally results in a poor user experience; consistent validation of practical appropriateness in the real context is imperative. Application: This research will be helpful in applying gesture interaction to next-generation television to bring about an optimal user experience.
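
The method rests on the AHP, whose core computation is standard: priority weights come from the principal eigenvector of a pairwise-comparison matrix, with a consistency ratio as a sanity check. The sketch below shows that computation; the scenario names and judgment values are illustrative assumptions, not the paper's data.

    import numpy as np

    # Saaty's random consistency index for matrices of size 1..9.
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

    def ahp_weights(pairwise):
        """Priority weights and consistency ratio from an AHP pairwise-comparison matrix."""
        A = np.asarray(pairwise, dtype=float)
        n = A.shape[0]
        eigvals, eigvecs = np.linalg.eig(A)
        k = int(np.argmax(eigvals.real))
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()
        lam_max = eigvals[k].real
        ci = (lam_max - n) / (n - 1)
        cr = ci / RI[n] if RI[n] else 0.0   # CR < 0.1 is conventionally acceptable
        return w, cr

    # Hypothetical judgments: appropriateness of gesture control for three TV scenarios
    # (channel zapping vs. text entry vs. content browsing).
    A = [[1.0, 3.0, 5.0],
         [1/3, 1.0, 2.0],
         [1/5, 0.5, 1.0]]
    weights, cr = ahp_weights(A)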

Design of Gaming Interaction Control using Gesture Recognition and VR Control in FPS Game (FPS 게임에서 제스처 인식과 VR 컨트롤러를 이용한 게임 상호 작용 제어 설계)

  • Lee, Yong-Hwan; Ahn, Hyochang
    • Journal of the Semiconductor & Display Technology / v.18 no.4 / pp.116-119 / 2019
  • User interface/experience and realistic game manipulation play an important role in virtual reality first-person-shooter (FPS) games. This paper presents an intuitive, hands-free gaming interaction scheme for FPS games based on the user's gesture recognition and a VR controller. We focus on the conventional interface of VR FPS interaction and design player interaction with a head-mounted display and two motion controllers: a Leap Motion to handle low-level physics interaction and a VIVE tracker to control the movement of the player's joints in the VR world. The FPS prototype system shows that the designed interface helps players enjoy the immersive FPS and gives them a new gaming experience.

The Effect of Visual Feedback on One-hand Gesture Performance in Vision-based Gesture Recognition System

  • Kim, Jun-Ho; Lim, Ji-Hyoun; Moon, Sung-Hyun
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.551-556 / 2012
  • Objective: This study presents the effect of visual feedback on one-hand gesture performance in a vision-based gesture recognition system when people use gestures to control a screen device remotely. Background: Gesture interaction is receiving growing attention because it uses advanced sensor technology and allows users natural interaction through their own body motion. In generating motion, visual feedback has been considered a critical factor affecting speed and accuracy. Method: Three types of visual feedback (arrow, star, and animation) were selected and 20 gestures were listed. Twelve participants performed each of the 20 gestures while given the three types of visual feedback in turn. Results: Participants made longer hand traces and took more time to make a gesture when given the arrow-shaped feedback than the star-shaped feedback. The animation-type feedback was the most preferred. Conclusion: The type of visual feedback had a statistically significant effect on the length of the hand trace, the elapsed time, and the speed of motion in performing a gesture. Application: This study can be applied to any device that needs visual feedback for device control. Large feedback produces a shorter motion trace, less elapsed time, and faster motion than small feedback when people perform gestures to control a device, so large visual feedback is recommended for situations requiring fast actions, whereas smaller visual feedback is recommended for situations requiring elaborate actions.
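
The dependent measures reported (length of the hand trace, elapsed time, and speed of motion) can be computed directly from sampled hand positions; the sampling format assumed below is for illustration only.

    import numpy as np

    def gesture_metrics(positions, timestamps):
        """positions: (N, 3) array of hand positions; timestamps: (N,) array in seconds.

        Returns (trace_length, elapsed_time, mean_speed) for one gesture performance.
        """
        positions = np.asarray(positions, dtype=float)
        timestamps = np.asarray(timestamps, dtype=float)
        trace_length = float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())
        elapsed_time = float(timestamps[-1] - timestamps[0])
        mean_speed = trace_length / elapsed_time if elapsed_time > 0 else 0.0
        return trace_length, elapsed_time, mean_speed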

Improvement of Gesture Recognition using 2-stage HMM (2단계 히든마코프 모델을 이용한 제스쳐의 성능향상 연구)

  • Jung, Hwon-Jae; Park, Hyeonjun; Kim, Donghan
    • Journal of Institute of Control, Robotics and Systems / v.21 no.11 / pp.1034-1037 / 2015
  • In recent years in the field of robotics, various methods have been developed to create a closer relationship between people and robots. These methods include speech, vision, and biometrics recognition as well as gesture-based interaction, and these recognition technologies are used in various wearable devices, smartphones, and other electronic devices for convenience. Among these technologies, gesture recognition is the most commonly used and the most appropriate for wearable devices. Gesture recognition can be classified as contact or non-contact gesture recognition. This paper proposes contact gesture recognition with IMU and EMG sensors by applying the hidden Markov model (HMM) twice. Several simple behaviors form main gestures through the first-stage HMM, which follows the standard hidden Markov model process well known in pattern recognition. The sequence of main gestures produced by the first-stage HMM then forms higher-order gestures through the second-stage HMM. In this way, more natural and intelligent gestures can be implemented from simple gestures. This two-stage process can play a larger role in gesture-recognition-based UX for many wearable and smart devices.
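
A minimal sketch of the two-stage idea: per-primitive Gaussian HMMs over IMU/EMG feature windows pick the most likely primitive gesture (stage one), and the resulting symbol sequence is scored against per-gesture discrete HMMs with a Viterbi pass (stage two). The use of hmmlearn, the windowing, and all model parameters are assumptions, not the authors' setup.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    # Stage 1: one Gaussian HMM per primitive gesture over IMU/EMG feature windows.
    def train_primitive_models(windows_by_primitive, n_states=4):
        models = {}
        for name, windows in windows_by_primitive.items():   # windows: list of (T, D) arrays
            X, lengths = np.vstack(windows), [len(w) for w in windows]
            m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            m.fit(X, lengths)
            models[name] = m
        return models

    def recognize_primitive(models, window):
        """Pick the primitive whose HMM gives the highest log-likelihood."""
        return max(models, key=lambda name: models[name].score(window))

    # Stage 2: score the primitive-symbol sequence against per-gesture discrete HMMs.
    def viterbi_log_prob(obs, start_p, trans_p, emit_p):
        """Log probability of the best state path for a discrete observation sequence."""
        logp = np.log(start_p) + np.log(emit_p[:, obs[0]])
        for o in obs[1:]:
            logp = np.max(logp[:, None] + np.log(trans_p), axis=0) + np.log(emit_p[:, o])
        return float(np.max(logp))

    def recognize_high_level(gesture_hmms, primitive_sequence):
        """gesture_hmms: {gesture: (start_p, trans_p, emit_p)}; primitives encoded as ints."""
        return max(gesture_hmms,
                   key=lambda g: viterbi_log_prob(primitive_sequence, *gesture_hmms[g]))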

Virtual Block Game Interface based on the Hand Gesture Recognition (손 제스처 인식에 기반한 Virtual Block 게임 인터페이스)

  • Yoon, Min-Ho; Kim, Yoon-Jae; Kim, Tae-Young
    • Journal of Korea Game Society / v.17 no.6 / pp.113-120 / 2017
  • With the development of virtual reality technology, user-friendly hand gesture interfaces have been studied increasingly in recent years for natural interaction with virtual 3D objects. Most earlier studies on hand-gesture interfaces use relatively simple hand gestures. In this paper, we propose an intuitive hand gesture interface for interaction with 3D objects in virtual reality applications. For hand gesture recognition, we first preprocess the raw hand data and classify them with a binary decision tree. The classified data are re-sampled, converted to a chain code, and then built into hand feature data using histograms of the chain code. Finally, the input gesture is recognized by MCSVM-based machine learning from the feature data. To test the proposed hand gesture interface, we implemented a 'Virtual Block' game. Our experiments showed a recognition rate of about 99.2% for 16 kinds of command gestures, and the interface was found to be more intuitive and user-friendly than a conventional mouse interface.
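
The feature stage described above (re-sampled hand data -> chain code -> chain-code histograms) and the multi-class SVM can be sketched as follows; the 8-direction Freeman chain code over 2D trajectory points and the scikit-learn usage are assumptions made for illustration.

    import numpy as np
    from sklearn.svm import SVC

    def chain_code(points):
        """8-direction Freeman chain code of a re-sampled 2D point sequence."""
        diffs = np.diff(np.asarray(points, dtype=float), axis=0)
        angles = np.arctan2(diffs[:, 1], diffs[:, 0])        # in [-pi, pi]
        return np.round(angles / (np.pi / 4)).astype(int) % 8

    def chain_code_histogram(points):
        """Normalized 8-bin histogram of chain-code directions (the feature vector)."""
        hist = np.bincount(chain_code(points), minlength=8).astype(float)
        return hist / (hist.sum() + 1e-9)

    def train_gesture_classifier(trajectories, labels):
        """Multi-class SVM over chain-code histograms (SVC is one-vs-one by default)."""
        X = np.vstack([chain_code_histogram(t) for t in trajectories])
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")
        clf.fit(X, labels)
        return clf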