• Title/Summary/Keyword: Gesture Training (제스처 트레이닝)


A Study on Design and Implementation of Gesture Proposal System (제스처 제안 시스템의 설계 및 구현에 관한 연구)

  • Moon, Sung-Hyun;Yoon, Tae-Hyun;Hwang, In-Sung;Kim, Seok-Kyoo;Park, Jun;Han, Sang-Yong
    • Journal of Korea Multimedia Society / v.14 no.10 / pp.1311-1322 / 2011
  • Gesture is applied in many applications, such as smartphones, tablet PCs, and web browsers, because it is a fast and simple way to invoke commands. When designing gestures for such applications, a designer must consider both the user and the system. Despite advances in gesture design tools, two difficulties remain: first, a designer must design every gesture manually, one by one, and second, a designer must repeatedly train each gesture. In this paper, we propose a gesture proposal system that automates gesture training and gesture generation to provide a simpler gesture design environment. With automated training, a designer no longer needs to train gestures manually. The proposed system further reduces the burden of gesture design by suggesting gestures with high recognition probability, selected by computing the Mahalanobis distance between generated and pre-existing gestures.
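The abstract names Mahalanobis distance as the criterion for ranking generated gestures against pre-existing ones. A minimal sketch of that idea follows, assuming each gesture is reduced to a fixed-length feature vector and each existing gesture class is modeled by its sample mean and covariance; the function names and the scoring rule (prefer candidates far from every existing class) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of feature vector x from a gesture class
    modeled by its mean and inverse covariance."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def rank_candidates(candidates, existing_classes):
    """Score each candidate gesture by its distance to the nearest
    pre-existing class; a larger minimum distance suggests the gesture
    is easier to distinguish, hence a better proposal (assumption)."""
    scored = []
    for x in candidates:
        nearest = min(
            mahalanobis(x, mean, np.linalg.inv(cov))
            for mean, cov in existing_classes
        )
        scored.append((nearest, x))
    return sorted(scored, key=lambda s: s[0], reverse=True)

# Toy example: two existing 2-D gesture classes and three candidates.
rng = np.random.default_rng(0)
classes = []
for center in ([0.0, 0.0], [5.0, 5.0]):
    samples = rng.normal(center, 1.0, size=(100, 2))
    classes.append((samples.mean(axis=0), np.cov(samples.T)))
candidates = [np.array([2.5, 2.5]), np.array([0.5, 0.2]), np.array([9.0, 1.0])]
for score, cand in rank_candidates(candidates, classes):
    print(cand, f"min distance = {score:.2f}")
```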

A Study on Hand Gesture Classification Deep learning method device based on RGBD Image (RGBD 이미지 기반 핸드제스처 분류 딥러닝 기법의 연구)

  • Park, Jong-Chan;Li, Yan;Shin, Byeong-Seok
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.1173-1175 / 2019
  • In robotics, there is ongoing research on taking recognized hand gestures as computer input and interpreting them as specific commands, for use in situations such as heavy noise or emergencies. Prior work has preprocessed hand gestures using RGB data or skeleton data, but classification accuracy suffers from real-world noise, or computing-power consumption becomes excessive. In this paper, we classify hand gestures with a Keras model trained on hand-gesture data represented as RGBD images. Raw hand-gesture data captured by a depth camera is reconstructed into images, which are then used for deep learning.
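The abstract describes training a Keras classifier on RGBD hand-gesture images. A minimal sketch under stated assumptions follows: the input is a 64×64 four-channel image (RGB plus depth stacked as a fourth channel) and the number of gesture classes is hypothetical; the network shape is a generic small CNN, not the paper's reported architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 10           # hypothetical number of gesture classes
INPUT_SHAPE = (64, 64, 4)  # RGB + depth stacked as a 4-channel image

model = keras.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in data: real input would be depth-camera frames reconstructed
# into 4-channel images, as the paper describes.
x = np.random.rand(32, *INPUT_SHAPE).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=32)
model.fit(x, y, epochs=1, batch_size=8)
```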

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services / v.16 no.6 / pp.11-21 / 2015
  • With growing interest in Human-Computer Interaction (HCI), research on HCI has been actively conducted, including work on the Natural User Interface/Natural User eXperience (NUI/NUX), which uses the user's gestures and voice. NUI/NUX requires recognition algorithms, such as gesture or voice recognition, but these algorithms are complex to implement and time-consuming to train, since they must go through preprocessing, normalization, and feature-extraction steps. Kinect, released by Microsoft as an NUI/NUX development tool, has attracted attention, and studies using it have been conducted. In a previous study, the authors implemented a highly intuitive hand-mouse interface based on the user's physical features; however, it suffered from unnatural mouse movement and low accuracy of the mouse functions. In this study, we design and implement a hand-mouse interface that introduces a new concept called the 'virtual monitor', extracted from the user's physical features through Kinect in real time. The virtual monitor is a virtual space that can be controlled by the hand mouse, and coordinates on the virtual monitor can be accurately mapped onto coordinates on the real monitor. The hand-mouse interface based on the virtual-monitor concept keeps the outstanding intuitiveness of the previous study while improving the accuracy of the mouse functions. We further increased accuracy by recognizing the user's unnecessary movements using a concentration indicator derived from the user's electroencephalogram (EEG) data. To evaluate intuitiveness and accuracy, we tested the interface on 50 people ranging in age from their teens to their fifties. In the intuitiveness experiment, 84% of subjects learned how to use it within one minute; in the accuracy experiment, the mouse functions achieved drag 80.4%, click 80%, and double-click 76.7%. With its intuitiveness and accuracy verified experimentally, this interface is expected to be a good example for controlling systems by hand in the future.
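The core mechanism in this abstract is mapping hand coordinates on the virtual monitor onto the real monitor's coordinates. A minimal sketch of such a linear mapping follows; the rectangle bounds, units, and function names are all hypothetical placeholders (the paper sizes the virtual plane from the user's physical features, which is not reproduced here).

```python
def to_screen(hand_x, hand_y, virt, screen_w=1920, screen_h=1080):
    """Map a hand position inside the virtual-monitor rectangle onto
    real-screen pixel coordinates by linear interpolation.
    `virt` = (left, top, right, bottom) of the virtual plane in the
    sensor's coordinate space (hypothetical layout)."""
    left, top, right, bottom = virt
    # Normalize into [0, 1] and clamp so the cursor stays on screen.
    u = min(max((hand_x - left) / (right - left), 0.0), 1.0)
    v = min(max((hand_y - top) / (bottom - top), 0.0), 1.0)
    return int(u * (screen_w - 1)), int(v * (screen_h - 1))

# Example: a virtual monitor placed in front of the user
# (all numbers illustrative, in meters).
virtual_rect = (-0.25, 0.9, 0.25, 1.4)  # left, top, right, bottom
print(to_screen(0.0, 1.15, virtual_rect))  # hand at center -> mid-screen
```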