• Title/Summary/Keyword: Gesture Interface


A Real Time Low-Cost Hand Gesture Control System for Interaction with Mechanical Device (기계 장치와의 상호작용을 위한 실시간 저비용 손동작 제어 시스템)

  • Hwang, Tae-Hoon;Kim, Jin-Heon
    • Journal of IKEEE
    • /
    • v.23 no.4
    • /
    • pp.1423-1429
    • /
    • 2019
  • Recently, systems that support efficient interaction, known as human machine interfaces (HMI), have become a hot topic. In this paper, we propose a new real-time, low-cost hand gesture control system as a method of in-vehicle interaction. Because detecting hand regions with an RGB camera requires a large amount of computation, depth information was acquired with a time-of-flight (TOF) camera to reduce computation time. In addition, Fourier descriptors were used to shrink the learning model: since a Fourier descriptor uses only a small number of points from the whole image, the learning model can be miniaturized. To evaluate the performance of the proposed technique, we compared processing speeds on a desktop PC and a Raspberry Pi 2. Experimental results show that the performance difference between the small embedded board and the desktop is not significant, and the gesture recognition experiment confirmed a recognition rate of 95.16%.
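
As an illustration of the Fourier-descriptor idea in this abstract, the sketch below computes a compact, approximately translation-, scale-, and rotation-invariant shape feature from a hand boundary. The contour source (e.g. cv2.findContours on the segmented depth image) and the number of coefficients are assumptions, not details from the paper.

```python
import numpy as np

def fourier_descriptor(contour_xy, num_coeffs=16):
    """Compact shape feature from a closed hand boundary.

    contour_xy : (N, 2) array of boundary points, e.g. a contour extracted
                 from the segmented depth image with cv2.findContours.
    Returns num_coeffs magnitudes, approximately invariant to translation,
    scale, rotation, and the starting point of the boundary.
    """
    # Treat each boundary point (x, y) as the complex number x + jy
    z = contour_xy[:, 0].astype(np.float64) + 1j * contour_xy[:, 1].astype(np.float64)
    coeffs = np.fft.fft(z)
    # Drop the DC term (removes translation); keep only low-frequency terms
    coeffs = coeffs[1:num_coeffs + 1]
    # Normalize by the first harmonic (removes scale); taking magnitudes
    # discards rotation and start-point phase
    return np.abs(coeffs) / (np.abs(coeffs[0]) + 1e-12)
```

Because only a handful of coefficients describe the whole hand shape, the downstream classifier (the "learning model" in the abstract) can remain very small, which is what makes deployment on a board like the Raspberry Pi 2 plausible.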

Learning Gesture Command for User-Dependent Interface (사용자 의존적 인터페이스를 위한 제스처 명령어 습득)

  • 양선옥
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1997.11a
    • /
    • pp.215-219
    • /
    • 1997
  • For hand gestures used as commands to feel more familiar to users, users must be able to define the gestures in whatever form they want. This paper introduces an intelligent user interface that uses the user's hand gestures, captured through a camera, as commands. The intelligent user interface includes a gesture command learning module so that the user can freely define the set of hand gestures used as gesture commands.

A Study on User Interface for Quiz Game Contents using Gesture Recognition (제스처인식을 이용한 퀴즈게임 콘텐츠의 사용자 인터페이스에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Digital Contents Society
    • /
    • v.13 no.1
    • /
    • pp.91-99
    • /
    • 2012
  • In this paper we introduce a quiz application program that digitizes the analogue quiz game. We digitize the quiz components that are performed manually in a conventional quiz game, such as quiz proceedings, participant recognition, problem presentation, recognition of the first volunteer to raise a hand, answer judgement, score addition, and winner decision. For automation, we obtained depth images from the Kinect camera, which has recently come into the spotlight, so that we could locate the quiz participants and recognize user-friendly predefined gestures. By analyzing the depth distribution, we detected and segmented the upper body parts and located the hand areas. We also extracted hand features and designed a decision function that classifies the hand pose as palm, fist, or other, so that a participant can select the desired choice among the presented answers. The implemented quiz application program was tested in real time and showed very satisfactory gesture recognition results.
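
The abstract does not specify the palm/fist/other decision function; the sketch below shows one common way to make such a decision from an OpenCV hand contour using convex-hull solidity. The cut-off values are illustrative assumptions, not the paper's.

```python
import cv2

def classify_hand_pose(contour):
    """Rough palm / fist / other decision from a hand contour.

    A closed fist is nearly convex (solidity close to 1), while an open palm
    with spread fingers has deep concavities between the fingers and hence a
    lower solidity.  Thresholds here are illustrative only.
    """
    area = cv2.contourArea(contour)
    hull_area = cv2.contourArea(cv2.convexHull(contour))
    if hull_area <= 0:
        return "else"
    solidity = area / hull_area
    if solidity > 0.9:
        return "fist"
    if solidity > 0.6:
        return "palm"
    return "else"
```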

Video event control system by recognition of depth touch (깊이 터치를 통한 영상 이벤트 제어 시스템)

  • Lee, Dong-Seok;Kwon, Soon-Kak
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.21 no.1
    • /
    • pp.35-42
    • /
    • 2016
  • Various video playback events such as stop, play, capture, and zoom-in/out are available on small displays such as smartphones. However, as the display size increases, the cost of touch recognition also increases, so providing touch events is not practical. In this paper, we propose a video event control system that recognizes touches inexpensively from depth information and then provides a variety of events, such as toggle and pinch-in/out, through single or multi-touch. The proposed method finds the touch location and touch path from the depth information of a depth camera and determines the touch gesture type. This touch interface algorithm is implemented on a small single-board system and can control video events by sending the gesture information over a UART link. Simulation results show that the proposed depth touch method can control various events on a large screen.
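
A minimal sketch of the two steps described here, assuming a calibrated background depth map of the screen surface and a pyserial UART link; the 15 mm threshold and the byte framing are assumptions for illustration.

```python
import numpy as np
import serial  # pyserial, assumed for the UART link mentioned in the abstract

TOUCH_MM = 15  # a pixel counts as touching when within 15 mm of the surface (assumed)

def detect_touch(depth_frame, background_depth):
    """Return the centroid of the region pressing against the surface, or None."""
    diff = background_depth.astype(np.int32) - depth_frame.astype(np.int32)
    touching = (diff > 0) & (diff < TOUCH_MM)
    ys, xs = np.nonzero(touching)
    if xs.size == 0:
        return None
    # Single-touch centroid; multi-touch would label connected components instead
    return int(xs.mean()), int(ys.mean())

def send_touch_event(port, gesture_id, x, y):
    """Send a gesture code and coordinates to the video player over UART."""
    frame = bytes([0xAA, gesture_id, x >> 8, x & 0xFF, y >> 8, y & 0xFF])
    with serial.Serial(port, baudrate=115200, timeout=1) as uart:
        uart.write(frame)
```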

Performance Improvement of Facial Gesture-based User Interface Using MediaPipe Face Mesh (MediaPipe Face Mesh를 이용한 얼굴 제스처 기반의 사용자 인터페이스의 성능 개선)

  • Jinwang Mok;Noyoon Kwak
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.6
    • /
    • pp.125-134
    • /
    • 2023
  • The purpose of this paper is to propose a method that improves the performance of our previous research, which recognizes facial gestures from the 3D coordinates of seven landmarks selected from the MediaPipe Face Mesh model, generates the corresponding user events, and executes the corresponding commands. The proposed method applies adaptive moving average processing to the cursor positions to stabilize the cursor by alleviating micro-tremor, and improves performance by suppressing momentary open/closed mismatches between the two eyes when both eyes are opened or closed simultaneously. A usability evaluation of the proposed facial gesture interface confirmed that the average facial gesture recognition rate increased to 98.7%, compared to 95.8% in the previous research.
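
The paper's exact adaptive moving average is not given in the abstract; the sketch below shows the general idea under assumed parameters: the smoothing weight grows with the size of the cursor movement, so small tremor is averaged away while deliberate moves pass through with little lag.

```python
class AdaptiveCursorSmoother:
    """Exponential smoothing whose weight adapts to movement magnitude."""

    def __init__(self, min_alpha=0.1, max_alpha=0.9, jitter_px=2.0, fast_px=40.0):
        # Pixel thresholds are illustrative, not values from the paper
        self.min_alpha, self.max_alpha = min_alpha, max_alpha
        self.jitter_px, self.fast_px = jitter_px, fast_px
        self.pos = None

    def update(self, x, y):
        if self.pos is None:
            self.pos = (float(x), float(y))
            return self.pos
        dx, dy = x - self.pos[0], y - self.pos[1]
        dist = (dx * dx + dy * dy) ** 0.5
        # Map movement size to a weight between min_alpha (heavy smoothing)
        # and max_alpha (almost raw input)
        t = min(max((dist - self.jitter_px) / (self.fast_px - self.jitter_px), 0.0), 1.0)
        alpha = self.min_alpha + t * (self.max_alpha - self.min_alpha)
        self.pos = (self.pos[0] + alpha * dx, self.pos[1] + alpha * dy)
        return self.pos
```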

A Gesture Interface based on Hologram and Haptics Environments for Interactive and Immersive Experiences (상호작용과 몰입 향상을 위한 홀로그램과 햅틱 환경 기반의 동작 인터페이스)

  • Pyun, Hae-Gul;An, Haeng-A;Yuk, Seongmin;Park, Jinho
    • Journal of Korea Game Society
    • /
    • v.15 no.1
    • /
    • pp.27-34
    • /
    • 2015
  • This paper proposes a user interface that enhances immersiveness and usability by combining a hologram and a haptic device with the common Leap Motion controller. While the Leap Motion delivers the physical motion of the user's hand to control a virtual environment, it is limited to manipulating virtual hands on a screen and interacting with the virtual environment in only one direction. In our system, a hologram is coupled with the Leap Motion to improve immersiveness by placing the real and virtual hands in the same location. Moreover, we provide a prototype of tactile interaction by designing a haptic device that conveys the sense of touch in the virtual environment to the user's hand.

Augmented Reality Interface Using Efficient Hand Gesture Recognition (효율적인 손동작 인식을 이용한 증강현실 인터페이스)

  • Choi, Jun-Yeong;Park, Han-Hoon;Park, Jong-Il
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02a
    • /
    • pp.91-96
    • /
    • 2008
  • Effective vision-based interfaces for augmented reality (AR) have been steadily developed, but most are constrained by the environment or require special equipment or complex models. For example, markers guarantee ease of implementation and accuracy, but because markers generally have an appearance that contrasts with the environment, they can be off-putting to users and, above all, are difficult to apply to complex interactions. On the other hand, hand gestures allow natural and diverse interactions, but color-based hand gesture recognition degrades severely in cluttered environments, and 3D model-based hand gesture recognition requires a large amount of computation. For these reasons, the methods proposed so far have limited applicability to augmented reality systems. In this paper, we propose an interface based on hand gestures and improve the efficiency of the gesture recognition algorithm so that natural interaction can be provided in cluttered environments with little computation. In the proposed method, the user wears a colored band on the wrist, and the minimal region containing the hand is easily detected using color information, which improves the hand gesture recognition rate. The proposed interface detects the natural movement of the hand and allows virtual objects to be controlled naturally according to the hand's shape and motion. For example, a virtual object can be displayed at a location indicated by the hand, and the user can grab the virtual object and perform various manipulations. The usefulness of the proposed interface was verified through experiments in various environments and through user evaluation.

Natural User Interface with Self-righting Feature using Gravity (중력에 기반한 자연스러운 사용자 인터페이스)

  • Kim, Seung-Chan;Lim, Jong-Gwan;Bianchi, Andrea;Koo, Seong-Yong;Kwon, Dong-Soo
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.384-389
    • /
    • 2009
  • In general, gestures can be utilized in the human-computer interaction field. Although acceleration information is most widely used to detect user intention, it is hard to use when the gesture velocity is constant or varies only slightly, due to the inherent characteristics of the accelerometer. In this paper, a natural interaction method that does not require excessive gesture acceleration is described. By taking advantage of gravity, the system can generate various types of signals. In addition, problems such as initialization and drift error can be solved using the system's restorative self-righting force.

Robot User Control System using Hand Gesture Recognizer (수신호 인식기를 이용한 로봇 사용자 제어 시스템)

  • Shon, Su-Won;Beh, Joung-Hoon;Yang, Cheol-Jong;Wang, Han;Ko, Han-Seok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.4
    • /
    • pp.368-374
    • /
    • 2011
  • This paper proposes a human interface for robot control using a hidden Markov model (HMM) based hand signal recognizer. The command-receiving humanoid robot sends webcam images to a client computer, which extracts hand motion descriptors of the commanding human. Once the features are acquired, the hand signal recognizer carries out the recognition procedure, and the result is sent back to the robot for responsive actions. System performance is evaluated by measuring recognition of a '48 hand signal set' created randomly from a fundamental hand motion set. For isolated motion recognition, the '48 hand signal set' shows a 97.07% recognition rate while the 'baseline hand signal set' shows 92.4%, which validates that the proposed hand signal recognizer is highly discriminative. For connected motions of the '48 hand signal set', it shows a 97.37% recognition rate. These experiments demonstrate that the proposed system is promising for real-world human-robot interface applications.
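
The abstract does not give the recognizer's internals; the sketch below shows the standard per-gesture HMM classification scheme it implies, with hmmlearn as an assumed library choice: one Gaussian HMM is trained per hand signal, and an observed motion-descriptor sequence is assigned to the model with the highest log-likelihood.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed library choice

def train_gesture_models(train_data, n_states=5):
    """train_data: {gesture_name: list of (T_i, D) feature sequences}."""
    models = {}
    for name, seqs in train_data.items():
        X = np.concatenate(seqs)            # stack sequences for hmmlearn
        lengths = [len(s) for s in seqs]    # sequence boundaries within X
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[name] = model
    return models

def classify_gesture(models, sequence):
    """Pick the hand signal whose HMM gives the sequence the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(sequence))
```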

Kinect-based Motion Recognition Model for the 3D Contents Control (3D 콘텐츠 제어를 위한 키넥트 기반의 동작 인식 모델)

  • Choi, Han Suk
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.1
    • /
    • pp.24-29
    • /
    • 2014
  • This paper proposes a Kinect-based human motion recognition model for 3D content control, which tracks body gestures through the infrared camera of the Kinect. The proposed human motion model computes the variation in distance of body movements from the shoulder to the left and right hands, wrists, arms, and elbows. The motions are classified into movement commands such as left, right, up, down, enlargement, downsizing, and selection. The proposed Kinect-based human motion recognition model is very natural and low-cost compared with contact-type gesture recognition technologies and device-based gesture technologies that require expensive hardware.
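
A minimal sketch of the kind of distance-based rule described above, using assumed skeleton joints and thresholds; the paper's actual joints, limits, and two-hand logic may differ.

```python
def classify_motion(right_hand, left_hand, shoulder_center, threshold=0.25):
    """Map 3D joint positions (x, y, z in metres) to a coarse command.

    Distances are measured relative to the shoulder centre, loosely following
    the abstract; the 0.25 m threshold and the two-hand enlargement rule are
    assumptions for illustration.
    """
    dx = right_hand[0] - shoulder_center[0]
    dy = right_hand[1] - shoulder_center[1]
    hand_gap = abs(right_hand[0] - left_hand[0])

    if hand_gap > 2 * threshold:
        return "enlargement"      # hands spread far apart
    if hand_gap < 0.5 * threshold and left_hand[1] > shoulder_center[1]:
        return "downsizing"       # hands brought together above the shoulders
    if dx > threshold:
        return "right"
    if dx < -threshold:
        return "left"
    if dy > threshold:
        return "up"
    if dy < -threshold:
        return "down"
    return "selection"            # hand held near the shoulder centre
```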