• Title/Summary/Keyword: Gesture based interface

Design of dataglove based multimodal interface for 3D object manipulation in virtual environment (3 차원 오브젝트 직접조작을 위한 데이터 글러브 기반의 멀티모달 인터페이스 설계)

  • Lim, Mi-Jung;Park, Peom
    • 한국HCI학회:학술대회논문집 / 2006.02a / pp.1011-1018 / 2006
  • A multimodal interface is a cognition-based technology that interprets and encodes information about natural human behaviors such as gestures, gaze, hand movements, behavioral patterns, speech, and physical location. In this paper, we design and implement a 3D-object-based multimodal interface that uses gesture, voice, and touch. The service domain is the smart home: through direct manipulation of 3D objects, users can remotely monitor and control objects in the home. Because multimodal interaction requires several modalities to be recognized and processed in parallel, the combination of modalities, their encoding methods, and the input/output formats become key issues. Based on an analysis of the characteristics of each modality and of human cognitive structure, this study presents an input combination scheme for the gesture, voice, and touch modalities and designs an efficient multimodal 3D object interaction prototype (a minimal sketch of such input combination follows this entry).

  • PDF
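
The entry describes recognizing several modalities in parallel and combining them into one command. As a rough illustration of time-windowed input combination, here is a minimal Python sketch; the event names, payload format, and window length are hypothetical assumptions, not taken from the paper.

```python
import time
from dataclasses import dataclass

@dataclass
class ModalityEvent:
    modality: str   # "gesture", "voice", or "touch" (illustrative labels)
    payload: str    # e.g. "point:lamp" or "turn_on"
    timestamp: float

class MultimodalCombiner:
    """Combine events from parallel modalities that arrive within a short window."""
    def __init__(self, window_s: float = 1.5):
        self.window_s = window_s
        self.pending: list[ModalityEvent] = []

    def feed(self, event: ModalityEvent) -> str | None:
        # Drop events older than the combination window.
        self.pending = [e for e in self.pending
                        if event.timestamp - e.timestamp <= self.window_s]
        self.pending.append(event)
        # Fire a command once a deictic gesture and a spoken verb co-occur.
        gestures = [e for e in self.pending if e.modality == "gesture"]
        voices = [e for e in self.pending if e.modality == "voice"]
        if gestures and voices:
            target = gestures[-1].payload.split(":", 1)[1]
            command = f"{voices[-1].payload}({target})"
            self.pending.clear()
            return command
        return None

combiner = MultimodalCombiner()
combiner.feed(ModalityEvent("gesture", "point:lamp", time.time()))
print(combiner.feed(ModalityEvent("voice", "turn_on", time.time())))  # turn_on(lamp)
```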

A Gesture Interface based on Hologram and Haptics Environments for Interactive and Immersive Experiences (상호작용과 몰입 향상을 위한 홀로그램과 햅틱 환경 기반의 동작 인터페이스)

  • Pyun, Hae-Gul;An, Haeng-A;Yuk, Seongmin;Park, Jinho
    • Journal of Korea Game Society / v.15 no.1 / pp.27-34 / 2015
  • This paper proposes a user interface that enhances immersiveness and usability by combining a hologram and a haptic device with the common Leap Motion sensor. While the Leap Motion delivers the physical motion of the user's hand to control a virtual environment, it is limited to driving virtual hands on a screen and interacts with the virtual environment in only one direction. In our system, a hologram is coupled with the Leap Motion to improve immersion by placing the real and virtual hands in the same location. Moreover, we provide a prototype of touch interaction by designing a haptic device that conveys touch sensations from the virtual environment to the user's hand (a minimal sketch of this loop follows this entry).
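
As a loose sketch of the interaction loop described above (co-located real and virtual hands plus touch feedback), the Python stub below assumes a generic tracker and haptic driver; `read_palm_position` and `vibrate` are hypothetical placeholders, since the paper's actual Leap Motion, hologram, and haptic APIs are not given here.

```python
import math

# Hypothetical stand-ins for the tracker and haptic driver.
def read_palm_position() -> tuple[float, float, float]:
    return (0.0, 50.0, 10.0)  # mm, in tracker coordinates

def vibrate(intensity: float) -> None:
    print(f"haptic pulse: {intensity:.2f}")

VIRTUAL_OBJECT = (0.0, 60.0, 10.0)  # object center in the shared coordinate frame
CONTACT_RADIUS = 15.0               # mm, illustrative

def update():
    """One frame of the loop: co-locate the hands, then give touch feedback."""
    palm = read_palm_position()
    # Because the hologram and tracker share one frame, the virtual hand is
    # drawn at `palm`, directly over the real hand.
    dist = math.dist(palm, VIRTUAL_OBJECT)
    if dist < CONTACT_RADIUS:
        # Stronger feedback the deeper the hand penetrates the object.
        vibrate(1.0 - dist / CONTACT_RADIUS)

update()
```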

Augmented Reality Interface Using Efficient Hand Gesture Recognition (효율적인 손동작 인식을 이용한 증강현실 인터페이스)

  • Choi, Jun-Yeong;Park, Han-Hoon;Park, Jong-Il
    • 한국HCI학회:학술대회논문집 / 2008.02a / pp.91-96 / 2008
  • Effective vision-based interfaces for augmented reality have been studied steadily, but most are constrained by the environment or require special equipment or complex models. Markers, for example, guarantee ease of implementation and accuracy, but because a marker generally has an appearance that contrasts with its surroundings, it can feel intrusive to users and, above all, is hard to apply to complex interactions. Hand gestures, on the other hand, allow natural and varied interactions, but color-based hand recognition degrades sharply in cluttered environments, and 3D-model-based recognition demands heavy computation; these problems have limited the application of previously proposed methods to augmented reality systems. This paper proposes a hand-gesture-based interface whose recognition algorithm is made more efficient, providing natural interaction with little computation even in complex environments. In the proposed method, the user wears a color band on the wrist, and color information is used to easily detect the minimal region containing the hand, which improves the gesture recognition rate (a rough sketch of this detection step follows this entry). The interface senses the natural movement of the hand so that virtual objects can be controlled according to hand shape and motion; for example, a virtual object can be displayed at a position the hand indicates, then grasped and manipulated in various ways. The usefulness of the proposed interface was verified through experiments and user evaluations in a variety of environments.

  • PDF
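
A minimal OpenCV sketch of the wrist-band detection step described above; the HSV thresholds (here, for a blue band) and the assumption that the hand lies above the band in the image are illustrative choices, not the paper's published values.

```python
import cv2
import numpy as np

# Illustrative HSV bounds for a blue wrist band.
BAND_LO = np.array([100, 120, 70])
BAND_HI = np.array([130, 255, 255])

def hand_roi(frame_bgr: np.ndarray) -> tuple[int, int, int, int] | None:
    """Locate the colored wrist band and return a box likely to contain the hand."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, BAND_LO, BAND_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    band = max(contours, key=cv2.contourArea)  # largest blob = the wrist band
    x, y, w, h = cv2.boundingRect(band)
    # Expand upward from the band, assuming the hand lies above the wrist.
    return (max(x - w, 0), max(y - 3 * h, 0), 3 * w, 4 * h)
```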

Robot User Control System using Hand Gesture Recognizer (수신호 인식기를 이용한 로봇 사용자 제어 시스템)

  • Shon, Su-Won;Beh, Joung-Hoon;Yang, Cheol-Jong;Wang, Han;Ko, Han-Seok
    • Journal of Institute of Control, Robotics and Systems / v.17 no.4 / pp.368-374 / 2011
  • This paper proposes a human interface for robot control using a hidden Markov model (HMM) based hand signal recognizer. The command-receiving humanoid robot sends webcam images to a client computer, which extracts hand motion descriptors of the commanding human. Upon feature acquisition, the hand signal recognizer carries out the recognition procedure, and the result is sent back to the robot for responsive actions. System performance is evaluated by measuring recognition of a '48 hand signal set' created randomly from a fundamental hand motion set. For isolated motion recognition, the '48 hand signal set' shows a 97.07% recognition rate while the 'baseline hand signal set' shows 92.4%, validating that the proposed hand signal recognizer is indeed highly discriminative. For connected motions from the '48 hand signal set', it shows a 97.37% recognition rate. The experiments demonstrate that the proposed system is promising for real-world human-robot interface applications (a minimal HMM classification sketch follows this entry).
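
As a minimal sketch of HMM-based isolated hand signal recognition, the following uses the `hmmlearn` library (our choice of tooling, not necessarily the paper's) with one Gaussian HMM per gesture class; the feature dimensionality, state count, and toy data are illustrative.

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def train_gesture_hmm(sequences: list[np.ndarray], n_states: int = 4) -> hmm.GaussianHMM:
    """Fit one HMM per gesture class on (T, D) descriptor sequences."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def classify(sequence: np.ndarray, models: dict[str, hmm.GaussianHMM]) -> str:
    """Isolated recognition: pick the class whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(sequence))

# Toy demo: random 2-D motion descriptors stand in for real hand features.
rng = np.random.default_rng(0)
wave = [rng.normal(0, 1, (20, 2)) for _ in range(5)]
push = [rng.normal(3, 1, (20, 2)) for _ in range(5)]
models = {"wave": train_gesture_hmm(wave), "push": train_gesture_hmm(push)}
print(classify(rng.normal(3, 1, (20, 2)), models))  # expected: push
```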

A Deep Learning-based Hand Gesture Recognition Robust to External Environments (외부 환경에 강인한 딥러닝 기반 손 제스처 인식)

  • Oh, Dong-Han;Lee, Byeong-Hee;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.31-39 / 2018
  • Recently, there have been active studies on providing user-friendly interfaces in virtual reality environments by recognizing user hand gestures with deep learning. However, most studies use separate sensors to obtain hand information or require pre-processing for efficient learning, and they fail to take into account changes in the external environment, such as lighting changes or partial occlusion of the hand. This paper proposes a deep-learning-based hand gesture recognition method that is robust to external environments, with no pre-processing of the RGB images obtained from an ordinary webcam. We modify the VGGNet and GoogLeNet structures and compare their performance. The modified VGGNet and GoogLeNet achieved recognition rates of 93.88% and 93.75%, respectively, on data containing dim, partially obscured, or partially out-of-frame hand images. In terms of memory and speed, GoogLeNet used about 3 times less memory than VGGNet and processed frames about 10 times faster. The results can be computed in real time and used as a hand gesture interface in areas such as games, education, and medical services in virtual reality (a compact CNN classifier sketch follows this entry).
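
The PyTorch sketch below shows the general pattern of classifying raw webcam RGB frames with a CNN and no hand segmentation step; it is a deliberately small stand-in, not the paper's modified VGGNet or GoogLeNet configurations, which are not reproduced here.

```python
import torch
import torch.nn as nn

class HandGestureNet(nn.Module):
    """Small CNN classifying gestures directly from RGB frames."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling keeps the head small
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Raw webcam RGB frames go in directly; no pre-processing beyond normalization.
net = HandGestureNet()
frames = torch.rand(8, 3, 224, 224)          # batch of normalized RGB frames
logits = net(frames)
print(logits.argmax(dim=1))                   # predicted gesture class per frame
```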

Emotional Interface Technologies for Service Robot (서비스 로봇을 위한 감성인터페이스 기술)

  • Yang, Hyun-Seung;Seo, Yong-Ho;Jeong, Il-Woong;Han, Tae-Woo;Rho, Dong-Hyun
    • The Journal of Korea Robotics Society / v.1 no.1 / pp.58-65 / 2006
  • An emotional interface is essential for a robot to provide proper service to the user. In this research, we developed emotional components for a service robot: a neural-network-based facial expression recognizer, emotion expression technologies based on 3D graphical facial expression and joint movements that take the user's reaction into account, and behavior selection technology for emotion expression. We used our humanoid robots AMI and AMIET as test-beds of the emotional interface and studied emotional interaction between a service robot and a user by integrating the developed technologies. Emotional interface technology enhances the friendliness of interaction with a service robot and increases the diversity and added value of its services; it can thereby drive market growth and contribute to the popularization of robots.

  • PDF

Augmented Reality Authoring Tool with Marker & Gesture Interactive Features (마커 및 제스처 상호작용이 가능한 증강현실 저작도구)

  • Shim, Jinwook;Kong, Minje;Kim, Hayoung;Chae, Seungho;Jeong, Kyungho;Seo, Jonghoon;Han, Tack-Don
    • Journal of Korea Multimedia Society / v.16 no.6 / pp.720-734 / 2013
  • In this paper, we suggest an augmented reality authoring tool with which users can easily create augmented reality content using hand gesture and marker-based interaction. Previous authoring tools focused on augmenting a virtual object, and interaction with the resulting content relied on markers or sensors. We address this limited interaction by combining marker-based interaction with gesture interaction using a depth-sensing camera, the Kinect. With the suggested system, users can easily develop simple marker-based augmented reality content through the interface, and rather than offering only fixed content, the system lets users actively interact with it. Two interaction methods are provided: a marker-based method using two markers and one based on marker occlusion (sketched after this entry). In addition, by recognizing and tracking the user's bare hand, the system provides gesture interactions that zoom in, zoom out, move, and rotate an object. A heuristic evaluation of the authoring tool and a usability comparison of the marker and gesture interactions both yielded positive results.
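
As a guess at how marker-occlusion interaction can work, the sketch below treats a marker that stops being detected, while a reference marker stays visible, as a debounced button press; the paper's exact logic is not published here, so the states and threshold are assumptions.

```python
class OcclusionButton:
    """Interpret covering a marker with the hand as pressing a virtual button."""
    def __init__(self, hold_frames: int = 5):
        self.hold_frames = hold_frames  # debounce: require sustained occlusion
        self.missing = 0

    def update(self, button_visible: bool, reference_visible: bool) -> bool:
        if not reference_visible:
            # The whole scene was lost (camera moved); not a deliberate press.
            self.missing = 0
            return False
        self.missing = 0 if button_visible else self.missing + 1
        return self.missing == self.hold_frames  # fire once per occlusion

btn = OcclusionButton()
for visible in [True, True, False, False, False, False, False]:
    if btn.update(visible, reference_visible=True):
        print("button pressed")
```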

Performance Improvement of Facial Gesture-based User Interface Using MediaPipe Face Mesh (MediaPipe Face Mesh를 이용한 얼굴 제스처 기반의 사용자 인터페이스의 성능 개선)

  • Jinwang Mok;Noyoon Kwak
    • Journal of Internet of Things and Convergence / v.9 no.6 / pp.125-134 / 2023
  • This paper proposes a method to improve the performance of a facial gesture-based user interface from previous research, which recognizes facial gestures from the 3D coordinates of seven landmarks selected from the MediaPipe Face Mesh model, generates corresponding user events, and executes the associated commands. The proposed method applies adaptive moving-average processing to the cursor position to stabilize the cursor by suppressing micro-tremor (a minimal smoothing sketch follows this entry), and improves performance by blocking momentary mismatches in eye open/close state when both eyes are opened or closed simultaneously. In a usability evaluation of the proposed facial gesture interface, the average facial gesture recognition rate increased to 98.7%, compared to 95.8% in the previous research.
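
A minimal sketch of adaptive moving-average cursor smoothing: the adaptation rule below (a smoothing factor that grows with cursor speed, in the spirit of the One Euro filter, so the cursor is steady when still and responsive when moving) is a common heuristic, not necessarily the paper's exact formulation.

```python
class AdaptiveCursorFilter:
    """Smooth a cursor position with a speed-dependent moving-average factor."""
    def __init__(self, alpha_min: float = 0.1, alpha_max: float = 0.9,
                 speed_scale: float = 30.0):
        self.alpha_min, self.alpha_max = alpha_min, alpha_max
        self.speed_scale = speed_scale  # pixels of motion mapping to full alpha
        self.pos: tuple[float, float] | None = None

    def update(self, x: float, y: float) -> tuple[float, float]:
        if self.pos is None:
            self.pos = (x, y)
            return self.pos
        dx, dy = x - self.pos[0], y - self.pos[1]
        speed = (dx * dx + dy * dy) ** 0.5
        t = min(speed / self.speed_scale, 1.0)      # 0 = still, 1 = fast motion
        alpha = self.alpha_min + t * (self.alpha_max - self.alpha_min)
        self.pos = (self.pos[0] + alpha * dx, self.pos[1] + alpha * dy)
        return self.pos

f = AdaptiveCursorFilter()
for raw in [(100, 100), (101, 99), (100, 101), (160, 140)]:  # jitter, then a jump
    print(f.update(*raw))  # jitter is damped, the jump is followed quickly
```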

Hand Gesture based Manipulation of Meeting Data in Teleconference (핸드제스처를 이용한 원격미팅 자료 인터페이스)

  • Song, Je-Hoon;Choi, Ki-Ho;Kim, Jong-Won;Lee, Yong-Gu
    • Korean Journal of Computational Design and Engineering / v.12 no.2 / pp.126-136 / 2007
  • Teleconferences have been used in business sectors to reduce traveling costs. Traditionally, specialized telephones that enabled multiparty conversations were used; with the introduction of high-speed networks, high-definition video now adds realism in the presence of counterparts who may be thousands of miles away. This paper presents a new technology that adds even more realism by telecommunicating with hand gestures. The technology is part of a teleconference system named SMS (Smart Meeting Space), in which a person can use hand gestures to manipulate meeting data in the form of text, audio, video, or 3D shapes. For detecting hand gestures, a machine learning algorithm called the SVM (Support Vector Machine) is used (a minimal classification sketch follows this entry). For the prototype system, a 3D interaction environment was implemented with OpenGL™, where a 3D human skull model can be grasped and moved with six degrees of freedom during a remote conversation between distant persons.
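
A minimal scikit-learn sketch of SVM-based gesture classification; the 2-D toy descriptors and the "grasp"/"release" labels are stand-ins for the paper's real hand motion features, which are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC  # scikit-learn stands in for the paper's SVM implementation

# Toy hand-gesture descriptors: two separable clusters play "release" vs "grasp".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)  # 0 = release, 1 = grasp

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)

frame_descriptor = np.array([[1.9, 2.1]])  # descriptor from the current frame
gesture = clf.predict(frame_descriptor)[0]
print("grasp" if gesture == 1 else "release")  # drives the 6-DOF manipulation
```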

Accelerometer-based Mobile Game Using the Gestures and Postures (제스처와 자세를 이용한 가속도센서 기반 모바일 게임)

  • Baek, Jong-Hun;Jang, Ik-Jin;Yun, Byoung-Ju
    • Proceedings of the IEEK Conference / 2006.06a / pp.379-380 / 2006
  • As a result of the growth of sensor-enabled mobile devices such as PDAs, cellular phones, and other computing devices, users can now use diverse digital content anywhere and anytime. However, the interfaces of mobile applications are often unnatural due to limited resources and miniaturized input/output; users feel this problem especially in applications such as mobile games. Novel interaction forms have therefore been developed to complement the poor user interface of the mobile device and to increase interest in mobile games. In this paper, we describe a demonstration of gesture and posture input supported by an accelerometer (a tilt-sensing sketch follows this entry). The application example we created is an AM-Fishing game on a mobile device that employs the accelerometer as the main interaction modality. The demos show the usability of gesture and posture interaction.

  • PDF
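
An illustrative sketch of distinguishing postures (static tilt) from gestures (dynamic motion) with an accelerometer: when the device is held still, gravity dominates the reading, so pitch and roll follow from the axis components. The axis conventions, thresholds, and game actions below are assumptions, not taken from the paper.

```python
import math

def posture(ax: float, ay: float, az: float) -> str:
    """Static tilt classification from a near-stationary accelerometer reading."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    if pitch > 30:
        return "tilt-back"      # e.g. reel in, in a fishing game
    if pitch < -30:
        return "tilt-forward"   # e.g. cast the line
    if abs(roll) > 30:
        return "tilt-side"
    return "level"

def is_flick(ax: float, ay: float, az: float, g: float = 9.81) -> bool:
    """A gesture, by contrast, is a short dynamic pattern: here, a flick is a
    spike in total acceleration well above 1 g."""
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - g) > 1.5 * g

print(posture(0.0, 0.0, 9.81))   # level
print(is_flick(5.0, 2.0, 25.0))  # True: a cast flick
```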