• Title/Summary/Keyword: Gesture Computing (제스처 컴퓨팅)


An Efficient Hand Gesture Recognition Method using Two-Stream 3D Convolutional Neural Network Structure (이중흐름 3차원 합성곱 신경망 구조를 이용한 효율적인 손 제스처 인식 방법)

  • Choi, Hyeon-Jong;Noh, Dae-Cheol;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.6 / pp.66-74 / 2018
  • Recently, there have been active studies on hand gesture recognition to increase immersion and provide user-friendly interaction in virtual reality environments. However, most studies require specialized sensors or equipment, or show low recognition rates. This paper proposes a deep-learning-based hand gesture recognition method that recognizes static and dynamic hand gestures using only a camera, with no separate sensors or equipment. First, a series of hand gesture input images is converted into high-frequency images; then the RGB images and their high-frequency counterparts are each learned through a DenseNet-based three-dimensional convolutional neural network. Experiments on 6 static and 9 dynamic hand gestures showed an average recognition rate of 92.6%, a 4.6% improvement over the previous DenseNet. A 3D defense game was implemented to verify the results, and gesture recognition took an average of 30 ms, showing that the method can serve as a real-time user interface for virtual reality applications.
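
As a rough illustration of the two-stream idea described above, the sketch below builds two small 3D-CNN streams, one for RGB clips and one for their high-frequency counterparts, and fuses their class scores. It is a minimal PyTorch sketch, not the paper's DenseNet architecture; all layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Stream3D(nn.Module):
    """Small stand-in for one 3D-CNN stream (the paper uses a 3D DenseNet)."""
    def __init__(self, num_classes=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),      # global pooling over (T, H, W)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                 # x: (batch, 3, frames, H, W)
        return self.classifier(self.features(x).flatten(1))

class TwoStreamGestureNet(nn.Module):
    def __init__(self, num_classes=15):   # 6 static + 9 dynamic gestures
        super().__init__()
        self.rgb_stream = Stream3D(num_classes)
        self.hf_stream = Stream3D(num_classes)

    def forward(self, rgb_clip, hf_clip):
        # Simple late fusion: average the per-stream class scores.
        return (self.rgb_stream(rgb_clip) + self.hf_stream(hf_clip)) / 2

model = TwoStreamGestureNet()
rgb = torch.randn(1, 3, 16, 112, 112)     # a 16-frame RGB clip
hf = torch.randn(1, 3, 16, 112, 112)      # e.g., clip minus its Gaussian blur
print(model(rgb, hf).shape)               # torch.Size([1, 15])
```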

Design of Multimodal User Interface using Speech and Gesture Recognition for Wearable Watch Platform (착용형 단말에서의 음성 인식과 제스처 인식을 융합한 멀티 모달 사용자 인터페이스 설계)

  • Seong, Ki Eun;Park, Yu Jin;Kang, Soon Ju
    • KIISE Transactions on Computing Practices / v.21 no.6 / pp.418-423 / 2015
  • As technology develops at exceptional speed, the functions of wearable devices become more diverse and complicated, and many users find some of the functions difficult to use. The main aim of this paper is to provide the user with an interface that is friendlier and easier to use. Speech recognition is easy to use and makes it easy to issue a command. However, speech recognition is problematic on a wearable device with limited computing power and battery capacity: the device cannot predict when the user will give a spoken command, so the recognizer would have to stay active at all times, and given the battery constraint, keeping it listening indefinitely is impractical. To solve this problem, we use gesture recognition. This paper describes how to combine speech and gesture recognition in a multimodal interface to increase the user's comfort.
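
The power-saving scheme the abstract describes, keeping only a cheap gesture detector always on and waking the costly speech recognizer on demand, can be sketched as a small control loop. The `gesture_detector` and `speech_recognizer` objects below are hypothetical placeholders, not an API from the paper.

```python
import time

class MultimodalController:
    def __init__(self, gesture_detector, speech_recognizer, listen_window=5.0):
        self.gestures = gesture_detector    # low-power, always on
        self.speech = speech_recognizer     # high-power, started on demand
        self.listen_window = listen_window  # seconds to listen after waking

    def run_once(self):
        gesture = self.gestures.poll()      # e.g., from accelerometer data
        if gesture == "wake":
            self.speech.start()             # power up the speech recognizer
            deadline = time.monotonic() + self.listen_window
            while time.monotonic() < deadline:
                command = self.speech.poll()
                if command:
                    return command          # hand the command to the app
            self.speech.stop()              # timed out: power down again
        return None
```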

Hand Gesture Interface Using Mobile Camera Devices (모바일 카메라 기기를 이용한 손 제스처 인터페이스)

  • Lee, Chan-Su;Chun, Sung-Yong;Sohn, Myoung-Gyu;Lee, Sang-Heon
    • Journal of KIISE:Computing Practices and Letters / v.16 no.5 / pp.621-625 / 2010
  • This paper presents a hand motion tracking method for a hand gesture interface using the camera in mobile devices such as smartphones and PDAs. When the camera moves with the user's hand gesture, global optical flow is generated, so robust hand movement estimation is possible by taking the dominant optical flow from a histogram analysis of motion direction. A continuous hand gesture is segmented into unit gestures by motion-state estimation using the motion phase, which is determined by the velocity and acceleration of the estimated hand motion. Feature vectors are extracted during movement states, and hand gestures are recognized at the end state of each gesture. A support vector machine (SVM), a k-nearest neighbor classifier, and a normal Bayes classifier are used for classification. The SVM showed an 82% recognition rate for 14 hand gestures.
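
The dominant-flow step can be sketched with OpenCV's dense Farneback flow: a magnitude-weighted histogram of flow directions peaks at the global motion caused by the hand. This is a minimal sketch assuming OpenCV; the bin count and flow parameters are illustrative, not the paper's.

```python
import cv2
import numpy as np

def dominant_flow_direction(prev_gray, curr_gray, bins=8):
    # Dense optical flow between consecutive grayscale frames
    # (positional args: pyr_scale, levels, winsize, iterations,
    #  poly_n, poly_sigma, flags).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Magnitude-weighted histogram of motion directions; the peak bin
    # is the dominant global motion caused by the hand gesture.
    hist, edges = np.histogram(ang.ravel(), bins=bins,
                               range=(0, 2 * np.pi), weights=mag.ravel())
    peak = int(np.argmax(hist))
    return (edges[peak] + edges[peak + 1]) / 2   # direction in radians
```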

Real-time hand tracking and recognition based on structured template matching (구조적 템플렛 매칭에 기반을 둔 실시간 손 추적 및 인식)

  • Kim, Song-Gook;Bae, Ki-Tae;Lee, Chil-Woo
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.1037-1043 / 2006
  • This paper proposes a system that lets users easily control applications on a large screen with hand gestures, the most intuitive HCI means in a ubiquitous-computing office environment. The hand gestures needed to control the system are predefined using hand-region information, changes in the position of the hand's center point, and finger shapes. First, to obtain the hand region efficiently, consecutive images are captured with an infrared camera. From the captured image frames, the centroid of the hand and the fingertips are detected using a structured template matching method. In the recognition stage, the Euclidean distance between both hands and finger-shape information are compared against the predefined gestures. The vision-based hand gesture control system proposed in this paper can offer many advantages for understanding human-computer interaction. Experimental results demonstrate the effectiveness of the proposed method.
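
The paper detects the hand centroid and fingertips by structured template matching on infrared images; as a hedged stand-in, the sketch below recovers a centroid and one fingertip candidate from a binary hand mask with plain OpenCV contour analysis.

```python
import cv2
import numpy as np

def hand_centroid_and_fingertip(mask):
    """mask: binary uint8 image with the hand region set to 255."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)     # largest blob = hand
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # hand centroid
    # Fingertip candidate: the contour point farthest from the centroid.
    pts = hand.reshape(-1, 2).astype(np.float64)
    d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    tip = pts[int(np.argmax(d))]
    return (cx, cy), (tip[0], tip[1])
```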

A Study on a Deep Learning Method for Hand Gesture Classification Based on RGBD Images (RGBD 이미지 기반 핸드제스처 분류 딥러닝 기법의 연구)

  • Park, Jong-Chan;Li, Yan;Shin, Byeong-Seok
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.1173-1175 / 2019
  • In robotics, researchers have studied recognizing different hand gestures as computer input and interpreting them as specific commands, for example in noisy or emergency situations. Various studies have used RGB data or skeleton data when pre-processing hand gestures, but classification accuracy remains low because of real-world noise, or the required computing power is excessive. In this paper, we classify incoming hand gestures with a Keras model trained on hand gestures represented as RGBD images. Raw hand gesture data captured with a depth camera were reconstructed into images for deep learning.
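
Consistent with the abstract, a minimal Keras model for classifying 4-channel RGBD gesture images might look like the sketch below. The input size and class count are illustrative assumptions, not values from the paper.

```python
from tensorflow.keras import layers, models

def build_rgbd_classifier(num_classes=10, size=64):
    # 4 input channels: RGB plus the depth map from the depth camera.
    return models.Sequential([
        layers.Input(shape=(size, size, 4)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_rgbd_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(rgbd_images, labels, epochs=10)   # arrays shaped (N, 64, 64, 4)
```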

Interaction Analysis Between Visitors and Gesture-based Exhibits in Science Centers from Embodied Cognition Perspectives (체화된 인지의 관점에서 과학관 제스처 기반 전시물의 관람객 상호작용 분석)

  • So, Hyo-Jeong;Lee, Ji Hyang;Oh, Seung Ja
    • Korea Science and Art Forum / v.25 / pp.227-240 / 2016
  • This study examines how visitors in science centers interact with gesture-based exhibits from embodied cognition perspectives. Four gesture-based exhibits in two science centers were selected for this study. In addition, we interviewed a total of 14 visitor groups to examine how they perceived the properties of the gesture-based exhibits, and interviewed four experts to further examine the benefits and limitations of current gesture-based exhibits in science centers. The results indicate that the total interaction time between visitors and gesture-based exhibits was not high overall, implying little immersive engagement by visitors. Both experts and visitors noted that current gesture-based exhibits tend to rely on a novelty effect, with little evident connection between gestures and learning. Drawing on these findings, the study suggests the following design considerations for gesture-based exhibits. First, to increase visitors' initial engagement, the purpose and usability of gesture-based exhibits should be considered from the initial design phase. Second, to promote meaningful interaction, it is important to sustain visitors' initial engagement; to that end, gesture-based exhibits should be designed to promote intellectual curiosity beyond simple interaction. Third, from embodied cognition perspectives, exhibit design should reflect how the mappings between specific gestures and metaphors affect learning processes. Lastly, the study suggests that future gesture-based exhibits should be designed to promote interaction among visitors and adaptive inquiry.

A Deep Learning-based Hand Gesture Recognition Robust to External Environments (외부 환경에 강인한 딥러닝 기반 손 제스처 인식)

  • Oh, Dong-Han;Lee, Byeong-Hee;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.31-39 / 2018
  • Recently, there have been active studies on providing a user-friendly interface in virtual reality environments by recognizing user hand gestures with deep learning. However, most studies use separate sensors to obtain hand information or require pre-processing for efficient learning, and they fail to account for changes in the external environment, such as lighting changes or partial occlusion of the hand. This paper proposes a deep-learning-based hand gesture recognition method that is robust to external environments and needs no pre-processing of the RGB images obtained from an ordinary webcam. We improve the VGGNet and GoogLeNet structures and compare the performance of each. On data containing dim, partially obscured, or partially out-of-view hand images, the improved VGGNet and GoogLeNet structures showed recognition rates of 93.88% and 93.75%, respectively. In terms of memory and speed, the GoogLeNet used about 3 times less memory than the VGGNet and processed frames about 10 times faster. The results can be computed in real time and used as a hand gesture interface in areas such as games, education, and medical services in virtual reality environments.
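
The memory/speed comparison reported above can be reproduced in spirit with a small benchmark like the one below. It uses stock torchvision VGG16 and GoogLeNet rather than the paper's modified structures, so the absolute numbers will differ.

```python
import time
import torch
from torchvision import models

def benchmark(model, runs=20):
    model.eval()
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        model(x)                                  # warm-up pass
        t0 = time.perf_counter()
        for _ in range(runs):
            model(x)
    params = sum(p.numel() for p in model.parameters())
    return params, (time.perf_counter() - t0) / runs

for name, net in [("VGG16", models.vgg16()),
                  ("GoogLeNet", models.googlenet(init_weights=True))]:
    p, t = benchmark(net)
    print(f"{name}: {p / 1e6:.1f}M params, {t * 1000:.1f} ms/frame")
```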

An Implementation of Dynamic Gesture Recognizer Based on WPS and Data Glove (WPS와 장갑 장치 기반의 동적 제스처 인식기의 구현)

  • Kim, Jung-Hyun;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB / v.13B no.5 s.108 / pp.561-568 / 2006
  • A WPS (Wearable Personal Station) for the next-generation PC can be defined as a core terminal of ubiquitous computing that includes information processing and network functions and overcomes spatial limitations in acquiring new information. As a way to acquire significant dynamic gesture data from haptic devices, a traditional desktop-PC-based gesture recognizer using a wired communication module has several restrictions, such as spatial constraints, the complexity of the transmission media (cable elements), limits on motion, and inconvenience of use. Accordingly, to overcome these problems, this paper implements a hand gesture recognition system for the Post PC (an embedded ubiquitous environment using a Bluetooth module and WPS) using a fuzzy algorithm and a neural network. We also propose the most efficient and reasonable hand gesture recognition interface for the Post PC through evaluation and analysis of the performance of each recognition system. The proposed system consists of three modules: 1) a gesture input module that converts dynamic hand motion into input data, 2) a Relational Database Management System (RDBMS) module that segments significant gestures from the input data, and 3) two different recognition modules, a fuzzy max-min module and a neural network module, that recognize significant gestures among continuous, dynamic gestures. Experimental results show an average recognition rate of 98.8% for the fuzzy max-min module and 96.7% for the neural network module on significant dynamic gestures.
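
The fuzzy max-min module is named after the standard max-min composition rule: for each gesture class, take the maximum over features of min(membership, relation strength). A minimal sketch, with an illustrative relation matrix rather than the paper's trained values:

```python
import numpy as np

def fuzzy_max_min(memberships, relation):
    """memberships: (F,) feature membership degrees in [0, 1].
    relation: (F, C) feature-to-gesture relation strengths in [0, 1]."""
    return np.max(np.minimum(memberships[:, None], relation), axis=0)

memberships = np.array([0.9, 0.2, 0.7])   # e.g., finger-flexion degrees
relation = np.array([[0.8, 0.1],          # 3 features x 2 gesture classes
                     [0.3, 0.9],
                     [0.6, 0.4]])
scores = fuzzy_max_min(memberships, relation)
print("recognized gesture:", int(np.argmax(scores)))
```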

Infrared Sensor Interface for Augmented Reality (증강 현실을 위한 적외선 센서 인터페이스)

  • Choi, Han Yong;Jang, Jae Hyuck;Song, Chang Geun;Ko, Young Woong
    • Proceedings of the Korea Information Processing Society Conference / 2010.04a / pp.107-110 / 2010
  • This study proposes an infrared-based gesture interface for efficient interaction with the user in an augmented reality environment. The proposed approach provides a variety of interface operations for a home automation system through simple gestures made with an infrared marker. We implemented a prototype of the proposed system, and demonstrations of the platform with a number of users confirmed that the interface is intuitive and easy to use.
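
As a rough sketch of the marker-based idea (assumed details, not the paper's implementation): locate the infrared marker as the brightest blob in each IR frame, and turn its displacement over a short window into a simple swipe command for the home automation system.

```python
import cv2

def marker_position(ir_frame):
    """ir_frame: grayscale IR image; the marker is the brightest spot."""
    blurred = cv2.GaussianBlur(ir_frame, (9, 9), 0)   # suppress hot pixels
    _, _, _, max_loc = cv2.minMaxLoc(blurred)
    return max_loc                                     # (x, y)

def swipe_command(track, min_shift=80):
    """track: list of (x, y) marker positions over a short time window."""
    dx = track[-1][0] - track[0][0]
    if dx > min_shift:
        return "NEXT_DEVICE"      # e.g., cycle the controlled appliance
    if dx < -min_shift:
        return "PREV_DEVICE"
    return None                   # no decisive horizontal movement
```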

Wearable Multi-modal Remote Control (착용형 멀티모달 제어 리모콘)

  • Lee, Dong-Woo;SunWoo, John;Cho, Il-Yeon;Lee, Cheol-Hoon
    • Proceedings of the Korea Information Processing Society Conference / 2008.05a / pp.169-170 / 2008
  • A remote control is the usual way to control home appliances, but the number of remote controls piling up around the house is inconvenient. This paper introduces a wearable system intended to serve as a single remote control, like a URC, and proposes a method for controlling multiple appliances in the same way across different modalities, such as speech and gestures.