• Title/Summary/Keyword: 핸드 제스처 (hand gesture)

Search Results: 22

Learning Gesture Command for User-Dependent Interface (사용자 의존적 인터페이스를 위한 제스처 명령어 습득)

  • 양선옥
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1997.11a / pp.215-219 / 1997
  • For hand gestures used as commands to feel more natural to the user, the user must be able to define the gestures in whatever form they wish. This paper introduces an intelligent user interface that uses the user's hand gestures, captured through a camera, as commands. The intelligent user interface includes a gesture-command learning module that lets the user freely define the set of hand gestures used as gesture commands.
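The user-definable command idea in this abstract can be sketched as a small template store with nearest-neighbor matching (a minimal illustration only; the paper's camera-based feature extraction is not reproduced, and all names below are hypothetical):

```python
# Hypothetical sketch: users register example feature vectors under a
# command label; recognition returns the label of the nearest template.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class GestureCommandStore:
    def __init__(self):
        self.templates = []  # list of (label, feature_vector)

    def learn(self, label, features):
        """User defines a new gesture command from an example."""
        self.templates.append((label, features))

    def recognize(self, features):
        """Return the label of the closest learned template."""
        if not self.templates:
            return None
        label, _ = min(self.templates, key=lambda t: euclidean(t[1], features))
        return label
```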

Hand Gesture Sequence Recognition using Morphological Chain Code Edge Vector (형태론적 체인코드 에지벡터를 이용한 핸드 제스처 시퀀스 인식)

  • Lee Kang-Ho;Choi Jong-Ho
    • Journal of the Korea Society of Computer and Information / v.9 no.4 s.32 / pp.85-91 / 2004
  • The use of gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction, and has motivated a very active research area concerned with computer-vision-based analysis and interpretation of hand gestures. The most important issues in gesture recognition are simplifying the algorithm and reducing processing time. Mathematical morphology, based on geometrical set theory, is well suited to this processing. The key idea of the proposed algorithm is to track the trajectory of center points in primitive elements extracted by morphological shape decomposition. The trajectory of morphological center points carries information on shape orientation. Based on this characteristic, we propose a morphological gesture-sequence recognition algorithm using feature vectors computed from the trajectory of morphological center points. Experiments demonstrate the efficiency of the proposed algorithm.
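The center-point trajectory feature described above can be illustrated with a plain 8-direction chain code over a sequence of center points (an illustrative sketch, not the paper's exact morphological pipeline):

```python
import math

# Sketch: encode a trajectory of (x, y) center points as an 8-direction
# chain code (0 = east, counterclockwise in 45-degree steps, assuming
# math convention with y pointing up).

def chain_code(points):
    """Map each consecutive displacement to one of 8 directions."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)  # in [-pi, pi]
        codes.append(round(angle / (math.pi / 4)) % 8)
    return codes
```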

A Development of Intuitive Single-Hand Gesture Interface for Writing Hangul by Leap Motion (립모션 기반의 직관적 한글 입력 핸드 제스처 인터페이스 개발)

  • Kim, Seonghyeon;Kim, Daecheon;Park, Yechan;Yeom, Sanggil;Choo, Hyunseung
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.768-770 / 2016
  • NUI (Natural User Interface) is currently drawing attention as a next-generation input method. Various NUIs for Hangul input have already been studied and developed, but existing Hangul-input NUIs suffer from limited intuitiveness and accuracy and from imperfect recognition rates. In this study, a Leap Motion device is used to recognize the user's hand gestures, and recognition accuracy is improved by separating consonant-input and vowel-input gestures based on the composition principle of Hangul letters. We also study hand gestures that improve the intuitiveness of Hangul input by drawing on the directionality of the vowels. This helps users operate devices in NUI environments more accurately and quickly.
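As a rough illustration of exploiting vowel directionality (the paper's actual gesture set is not specified in this abstract, so the mapping below is an assumption):

```python
# Illustrative only: map a stroke's dominant direction to the Hangul
# vowel whose shape opens in that direction (ㅏ right, ㅓ left, ㅗ up,
# ㅜ down), assuming y points up.

def classify_direction(dx, dy):
    """Classify a stroke displacement into one of four directions."""
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"

VOWEL_BY_DIRECTION = {"right": "ㅏ", "left": "ㅓ", "up": "ㅗ", "down": "ㅜ"}

def vowel_for_stroke(dx, dy):
    return VOWEL_BY_DIRECTION[classify_direction(dx, dy)]
```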

User-Defined Hand Gestures for Small Cylindrical Displays (소형 원통형 디스플레이를 위한 사용자 정의 핸드 제스처)

  • Kim, Hyoyoung;Kim, Heesun;Lee, Dongeon;Park, Ji-hyung
    • The Journal of the Korea Contents Association / v.17 no.3 / pp.74-87 / 2017
  • This paper aims to elicit user-defined hand gestures for small cylindrical displays built with flexible displays, a product category that has not yet reached the market. To this end, we first defined the size and functions of a small cylindrical display and derived the tasks for operating those functions. We then built an experimental environment similar to a real cylindrical-display usage environment by developing both a virtual cylindrical display interface and a physical object for operating it. We showed participants the result of each task on the virtual cylindrical display so that they could define the hand gestures they considered suitable for that task. We selected a representative gesture for each task by choosing the largest group of identical gestures, and also calculated agreement scores for each task. Finally, we characterized the mental models participants applied when defining gestures, based on analysis of the gestures and of participant interviews.
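Agreement scores in gesture-elicitation studies are commonly computed as the sum of squared proportions of identical-gesture groups per task; whether this paper uses exactly that formulation is an assumption:

```python
from collections import Counter

# Common elicitation-study agreement score for one task:
#   A = sum over identical-gesture groups of (group size / total)^2
# A = 1.0 means all participants proposed the same gesture.

def agreement_score(proposals):
    """proposals: list of gesture labels, one per participant."""
    n = len(proposals)
    return sum((count / n) ** 2 for count in Counter(proposals).values())
```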

Touchless Media Art Kiosks using Gesture Recognition (제스처 인식을 이용한 비접촉식 미디어 아트 키오스크)

  • Kim, Yura;Seo, Somyi;Park, Gooman
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.11a / pp.232-235 / 2022
  • This paper describes a touchless kiosk built with deep learning, hand gestures, and MediaPipe, implemented both to help prevent infectious disease (in the context of COVID-19) and to let shoppers search for and conveniently purchase items in large retail stores. Beyond simple hand-gesture recognition, we added touchless gesture drawing so that gesture recognition also serves as a form of art. Rather than drawing with a single finger, the system recognizes the center of a closed fist and draws from it, producing more stable strokes than the conventional approach. For real-world use, the detailed functions rely on MediaPipe, whose accuracy was improved over the baseline through training. The kiosk was designed not with raw processing speed and accuracy as the primary goals, but as a piece of media art.
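The fist-center drawing idea can be sketched as averaging the hand landmarks (assuming normalized (x, y) landmark pairs such as the 21 that MediaPipe Hands returns; camera capture and MediaPipe setup are omitted):

```python
# Sketch: the mean of all hand landmarks gives a drawing point that is
# more stable than a single fingertip, since individual landmarks jitter
# more than their average.

def fist_center(landmarks):
    """landmarks: list of (x, y) pairs; returns their centroid."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def extend_stroke(stroke, landmarks):
    """Append the current fist center to the drawing stroke."""
    stroke.append(fist_center(landmarks))
    return stroke
```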

A Study on Hand Gesture Classification Deep learning method device based on RGBD Image (RGBD 이미지 기반 핸드제스처 분류 딥러닝 기법의 연구)

  • Park, Jong-Chan;Li, Yan;Shin, Byeong-Seok
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.1173-1175 / 2019
  • Research in robotics has explored recognizing different hand gestures as computer input and interpreting them as specific commands, for example in noisy or emergency situations. Prior work on hand-gesture preprocessing has variously used RGB data or skeletons, but classification accuracy suffers from real-world noise, or computing power usage becomes excessive. In this paper, we classify input hand gestures with a Keras model trained on hand-gesture data using RGBD images. Raw hand-gesture data captured by a depth camera is reconstructed into images for deep learning.
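The preprocessing step of reshaping raw depth readings into an image for the classifier might look like the following (the buffer layout and normalization are assumptions; the Keras model itself is omitted):

```python
# Sketch: turn a flat, row-major depth buffer into a 2D "image" of
# values normalized to [0, 1], the kind of input a CNN could consume.

def depth_to_image(raw, width, height):
    """raw: flat list of depth readings, row-major, len == width*height."""
    lo, hi = min(raw), max(raw)
    span = (hi - lo) or 1  # avoid division by zero on a flat scene
    norm = [(v - lo) / span for v in raw]
    return [norm[r * width:(r + 1) * width] for r in range(height)]
```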

Automatic hand gesture area extraction and recognition technique using FMCW radar based point cloud and LSTM (FMCW 레이다 기반의 포인트 클라우드와 LSTM을 이용한 자동 핸드 제스처 영역 추출 및 인식 기법)

  • Seung-Tak Ra;Seung-Ho Lee
    • Journal of IKEEE / v.27 no.4 / pp.486-493 / 2023
  • In this paper, we propose an automatic hand-gesture area extraction and recognition technique using an FMCW radar-based point cloud and an LSTM. Compared with existing methods, the proposed technique is original in two respects. First, unlike methods that use 2D images such as range-Doppler maps as input vectors, time-series point-cloud input vectors are intuitive input data that capture, in coordinate form, movement over time occurring in front of the radar. Second, because the input vector is small, the deep learning model used for recognition can also be lightweight. The technique works as follows. Using the distance, speed, and angle information measured by the FMCW radar, a point cloud containing x, y, z coordinates and Doppler velocity information is constructed. The hand-gesture area is extracted automatically by identifying the start and end points of the gesture from the Doppler points obtained through the speed information. The time-series point cloud corresponding to the extracted gesture area is then used to train and run the LSTM deep learning model. To evaluate the objective reliability of the proposed technique, we compared MAE against other deep learning models and recognition rate against existing techniques. In the experiments, the time-series point-cloud input vector + LSTM model achieved an MAE of 0.262 and a recognition rate of 97.5%; since lower MAE and higher recognition rate are better, these results demonstrate the efficiency of the proposed technique.
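The automatic gesture-area extraction step can be sketched as thresholding per-frame Doppler speeds to find the start and end of the active region (the threshold value and framing are assumptions; the LSTM classifier is omitted):

```python
# Sketch: frames whose Doppler speed magnitude exceeds a threshold are
# "active"; the first and last active frames bound the gesture area.

def extract_gesture_area(doppler_speeds, threshold=0.5):
    """Return (start, end) frame indices of the gesture, or None."""
    active = [i for i, v in enumerate(doppler_speeds) if abs(v) > threshold]
    if not active:
        return None
    return (active[0], active[-1])
```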

Hand Movement Tracking and Recognizing Hand Gestures (핸드 제스처를 인식하는 손동작 추적)

  • Park, Kwang-Chae;Bae, Ceol-Soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.8 / pp.3971-3975 / 2013
  • This paper introduces an Augmented Reality system that recognizes hand gestures and presents its evaluation results. The user can interact with artificial objects and manipulate their positions and motions by hand gestures alone. Hand-gesture recognition is based on Histograms of Oriented Gradients (HOG). Salient features of human hand appearance are detected by HOG blocks; blocks of different sizes are tested to find the most suitable configuration. To select the most informative blocks for classification, a multiclass AdaBoost-SVM algorithm is applied. The evaluated recognition rate of the algorithm is 94.0%.
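A bare-bones version of the HOG computation looks like this (central-difference gradients accumulated into unsigned-orientation bins; the paper's block layout, block sizes, and normalization are omitted):

```python
import math

# Sketch: per-pixel gradients via central differences; each pixel votes
# its gradient magnitude into one of `bins` orientation bins over [0, pi).

def hog_histogram(patch, bins=9):
    """patch: 2D list of grayscale values; returns the orientation
    histogram (border pixels are skipped)."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            angle = math.atan2(gy, gx) % math.pi  # unsigned orientation
            hist[min(int(angle / math.pi * bins), bins - 1)] += mag
    return hist
```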

A Prototype Design for a Real-time VR Game with Hand Tracking Using Affordance Elements

  • Yu-Won Jeong
    • Journal of the Korea Society of Computer and Information / v.29 no.5 / pp.47-53 / 2024
  • In this paper, we propose applying interactive technology in virtual environments to enhance interaction and immersion by inducing more natural movements in the gesture-recognition process through the concept of affordance. A technique is proposed that recognizes the gestures most similar to actual hand movements by applying a line-segment recognition algorithm incorporating sampling and normalization steps. This line-segment recognition was applied to drawing magic circles in the <VR Spell> game implemented in this paper. The experiment verified the recognition rates for four line-segment recognition actions. This paper proposes a VR game that pursues greater immersion and fun for the user through real-time hand-tracking technology using affordance elements, applied to immersive content in virtual environments such as VR games.
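The sampling and normalization preprocessing mentioned above can be sketched in the style of $1-recognizer preprocessing (whether the paper follows this exact procedure is an assumption):

```python
import math

# Sketch: resample a drawn stroke to n points spaced evenly along its
# path length, then scale it into a unit bounding box so recognition is
# independent of drawing speed and size.

def resample(points, n):
    """Resample a polyline to n points spaced evenly along its length."""
    cum = [0.0]
    for a, b in zip(points, points[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1]
    out, seg = [], 0
    for k in range(n):
        target = total * k / (n - 1)
        while seg < len(points) - 2 and cum[seg + 1] < target:
            seg += 1
        span = (cum[seg + 1] - cum[seg]) or 1.0
        t = (target - cum[seg]) / span
        x = points[seg][0] + t * (points[seg + 1][0] - points[seg][0])
        y = points[seg][1] + t * (points[seg + 1][1] - points[seg][1])
        out.append((x, y))
    return out

def normalize(points):
    """Scale the stroke into the unit bounding box at the origin."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in points]
```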

Design and Evaluation of a Hand-held Device for Recognizing Mid-air Hand Gestures (공중 손동작 인식을 위한 핸드 헬드형 기기의 설계 및 평가)

  • Seo, Kyeongeun;Cho, Hyeonjoong
    • KIPS Transactions on Software and Data Engineering / v.4 no.2 / pp.91-96 / 2015
  • We propose AirPincher, a handheld pointing device for recognizing delicate mid-air hand gestures to control a remote display. AirPincher is designed to overcome the disadvantages of the two existing families of hand-gesture-aware techniques: glove-based techniques, which require wearing gloves every time, and vision-based techniques, whose performance depends on the distance between the user and the remote display. AirPincher lets a user hold the device in one hand and generate several delicate finger gestures, which are captured by sensors embedded in the device at close range. These features allow AirPincher to avoid the aforementioned disadvantages. We experimentally determine an efficient size for the virtual input space and evaluate two types of pointing interfaces with AirPincher for a remote display. Our experiments suggest appropriate configurations for using the proposed device.