• Title/Summary/Keyword: gesture input (제스처입력)


Automatic hand gesture area extraction and recognition technique using FMCW radar based point cloud and LSTM (FMCW 레이다 기반의 포인트 클라우드와 LSTM을 이용한 자동 핸드 제스처 영역 추출 및 인식 기법)

  • Seung-Tak Ra; Seung-Ho Lee
    • Journal of IKEEE / v.27 no.4 / pp.486-493 / 2023
  • In this paper, we propose an automatic hand gesture area extraction and recognition technique using an FMCW radar-based point cloud and an LSTM. The proposed technique differs from existing methods in two ways. First, unlike methods that use 2D images such as range-Doppler maps as input, the time-series point-cloud input represents movement in front of the radar over time directly as coordinates, making it an intuitive input format. Second, because the input vector is small, the deep learning model used for recognition can also be kept lightweight. The technique is implemented as follows. From the range, velocity, and angle information measured by the FMCW radar, a point cloud containing x, y, z coordinates and Doppler velocity is constructed. The hand gesture area is extracted automatically by identifying the start and end points of the gesture from the Doppler points obtained from the velocity information. The time-series point cloud corresponding to the extracted gesture interval is then used for training and recognition with the LSTM deep learning model used in this paper. To evaluate the objective reliability of the proposed technique, its MAE was compared against other deep learning models and its recognition rate against existing techniques. In the experiments, the time-series point cloud + LSTM model achieved an MAE of 0.262 and a recognition rate of 97.5%; since lower MAE and higher recognition rate indicate better performance, these results demonstrate the effectiveness of the proposed technique.
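
As an illustration of the pipeline described in this entry, here is a minimal sketch (not the authors' implementation) of Doppler-based gesture segmentation followed by an LSTM classifier over time-series point-cloud features. The frame feature layout, threshold value, and class count are assumptions chosen for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

def extract_gesture_interval(doppler_speed, threshold=0.1, min_frames=5):
    """Find start/end frames where per-frame mean |Doppler velocity| exceeds a threshold.
    `doppler_speed` is a (T,) array; the threshold is an assumed value."""
    active = np.abs(doppler_speed) > threshold
    idx = np.flatnonzero(active)
    if idx.size < min_frames:
        return None                      # no gesture detected
    return idx[0], idx[-1]               # start and end frame indices

class GestureLSTM(nn.Module):
    """LSTM over per-frame point-cloud features (e.g. x, y, z, Doppler velocity)."""
    def __init__(self, n_features=4, hidden=64, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, T, n_features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])        # logits per gesture class

# Example with assumed shapes: one sequence of 50 frames, 4 features per frame.
seq = torch.randn(1, 50, 4)
logits = GestureLSTM()(seq)
print(logits.argmax(dim=1))
```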

Gesture Recognition and Motion Evaluation Using Appearance Information of Pose in Parametric Gesture Space (파라메트릭 제스처 공간에서 포즈의 외관 정보를 이용한 제스처 인식과 동작 평가)

  • Lee, Chil-Woo; Lee, Yong-Jae
    • Journal of Korea Multimedia Society / v.7 no.8 / pp.1035-1045 / 2004
  • In this paper, we describe a method that recognizes gestures and evaluates their degree from sequential gesture images using a gesture feature space. Popular previous methods based on HMMs and neural networks can classify gestures into categories but have difficulty recognizing the degree of a gesture. In contrast, the proposed method recognizes not only the posture but also degree information such as speed and magnitude by calculating distances among the position vectors obtained by projecting input and model images into a parametric eigenspace. The method is a simple and robust recognition algorithm that can be applied in various applications such as intelligent interface systems and surveillance systems.
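
A minimal sketch of the eigenspace idea described above: model and input images are projected onto a PCA subspace and matched by Euclidean distance. The image size, subspace dimension, and random data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed data: 100 flattened model pose images and one input image (32x32 pixels).
model_images = rng.random((100, 32 * 32))
input_image = rng.random(32 * 32)

# Build the parametric eigenspace from the model images via PCA (SVD).
mean = model_images.mean(axis=0)
_, _, vt = np.linalg.svd(model_images - mean, full_matrices=False)
basis = vt[:8]                                  # keep the top 8 eigenvectors

# Project model and input images into the eigenspace.
model_points = (model_images - mean) @ basis.T  # (100, 8)
input_point = (input_image - mean) @ basis.T    # (8,)

# Recognize by distance in the eigenspace; the position among neighboring model
# points also indicates how far along (the degree of) the gesture the pose lies.
dists = np.linalg.norm(model_points - input_point, axis=1)
print("closest model pose:", dists.argmin(), "distance:", dists.min())
```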

Recognition-Based Gesture Spotting for Video Game Interface (비디오 게임 인터페이스를 위한 인식 기반 제스처 분할)

  • Han, Eun-Jung; Kang, Hyun; Jung, Kee-Chul
    • Journal of Korea Multimedia Society / v.8 no.9 / pp.1177-1186 / 2005
  • In vision-based interfaces for video games, gestures are used as game commands instead of keyboard or mouse presses. Such interfaces must tolerate unintentional movements and continuous gestures to feel natural to the user. To address this, this paper proposes a novel gesture spotting method that combines spotting with recognition: it recognizes meaningful movements while concurrently separating unintentional movements from a given image sequence. We applied the method to upper-body gesture recognition for interfacing between a video game (Quake II) and its user. Experimental results show an average spotting rate of 93.36% on continuous gestures, confirming its potential for a gesture-based interface for computer games.
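
One common way to realize spotting combined with recognition, sketched below under assumed per-frame class scores, is to accept a gesture label only when the recognizer's score beats a non-gesture threshold; frames below the threshold are treated as unintentional movement. This illustrates the general idea, not the paper's exact model.

```python
import numpy as np

def spot_gestures(scores, threshold=0.7):
    """Label each frame with a gesture class or -1 (unintentional movement).
    `scores` is a (T, n_classes) array of per-frame recognition scores in [0, 1]."""
    labels = np.full(scores.shape[0], -1)
    best = scores.argmax(axis=1)
    confident = scores.max(axis=1) >= threshold   # below threshold -> non-gesture
    labels[confident] = best[confident]
    return labels

# Assumed scores for 8 frames and 3 gesture classes.
scores = np.array([[0.1, 0.2, 0.1],
                   [0.8, 0.1, 0.1],
                   [0.9, 0.05, 0.05],
                   [0.3, 0.3, 0.3],
                   [0.1, 0.85, 0.05],
                   [0.1, 0.9, 0.0],
                   [0.2, 0.2, 0.2],
                   [0.1, 0.1, 0.75]])
print(spot_gestures(scores))   # -1 marks frames spotted as unintentional movement
```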

Emotional System Applied to Android Robot for Human-friendly Interaction (인간 친화적 상호작용을 위한 안드로이드 로봇의 감성 시스템)

  • Lee, Tae-Geun; Lee, Dong-Uk; So, Byeong-Rok; Lee, Ho-Gil
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.95-98 / 2007
  • This paper presents the emotional system applied to the android robot (EveR series) platform developed at the Korea Institute of Industrial Technology. The EveR platform can perform facial expressions, gestures, and speech synthesis, and the emotional system is applied to it to support smooth, human-friendly interaction. The emotional system consists of a Motivation Module that gives the robot motivation, an Emotion Module that holds various emotions, a Personality Module that influences emotions, gestures, and voice, and a Memory Module that determines weights for incoming stimuli and situations. The system receives voice, text, vision, touch, and context information as input, and outputs the selected emotion with its weight, along with behaviors and gestures, to induce naturalness in conversation with humans.
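
The module structure described in this abstract could be organized roughly as below. The module names follow the abstract, while the data fields, weighting rule, and update logic are purely illustrative assumptions rather than the EveR system's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionalSystem:
    """Illustrative skeleton of the four modules named in the abstract."""
    motivation: dict = field(default_factory=lambda: {"social_contact": 0.5})
    emotions: dict = field(default_factory=lambda: {"joy": 0.0, "sadness": 0.0})
    personality: dict = field(default_factory=lambda: {"extraversion": 0.7})
    memory: list = field(default_factory=list)

    def process(self, stimulus: dict) -> dict:
        # Memory module: store the stimulus and assign it a weight (assumed rule).
        weight = 1.0 + 0.1 * sum(1 for s in self.memory if s["kind"] == stimulus["kind"])
        self.memory.append(stimulus)
        # Emotion module: update emotion intensities, scaled by personality (assumed rule).
        delta = stimulus.get("valence", 0.0) * weight * self.personality["extraversion"]
        key = "joy" if delta >= 0 else "sadness"
        self.emotions[key] = min(1.0, self.emotions[key] + abs(delta))
        # Output: selected emotion plus weight, driving behavior/gesture/voice downstream.
        emotion = max(self.emotions, key=self.emotions.get)
        return {"emotion": emotion, "weight": self.emotions[emotion],
                "gesture": f"{emotion}_gesture", "behavior": f"express_{emotion}"}

robot = EmotionalSystem()
print(robot.process({"kind": "speech", "valence": 0.4}))
```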

Implement of the User Gesture-Command Recognition System for Windows Application (윈도우 어플리케이션을 위한 사용자 제스쳐-명령 (User Gesture-command) 인식 시스템의 구현)

  • Jang, Sung Won; Sim, Woo Sub; Park, Byeong Ho; Choi, Yong Seok; Seong, Hyeong Kyeong
    • Proceedings of the Korea Contents Association Conference / 2012.05a / pp.59-60 / 2012
  • In this paper, we implement a hand-gesture recognition system for Windows applications. The real-time gesture recognition system captures the user's hand movements through a webcam and maps them to gesture commands. It was developed with Intel's OpenCV library on top of MFC Visual C++. To verify the validity of the recognition system, a shooting game driven by the user's gesture input was developed.
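
The paper's system was built with MFC Visual C++ and OpenCV; the sketch below shows the same idea in Python with OpenCV (assuming OpenCV 4.x), using a rough skin-color mask and the hand contour's position to trigger commands. The HSV color range and the gesture-to-command mapping are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                       # webcam input
lower, upper = np.array([0, 48, 80]), np.array([20, 255, 255])  # assumed HSV skin range

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)       # rough skin-color segmentation
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(hand)
        # Illustrative gesture-to-command mapping: left/right half of the frame.
        command = "FIRE" if x + w / 2 < frame.shape[1] / 2 else "RELOAD"
        cv2.putText(frame, command, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("gesture-command", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```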

Design of User Gesture based Contactless Device Control System using OpenCV (OpenCV를 이용한 사용자 제스처 기반 비접촉 디바이스 제어 시스템 설계)

  • Lee, Se-Hoon; Hong, Seung-Min; Im, Hong-gab
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.485-486 / 2022
  • In this paper, we design a system that uses OpenCV to enable contactless control in place of contact-based control through physical buttons and the like. The system accepts input in both digital and analog forms, and by letting users control devices in a way similar to the input methods they already use in the real world, it reduces resistance to the technology. The system is built with an ESP32-CAM board and a video streaming server implemented with FastAPI, so that it can be accessed and operated remotely.
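
A minimal sketch of a FastAPI video-streaming endpoint of the kind this abstract describes, reading frames with OpenCV and serving them as an MJPEG stream. The camera source, endpoint name, and filename are assumptions, and the ESP32-CAM and gesture-control sides are omitted.

```python
import cv2
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()
CAMERA_SOURCE = 0  # assumption: local webcam; an ESP32-CAM would expose a stream URL

def mjpeg_frames():
    cap = cv2.VideoCapture(CAMERA_SOURCE)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            # multipart/x-mixed-replace: each part carries one JPEG frame
            yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
                   + jpg.tobytes() + b"\r\n")
    finally:
        cap.release()

@app.get("/stream")
def stream():
    return StreamingResponse(mjpeg_frames(),
                             media_type="multipart/x-mixed-replace; boundary=frame")

# Run with: uvicorn stream_server:app --host 0.0.0.0 --port 8000  (assumed filename)
```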

TheReviser: A Real-time Gesture Interface for a Document Polishing Application on the Projection Display (리바이저: 프로젝션 화면에서 문서교정을 위한 실시간 제스처 인터페이스)

  • Moon, Chae-Hyun; Kang, Hyun; Jung, Kee-Chul; Kim, Hang-Joon
    • Proceedings of the Korea Information Processing Society Conference / 2003.11a / pp.601-604 / 2003
  • This paper proposes TheReviser, an interface for interaction between a user and a computer on a projection display. TheReviser recognizes proofreading marks drawn by the user's hand gestures on the projection screen and corrects the electronic document accordingly. Recognizing such gestures on a projection screen involves two problems: detecting the foreground object on a screen with varying illumination and a complex background, and inferring the user's intention from continuous movements. To detect the foreground object, the system uses color information relating the image sent to the projector to the image captured by the camera; this color information makes it possible to distinguish the foreground object despite varying illumination and a complex background. To recognize continuous gestures, a new gesture spotting method that combines spotting and recognition is proposed. Experimental results show that the recognition rate of user gestures in sequences containing meaningless movements averaged 93.22%, a satisfactory result.
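
The paper detects the foreground by relating colors in the image sent to the projector to colors in the camera capture. The sketch below is a simplified stand-in for that step: it assumes the captured image is already warped into the projector's coordinates and simply thresholds the per-pixel difference, with an arbitrarily chosen threshold.

```python
import cv2
import numpy as np

def detect_foreground(projected_bgr, captured_bgr, threshold=40):
    """Return a binary mask of pixels where the captured scene departs from the
    image being projected (i.e. the user's hand/arm in front of the screen)."""
    diff = cv2.absdiff(captured_bgr, projected_bgr)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask

# Illustrative use with two same-sized BGR images.
projected = np.zeros((480, 640, 3), np.uint8)
captured = projected.copy()
cv2.circle(captured, (320, 240), 60, (90, 120, 200), -1)  # fake hand blob
print(detect_foreground(projected, captured).sum() // 255, "foreground pixels")
```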

A Study on Gesture Recognition using Edge Orientation Histogram and HMM (에지 방향성 히스토그램과 HMM을 이용한 제스처 인식에 관한 연구)

  • Lee, Kee-Jun
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.12 / pp.2647-2654 / 2011
  • This paper describes an algorithm that recognizes gestures by encoding the feature information obtained from edge orientation histograms and principal component analysis as low-dimensional gesture symbols. Because the proposed method requires far less computation than existing geometric-feature-based or appearance-based methods while maintaining a high recognition rate from minimal information, it is well suited to real-time systems. In addition, to reduce recognition errors, the model feature values projected into the gesture space are quantized into discrete state symbols with a clustering algorithm and used as input symbols for hidden Markov models, so that an input gesture is recognized as the gesture model with the highest probability.
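
A minimal sketch of the edge orientation histogram feature mentioned above: Sobel gradients give per-pixel orientations, which are binned into a histogram weighted by gradient magnitude. The bin count and synthetic image are assumptions; in the paper, such vectors are further projected with PCA and clustered into the discrete symbols fed to the HMMs.

```python
import cv2
import numpy as np

def edge_orientation_histogram(gray, n_bins=16):
    """Histogram of edge orientations weighted by gradient magnitude."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)                   # range [-pi, pi]
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(-np.pi, np.pi), weights=magnitude)
    return hist / (hist.sum() + 1e-8)                  # normalized feature vector

# Illustrative use on a synthetic image; real input would be a hand-region crop.
img = np.zeros((64, 64), np.uint8)
cv2.line(img, (5, 60), (60, 5), 255, 2)
feature = edge_orientation_histogram(img)
print(feature.round(2))
```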

The Chinese Characters Learning Contents Based on Gesture Recognition Using HMM Algorithm (HMM을 이용한 제스처 인식 기반 한자 학습 콘텐츠)

  • Song, Dae-Hyeon; Kim, Dong-Min; Lee, Chil-Woo
    • Journal of Korea Multimedia Society / v.15 no.8 / pp.1067-1074 / 2012
  • In this paper, we propose Chinese-character learning content based on gesture recognition using the HMM (hidden Markov model) algorithm. The system obtains 3-dimensional input from a TOF camera, and gesture recognition consists of estimating the user's posture from two infrared images and recognizing gestures from the resulting sequence of poses. For human-computer communication, the system lets the user operate it easily through actions alone, without any additional equipment. Because it raises immersion and interest by using two large displays and various multimedia elements, it can maximize information transfer. The proposed edutainment content provides the educational effect of letting users master Chinese characters naturally and with interest, and a synergy effect can be expected through the content experience because it is based on gesture recognition.
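
To illustrate how a discrete-symbol HMM picks the most likely gesture from a sequence of recognized poses, here is a small forward-algorithm sketch. The two toy gesture models, their parameters, and the observation sequence are assumptions, not the paper's trained models.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Scaled forward algorithm: log P(obs) under a discrete HMM
    (start: (S,), trans: (S, S), emit: (S, V))."""
    alpha = start * emit[:, obs[0]]
    scale = alpha.sum()
    log_p = np.log(scale)
    alpha = alpha / scale
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        scale = alpha.sum()
        log_p += np.log(scale)
        alpha = alpha / scale            # rescale to avoid underflow
    return log_p

# Two toy 2-state gesture models over 3 pose symbols (assumed parameters).
models = {
    "swipe": (np.array([0.9, 0.1]),
              np.array([[0.7, 0.3], [0.2, 0.8]]),
              np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])),
    "circle": (np.array([0.5, 0.5]),
               np.array([[0.5, 0.5], [0.5, 0.5]]),
               np.array([[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]])),
}
observed_symbols = [0, 0, 2, 2, 2]       # quantized pose symbols from the video
best = max(models, key=lambda m: forward_log_likelihood(observed_symbols, *models[m]))
print("recognized gesture:", best)
```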

Emotion-based Gesture Stylization For Animated SMS (모바일 SMS용 캐릭터 애니메이션을 위한 감정 기반 제스처 스타일화)

  • Byun, Hae-Won; Lee, Jung-Suk
    • Journal of Korea Multimedia Society / v.13 no.5 / pp.802-816 / 2010
  • Creating gestures from new text input is an important problem in computer games and virtual reality. Recently, there has been increasing interest in gesture stylization that imitates the gestures of public figures such as announcers, but no attempt has been made so far to stylize gestures using emotions such as happiness and sadness, and previous research has not focused on real-time algorithms. In this paper, we present a system that automatically generates gesture animation from SMS text and stylizes the gestures according to emotion. A key feature of the system is a real-time algorithm for combining gestures with emotion. Because the platform is a mobile phone, much of the work is distributed between the server and the client, allowing the system to guarantee real-time performance of 15 or more frames per second. First, we extract emotion-expressing words and their corresponding gestures from Disney video and model the gestures statistically; we then apply the theory of Laban Movement Analysis to combine gesture and emotion. To evaluate the system, we analyze user survey responses.
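
As a rough illustration of combining a gesture with an emotion in real time, the sketch below scales gesture keyframes with Laban-inspired effort parameters per emotion. The parameter table, keyframe format, and scaling rule are illustrative assumptions, not the paper's statistical model.

```python
import numpy as np

# Assumed Laban-inspired effort parameters per emotion: (amplitude scale, speed scale).
EFFORT = {"happiness": (1.3, 1.4), "sadness": (0.7, 0.6), "neutral": (1.0, 1.0)}

def stylize_gesture(keyframes, times, emotion="neutral"):
    """Scale joint-angle keyframes and their timing according to the emotion.
    keyframes: (K, n_joints) array, times: (K,) array of keyframe times in seconds."""
    amp, speed = EFFORT.get(emotion, EFFORT["neutral"])
    rest_pose = keyframes[0]
    styled = rest_pose + amp * (keyframes - rest_pose)    # exaggerate or dampen motion
    return styled, times / speed                          # faster or slower playback

# Illustrative "wave" gesture with 3 keyframes over 2 joints.
keyframes = np.array([[0.0, 0.0], [0.8, 0.4], [0.0, 0.0]])
times = np.array([0.0, 0.5, 1.0])
styled, styled_times = stylize_gesture(keyframes, times, "happiness")
print(styled, styled_times)
```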