• Title/Summary/Keyword: Human Gesture Recognition


Recognition of Hand gesture to Human-Computer Interaction (손동작 인식을 통한 Human-Computer Interaction 구현)

  • 이래경;김성신
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.1
    • /
    • pp.28-32
    • /
    • 2001
  • Human hand gestures have long served as a means of communication, playing the role of a language. As modern society becomes an information society, faster and more accurate communication and transfer of information are required, and much recent research has sought to use the free gestures of the two human hands to complement the shortcomings of existing devices for human-computer interaction and human expression. This paper proposes a new recognition algorithm that uses hand features extracted from 2D input images without relying on dynamic hand motion, and implements hand-gesture recognition with a Radial Basis Function Network and additional feature points for a higher recognition rate and real-time processing. Based on the meaning of the recognized gestures, experiments applying them to robot control were also carried out to evaluate the recognition rate and the semantic accuracy of the gesture expressions.

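
The RBF-network classifier the abstract above describes can be illustrated with a minimal sketch. This is not the paper's implementation: the toy "gesture feature" clusters, the centers, and the width σ are assumptions chosen only to make the example self-contained.

```python
import numpy as np

def rbf_features(X, centers, sigma):
    # Gaussian activations: one column per RBF center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y, centers, sigma, n_classes):
    # Solve the output weights by least squares on one-hot targets
    H = rbf_features(X, centers, sigma)
    T = np.eye(n_classes)[y]
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W

def predict_rbf(X, centers, sigma, W):
    return rbf_features(X, centers, sigma).dot(W).argmax(axis=1)

# Toy example: two synthetic "gesture feature" clusters
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.1, (20, 2)),
               rng.normal([1, 1], 0.1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
centers = np.array([[0.0, 0.0], [1.0, 1.0]])  # assumed known cluster centers
W = train_rbf(X, y, centers, sigma=0.5, n_classes=2)
print((predict_rbf(X, centers, 0.5, W) == y).mean())  # fraction classified correctly
```

In the paper the RBF centers would come from the extracted hand feature points rather than being fixed by hand as here.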

Design and Implementation of Motion-based Interaction in AR Game (증강현실 게임에서의 동작 기반 상호작용 설계 및 구현)

  • Park, Jong-Seung;Jeon, Young-Jun
    • Journal of Korea Game Society
    • /
    • v.9 no.5
    • /
    • pp.105-115
    • /
    • 2009
  • This article proposes a design and implementation methodology for a gesture-based interface for augmented reality games. Gesture-based augmented reality games are a promising area among immersive future games that use human body motions. However, due to the instability of current motion recognition technologies, most previous development processes have introduced many ad hoc methods to handle the shortcomings, and the game architectures have consequently become highly irregular and inefficient. This article proposes an efficient development methodology for gesture-based augmented reality games by prototyping a table tennis game with a gesture interface. We also verify the applicability of the prototyping mechanism by implementing and demonstrating the augmented reality table tennis game. In the experiments, the implemented prototype stably tracked real rackets, allowing fast movements and interactions without delay.


An Extraction Method of Meaningful Hand Gesture for a Robot Control (로봇 제어를 위한 의미 있는 손동작 추출 방법)

  • Kim, Aram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.27 no.2
    • /
    • pp.126-131
    • /
    • 2017
  • In this paper, we propose a method to extract the meaningful motion from among the various hand gestures used to give commands to robots. When giving a command to a robot, a person's hand gestures can be divided into a preparation motion, a main motion, and a finishing motion. The main motion is the one that carries the command to the robot; the other motions are meaningless auxiliary movements needed to perform the main motion. Therefore, only the main motion must be extracted from the continuous hand gestures. In addition, people may move their hands unconsciously, and the robot must also judge these movements to be meaningless. In this study, we extract human skeleton data from a depth image obtained with a Kinect v2 sensor and obtain hand-position data from the skeleton. Using a Kalman filter, we track the hand position and distinguish meaningful from meaningless motion, and recognize the hand gesture with a Hidden Markov Model.
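
The Kalman-filter tracking stage described above can be sketched with a constant-velocity model. This is a minimal Python illustration, not the paper's code: the noise covariances and the simulated hand trajectory are assumptions, and the HMM classification stage is omitted.

```python
import numpy as np

# Constant-velocity Kalman filter for a 2-D hand position.
# State: [x, y, vx, vy]; measurement: [x, y].
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)   # process noise covariance (assumed)
R = 0.10 * np.eye(2)   # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    # Predict with the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured hand position z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a hand moving diagonally, observed with noise
rng = np.random.default_rng(1)
x, P = np.zeros(4), np.eye(4)
for t in range(1, 30):
    z = np.array([t * 0.1, t * 0.1]) + rng.normal(0, 0.05, 2)
    x, P = kalman_step(x, P, z)
print(x[:2])  # filtered position estimate
```

The smoothed position and velocity estimates from such a filter are what would feed the meaningful/meaningless decision and the HMM recognizer in the paper.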

Design of Image Extraction Hardware for Hand Gesture Vision Recognition

  • Lee, Chang-Yong;Kwon, So-Young;Kim, Young-Hyung;Lee, Yong-Hwan
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.1
    • /
    • pp.71-83
    • /
    • 2020
  • In this paper, we propose a system that can detect the shape of a hand at high speed using an FPGA. Because real-time processing is important, the hand-shape detection system is designed in Verilog HDL, a hardware description language that processes in parallel, rather than in sequentially executed C++. Among the several approaches to hand-gesture recognition, an image-processing method is used. Since the human eye is sensitive to brightness, the YCbCr color model was selected from the various color representations to obtain results that are less affected by lighting. For the Cb and Cr components, only the values corresponding to skin color are filtered out of the input image using constraint conditions. To speed up object recognition, a median filter removes noise in the input image; this filter is designed to compare values and extract the median simultaneously, reducing the amount of computation. For parallel processing, the design locates the center line of the hand while scanning and sorting the stored data. The line with the highest count is selected as the center line of the hand, the size of the hand is determined from the count, and the hand and arm parts are separated. The designed hardware circuit satisfied the target operating frequency and gate count.
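
The skin-color filtering and median-filter stages described above are implemented by the paper in Verilog HDL; the logic can be sketched in Python as follows. The CbCr bounds used here are commonly cited skin-color ranges, not the paper's own constraint conditions, and the 3×3 median on a binary mask reduces to a majority vote.

```python
import numpy as np

def rgb_to_ycbcr(img):
    # ITU-R BT.601 conversion (img: H x W x 3, uint8)
    r, g, b = (img[..., i].astype(float) for i in range(3))
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    # Threshold only Cb/Cr so the result is less affected by lighting;
    # the ranges are commonly cited bounds, not the paper's conditions
    _, cb, cr = rgb_to_ycbcr(img)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def median3x3(mask):
    # 3x3 median on a binary mask == majority vote of the 9 neighbors
    padded = np.pad(mask.astype(int), 1)
    stacked = np.stack([padded[i:i + mask.shape[0], j:j + mask.shape[1]]
                        for i in range(3) for j in range(3)])
    return stacked.sum(axis=0) >= 5

# Toy image: skin-like patch on a blue background, plus one noise pixel
img = np.zeros((8, 8, 3), np.uint8)
img[..., 2] = 255                   # blue background
img[2:6, 2:6] = (180, 120, 90)      # skin-like 4x4 patch
img[0, 0] = (180, 120, 90)          # isolated noise pixel
mask = median3x3(skin_mask(img))
print(mask.sum())                   # noise pixel removed, patch kept
```

In hardware, the same comparisons run in parallel per pixel per clock, which is why the paper favors Verilog over sequential C++.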

MPEG-U based Advanced User Interaction Interface System Using Hand Posture Recognition (손 자세 인식을 이용한 MPEG-U 기반 향상된 사용자 상호작용 인터페이스 시스템)

  • Han, Gukhee;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.19 no.1
    • /
    • pp.83-95
    • /
    • 2014
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the HCI (human-computer interaction) field. In this paper, we introduce a hand posture recognition method that uses a depth camera. Moreover, the method is incorporated into an MPEG-U based advanced user interaction (AUI) interface system, which can provide a natural interface with a variety of devices. The proposed method first detects the positions and lengths of all open fingers, and then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when the user makes a gesture representing a pattern of the AUI data format specified in MPEG-U Part 2. The AUI interface system represents the user's hand posture in a compliant MPEG-U schema structure. Experimental results show the performance of the hand posture recognition, and the AUI interface system is verified to be compatible with the MPEG-U standard.
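
The recognition logic above, posture from the number of hands and folded fingers, can be sketched as a lookup. Everything here is illustrative: the posture names, the folded-length threshold, and the mapping are hypothetical stand-ins, not MPEG-U's actual AUI patterns.

```python
# Hypothetical mapping from (number of hands, total folded fingers)
# to a posture label; the labels are illustrative, not MPEG-U's own
POSTURES = {
    (1, 0): "OpenPalm",
    (1, 5): "Fist",
    (1, 3): "TwoFingerPoint",
    (2, 0): "BothOpen",
}

def count_folded(finger_lengths, threshold=0.5):
    # A finger counts as folded when its detected length drops below
    # an assumed fraction of its fully extended length
    return sum(1 for length in finger_lengths if length < threshold)

def recognize(hands):
    # hands: list of per-hand normalized finger-length lists
    folded_total = sum(count_folded(h) for h in hands)
    return POSTURES.get((len(hands), folded_total), "Unknown")

print(recognize([[0.9, 0.95, 0.9, 0.85, 0.9]]))  # all fingers open
print(recognize([[0.1, 0.2, 0.1, 0.1, 0.2]]))    # all fingers folded
```

In the actual system the finger positions and lengths come from the depth camera, and the recognized posture is serialized into the MPEG-U schema structure.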

Study on Hand Gestures Recognition Algorithm of Millimeter Wave (밀리미터파의 손동작 인식 알고리즘에 관한 연구)

  • Nam, Myung Woo;Hong, Soon Kwan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.7
    • /
    • pp.685-691
    • /
    • 2020
  • In this study, an algorithm that recognizes the digits 0 to 9 was developed using data obtained by tracking hand movements with the echo signal of a 77 GHz millimeter-wave radar sensor. The echo signals obtained by detecting the motion of a hand gesture form a cluster of irregular dots due to differences in the scattering cross-section. A valid center point was obtained from them by applying the K-Means algorithm to the 3D coordinate values, and the obtained center points were connected to produce a digit image. The recognition rate was compared by feeding the obtained image, and an image made similar to human handwriting by a smoothing technique, to a CNN (Convolutional Neural Network) model trained on MNIST (Modified National Institute of Standards and Technology database). The experiment was conducted in two ways. First, in recognition experiments using images with and without smoothing, average recognition rates of 77.0% and 81.0% were obtained, respectively. In the experiment with the CNN model trained on augmented data, average recognition rates of 97.5% and 99.0% were obtained using the images with and without the smoothing technique, respectively. This study can be applied to various non-contact recognition technologies using radar sensors.
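
The K-Means step above, reducing a cloud of irregular radar echoes to valid center points, can be sketched as follows. The point cloud is synthetic and the cluster count is assumed; the paper's CNN stage is not shown.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    # Plain K-Means: assign each 3-D point to its nearest center,
    # then move each center to the mean of its assigned points
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Toy radar frame: scattered echoes around two true hand positions
rng = np.random.default_rng(42)
cloud = np.vstack([rng.normal([0.0, 0.0, 0.5], 0.05, (30, 3)),
                   rng.normal([0.3, 0.3, 0.5], 0.05, (30, 3))])
centers, _ = kmeans(cloud, k=2)
print(np.round(centers, 2))  # recovered center points
```

Connecting such per-frame centers over time is what produces the digit trajectory image fed to the CNN in the paper.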

Detection of Faces Located at a Long Range with Low-resolution Input Images for Mobile Robots (모바일 로봇을 위한 저해상도 영상에서의 원거리 얼굴 검출)

  • Kim, Do-Hyung;Yun, Woo-Han;Cho, Young-Jo;Lee, Jae-Jeon
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.257-264
    • /
    • 2009
  • This paper proposes a novel face detection method that finds tiny faces located at a long range, even in low-resolution input images captured by a mobile robot. The proposed approach can locate extremely small face regions of 12×12 pixels. We solve the tiny-face detection problem with a system that consists of multiple detectors: a mean-shift color tracker, short- and long-range face detectors, and an omega-shape detector. The proposed method adopts a long-range face detector trained well enough to detect tiny faces at a long range, and limits its operation to a search region that is automatically determined by the mean-shift color tracker and the omega-shape detector. By restricting the face search region as much as possible, the proposed method can accurately detect tiny faces at a long distance even in a low-resolution image, and sharply decreases false positives. According to experimental results on realistic databases, the performance of the proposed approach is at a practical level for various robot applications such as face recognition of non-cooperative users, human-following, and gesture recognition for long-range interaction.


Recognition of Hand gesture to Human-Computer Interaction (손동작 인식을 통한 Human-Computer Interaction 구현)

  • 이래경;김성신
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2000.11a
    • /
    • pp.344-348
    • /
    • 2000
  • Human hand gestures have long served as a means of communication, playing the role of a language. As modern society becomes an information society, faster and more accurate communication and transfer of information are required, and much recent research has sought to use the free gestures of the two human hands to complement the shortcomings of existing devices for human-computer interaction and human expression. Unlike existing approaches to recognizing dynamic hand gestures from 2D input images, which are complex and time-consuming, this paper proposes a new recognition algorithm that uses hand features without any special additional device, and implements hand-gesture recognition with a Radial Basis Function Network and additional feature points for a higher recognition rate and real-time processing. Based on the meaning of the recognized gestures, experiments applying them to robot control were also carried out to evaluate the recognition rate and the semantic accuracy of the gesture expressions.


Real-Time Gesture Recognition Using Boundary of Human Hands from Sequence Images (손의 외곽선 추출에 의한 실시간 제스처 인식)

  • 이인호;박찬종
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 1999.11a
    • /
    • pp.438-442
    • /
    • 1999
  • Gesture recognition is not only intuitive but also easy to code with a few basic components, so it is widely used in human-computer interaction (HCI). This paper aims to develop a gesture recognition system that minimizes individual differences such as hand shape and size, and input-environment effects such as lighting changes or magnification, that can recognize gestures without any special initialization or model-preparation step, and that runs in real time with little computation. Without sensors or markers attached to the hand, the hand region is extracted from color images captured by a CCD camera using color and motion information, and a boundary-to-center distance function is generated from the extracted hand boundary. Using the principle that the boundary-to-center distance reaches a local maximum at a fingertip, the frequency content of the generated function is analyzed to find its local maxima, locating each fingertip; the hand posture is then recognized and the gesture identified. The proposed gesture recognition method was implemented on a PC, demonstrating its usefulness and effectiveness.

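
The boundary-to-center distance idea above can be sketched on a synthetic contour. The star-shaped curve here is an assumed stand-in for a real segmented hand boundary, with its five lobes playing the role of fingertips.

```python
import numpy as np

def local_maxima(signal):
    # Indices where the (circular) signal exceeds both neighbors
    prev = np.roll(signal, 1)
    nxt = np.roll(signal, -1)
    return np.where((signal > prev) & (signal > nxt))[0]

# Synthetic "hand" boundary: a star-shaped contour with five lobes
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r = 1.0 + 0.5 * np.cos(5 * theta)      # five peaks = five fingertips
boundary = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

center = boundary.mean(axis=0)          # hand center of mass
dist = np.linalg.norm(boundary - center, axis=1)
tips = local_maxima(dist)               # fingertip candidates
print(len(tips))
```

On a real contour the distance signal is noisier, which is why the paper analyzes the frequency content of the function before extracting its maxima.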

Rotation Invariant 3D Star Skeleton Feature Extraction (회전무관 3D Star Skeleton 특징 추출)

  • Chun, Sung-Kuk;Hong, Kwang-Jin;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.10
    • /
    • pp.836-850
    • /
    • 2009
  • Human posture recognition has attracted tremendous attention in ubiquitous environments, the performing arts, and robot control, so many researchers in pattern recognition and computer vision have recently worked on efficient posture recognition systems. However, most existing studies are very sensitive to human variations such as rotation or translation of the body, because the feature extracted in the first step of a general posture recognition system is influenced by these variations. To alleviate these variations and improve recognition results, this paper presents feature extraction methods based on the 3D Star Skeleton and Principal Component Analysis (PCA) in a multi-view environment. The proposed system uses eight projection maps, a kind of depth map, as input data; the projection maps are extracted during the visual hull generation process. From these data, the system constructs the 3D Star Skeleton and extracts a rotation-invariant feature using PCA. In the experiments, we extract the feature from the 3D Star Skeleton, recognize the human posture using it, and show that the proposed method is robust to human variations.
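
The way PCA removes rotation can be illustrated with a small sketch: expressing a point set in its own principal axes gives coordinates that do not change when the input is rotated. The 2-D point set here is an assumed stand-in for the paper's 3D Star Skeleton data, and the sign convention is one of several possible choices.

```python
import numpy as np

def pca_align(points):
    # Center the points, then express them in their principal axes;
    # the resulting coordinates are unchanged by rotation of the input
    X = points - points.mean(axis=0)
    cov = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    Y = X @ eigvecs[:, order]
    # Fix the sign of each axis so the representation is unique
    signs = np.sign(Y[np.abs(Y).argmax(axis=0), range(Y.shape[1])])
    return Y * signs

# An elongated 2-D point set and a rotated copy of it
pts = np.array([[0, 0], [4, 0], [4, 1], [0, 2], [2, 0.5]], float)
a = np.radians(30)
Rm = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
rotated = pts @ Rm.T

f1, f2 = pca_align(pts), pca_align(rotated)
print(np.allclose(f1, f2))  # the aligned coordinates coincide
```

This is the mechanism by which projecting the skeleton onto its principal components yields a feature robust to body rotation.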