• Title/Summary/Keyword: 손인식 (hand recognition)

Search Results: 877

Implementation of DID interface using gesture recognition (제스쳐 인식을 이용한 DID 인터페이스 구현)

  • Lee, Sang-Hun;Kim, Dae-Jin;Choi, Hong-Sub
    • Journal of Digital Contents Society / v.13 no.3 / pp.343-352 / 2012
  • In this paper, we implement a touchless interface for a DID (Digital Information Display) system using gesture recognition that combines hand-motion and hand-shape recognition. Because the interface requires no extra attachments, it offers the user both easier operation and greater spatial convenience. For hand-motion recognition, two motion parameters, slope and velocity, are measured in a direction-based scheme. For hand-shape recognition, the hand region is extracted using the YCbCr color model together with several image-processing methods. The two recognition methods are combined to generate commands such as next-page, previous-page, screen-up, screen-down, and mouse-click in order to control the DID system. Experimental results show a command recognition rate of 93%, which is sufficient to confirm applicability to commercial products.
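The skin-color extraction step mentioned above can be sketched with a YCbCr threshold; the Cb/Cr bounds below are commonly cited illustrative values, not the ones used in the paper:

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Return a boolean mask of likely skin pixels in an RGB image.

    Uses the ITU-R BT.601 RGB -> YCbCr conversion; the Cb/Cr box is an
    illustrative assumption, not the paper's exact threshold.
    """
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

# A skin-toned pixel falls inside the Cb/Cr box; a pure-blue pixel does not.
img = np.array([[[200, 140, 120], [0, 0, 255]]], dtype=np.uint8)
print(skin_mask_ycbcr(img))  # [[ True False]]
```

In practice the resulting mask would be cleaned with morphological operations before hand-shape classification.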

A Method of Hand Recognition for Virtual Hand Control of Virtual Reality Game Environment (가상 현실 게임 환경에서의 가상 손 제어를 위한 사용자 손 인식 방법)

  • Kim, Boo-Nyon;Kim, Jong-Ho;Kim, Tae-Young
    • Journal of Korea Game Society / v.10 no.2 / pp.49-56 / 2010
  • In this paper, we propose a method of controlling a virtual hand by recognizing the user's hand in a virtual reality game environment. After obtaining the movement and direction of the user's hand through camera input images, a virtual hand is displayed on the game screen, so that the user's hand movement can serve as an input interface for selecting and moving objects. As a vision-based hand recognition method, the proposed approach converts the input image from RGB to HSV color space and segments the hand region using a double threshold on the H and S values together with connected-component analysis. The center of gravity of the hand region is then computed from the zeroth- and first-order moments of the segmented area. Since the center of gravity lies near the center of the palm, the pixels farthest from it in the segmented image can be recognized as fingertips, and the hand axis is obtained as the vector from the center of gravity to the fingertips. A method using a history buffer and a bounding box is also presented to increase recognition stability and performance. Experiments on various input images show that the proposed hand recognition method provides highly accurate and relatively fast, stable results.
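The moment-based center of gravity and farthest-pixel fingertip step can be sketched on a binary mask; the toy "finger on a palm" input is hypothetical:

```python
import numpy as np

def hand_axis(mask):
    """Estimate palm center, a fingertip, and the hand axis from a
    binary hand mask.

    Center of gravity = first-order moments / zeroth-order moment; the
    fingertip is taken as the mask pixel farthest from that center, as
    the abstract describes.
    """
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                      # zeroth-order moment (area)
    cx, cy = xs.sum() / m00, ys.sum() / m00   # first-order moments / m00
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    tip = (xs[d2.argmax()], ys[d2.argmax()])
    axis = np.array([tip[0] - cx, tip[1] - cy])   # hand-axis vector
    return (cx, cy), tip, axis

# Toy mask: a vertical 'finger' on top of a square 'palm'.
mask = np.zeros((10, 7), dtype=bool)
mask[5:10, 1:6] = True   # palm
mask[0:5, 3] = True      # finger
center, tip, axis = hand_axis(mask)
print(tip)  # the fingertip at the top of the 'finger': (3, 0)
```

A history buffer, as in the paper, would smooth these estimates over consecutive frames.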

Design and Implementation of a Stereoscopic Image Control System based on User Hand Gesture Recognition (사용자 손 제스처 인식 기반 입체 영상 제어 시스템 설계 및 구현)

  • Song, Bok Deuk;Lee, Seung-Hwan;Choi, HongKyw;Kim, Sung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.3 / pp.396-402 / 2022
  • User interactions are being developed in various forms, and interactions using human gestures in particular are being actively studied. Among them, hand gesture recognition based on a 3D hand model is used as a human interface in the field of realistic media. Interfaces based on hand gesture recognition help users access media more easily and conveniently. User interaction using hand gesture recognition should allow images to be viewed by applying fast and accurate recognition technology without restrictions on the computing environment. This paper develops a fast and accurate user hand gesture recognition algorithm using the open-source MediaPipe framework and the k-nearest neighbor (k-NN) machine learning algorithm. In addition, to minimize restrictions imposed by the computing environment, a stereoscopic image control system based on user hand gesture recognition was designed and implemented using a web service environment capable of Internet service and a Docker container as a virtual environment.
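The k-NN classification stage can be sketched on landmark-derived feature vectors. In the paper the features come from MediaPipe hand landmarks; here tiny synthetic 2-D vectors and the class names "fist"/"open" stand in for them:

```python
import numpy as np

def knn_classify(train_x, train_y, query, k=3):
    """Classify a gesture feature vector by majority vote among its
    k nearest training vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[counts.argmax()]

# Two synthetic gesture classes: "fist" near the origin, "open" near (1, 1).
train_x = np.array([[0.0, 0.1], [0.1, 0.0], [0.05, 0.05],
                    [1.0, 0.9], [0.9, 1.0], [0.95, 0.95]])
train_y = np.array(["fist", "fist", "fist", "open", "open", "open"])
print(knn_classify(train_x, train_y, np.array([0.9, 0.85])))  # open
```

With real MediaPipe output, each training vector would be the flattened 21-landmark coordinates of one labeled gesture sample.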

Face Detection-based Hand Gesture Recognition in Color and Depth Images (색상 및 거리 영상에서의 얼굴검출 기반 손 제스처 인식)

  • Jeon, Hun-Ki;Ko, Jaepil
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.580-582 / 2012
  • In this paper, we propose a method that detects the hand region by combining real-time skin-color modeling based on face detection with depth information, together with a rule-based recognition method for directional and circle gestures according to hand movement. Unlike previous approaches, instead of using hand coordinates directly, gesture intervals are set using the difference between the hand coordinates in the previous and current frames, so that speed variations in natural gesture motions can be taken into account. Experimental data were collected from five subjects, each performing five gestures (four directions and a circle) ten times. Recognition experiments on these data showed a recognition rate of 97%.
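The rule-based direction decision from frame-to-frame coordinate differences can be sketched as follows; the movement threshold is an illustrative assumption, not the paper's value:

```python
import math

def classify_direction(prev, curr, min_move=10.0):
    """Rule-based 4-direction gesture from the coordinate difference
    between the previous and current frame hand positions.

    Returns None when the displacement is too small to count as a
    gesture segment (min_move is an assumed threshold).
    """
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    if math.hypot(dx, dy) < min_move:
        return None                       # too slow: not a gesture segment
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"     # image y grows downward

print(classify_direction((100, 100), (160, 110)))  # right
print(classify_direction((100, 100), (102, 101)))  # None
```

Circle gestures would additionally require accumulating the turning angle of successive displacement vectors over the gesture interval.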

A Hand-Gesture Recognition Using Structural Information of Hand (손의 구조적 정보를 이용한 지문자 영상의 인식)

  • 최성현;양윤모
    • Proceedings of the Korean Information Science Society Conference / 2001.04b / pp.502-504 / 2001
  • This paper proposes a method for recognizing fingerspelling images, among the sign-language motions performed by a signer, using 2D image processing. Because hands vary between individuals and the same motion is not always performed identically, structural information of the hand is used for fingerspelling recognition. Taking the extracted hand region as input, a Medial Axis Transform (MAT) is performed using the hand's contour information. Fingerspelled letters are recognized from the relation between the type-2 shocks corresponding to the fingers and the type-4 shock corresponding to the palm, according to changes in the resulting skeleton. Using this structural information removes inter-person variation in expression and copes stably with structural changes caused by contour noise when applying the MAT. Experiments with the proposed algorithm on 270 input images of 31 simple fingerspelled letters showed a recognition rate of 81.1%, rising to 91.1% when similar-shaped letters were merged into 26 classes.


Real-Time Hand Gesture Tracking & Recognition (실시간 핸드 제스처 추적 및 인식)

  • Ha, Jeong-Yo;Kim, Gye-Young;Choi, Hyung-Il
    • Proceedings of the Korean Society of Computer Information Conference / 2010.07a / pp.141-144 / 2010
  • This paper proposes a computer-vision-based algorithm that recognizes the shape of a human hand in real time. After basic preprocessing and skin-value detection, the user's skin color is detected, the arm and face regions are removed, only the hand region is extracted, and the center of gravity of the hand is computed. A Kalman filter is then used to track the hand trajectory, and a Hidden Markov Model is used to learn and then recognize six of the user's hand shapes. Experiments demonstrate the effectiveness of the proposed method.

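The Kalman-filter trajectory-tracking step can be sketched with a constant-velocity model over the 2-D hand centroid; the noise magnitudes below are illustrative, not from the paper:

```python
import numpy as np

class HandKalman:
    """Constant-velocity Kalman filter for a 2-D hand centroid.

    State vector: [x, y, vx, vy]; only (x, y) is observed each frame.
    """
    def __init__(self, q=1e-2, r=1.0):
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0   # x += vx, y += vy per frame
        self.H = np.eye(2, 4)               # observe position only
        self.Q = q * np.eye(4)              # process noise (assumed)
        self.R = r * np.eye(2)              # measurement noise (assumed)
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0           # high initial uncertainty

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with measurement z = (x, y)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                   # smoothed position estimate

kf = HandKalman()
for z in [(0, 0), (1, 0), (2, 0), (3, 0)]:  # hand moving steadily right
    est = kf.step(z)
print(est)  # close to (3, 0), having learned a velocity of ~1 px/frame
```

The smoothed trajectory, rather than the raw centroid, would then feed the HMM-based shape/motion recognizer.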

Vision-based 3D Hand Gesture Recognition for Human-Robot Interaction (휴먼-로봇 상호작용을 위한 비전 기반3차원 손 제스처 인식)

  • Roh, Myung-Cheol;Chang, Hye-Min;Kang, Seung-Yeon;Lee, Seong-Whan
    • Proceedings of the Korean Information Science Society Conference / 2006.10b / pp.421-425 / 2006
  • Interest in robots, including humanoid robots, has been growing recently. Accordingly, the importance of robot technologies that not only resemble humans in appearance but can also interact with people is being highlighted. One of the most efficient and natural methods for such interaction is vision-based gesture recognition. The most important part of gesture recognition is 3D gesture recognition, which recognizes hand shape and movement. This paper introduces a 3D hand-model estimation method and a command-gesture recognition system for recognizing 3D hand gestures, and proposes a framework extensible to sign language and fingerspelling.


A Study on Hand Region Detection for Kinect-Based Hand Shape Recognition (Kinect 기반 손 모양 인식을 위한 손 영역 검출에 관한 연구)

  • Park, Hanhoon;Choi, Junyeong;Park, Jong-Il;Moon, Kwang-Seok
    • Journal of Broadcast Engineering / v.18 no.3 / pp.393-400 / 2013
  • Hand shape recognition is a fundamental technique for implementing natural human-computer interaction. In this paper, we discuss a method for effectively detecting the hand region in Kinect-based hand shape recognition. Since the Kinect is a camera that captures color images and infrared (depth) images together, both can be exploited in hand region detection: a hand region can be found from pixels with skin color or from pixels at a specific depth. After analyzing the performance of each approach, we therefore need a method that properly combines the two to extract a clean silhouette of the hand region, because the hand shape recognition rate depends on the fineness of the detected silhouette. Finally, by comparing the hand shape recognition rates obtained with different hand region detection methods in general environments, we propose a high-performance hand region detection method.
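The combination idea, a pixel belongs to the hand only if it is skin-colored AND lies in the expected depth band, can be sketched directly on the two masks; the 400-700 mm band is an illustrative assumption:

```python
import numpy as np

def combine_masks(color_mask, depth_mm, near=400, far=700):
    """Combine a skin-color mask with a depth gate to isolate the hand.

    A pixel survives only if it both has skin color and lies within the
    assumed hand depth band in front of the Kinect (near/far in mm).
    """
    depth_mask = (depth_mm >= near) & (depth_mm <= far)
    return color_mask & depth_mask

color = np.array([[True, True], [False, True]])
depth = np.array([[500, 900], [600, 650]])   # depth values in mm
print(combine_masks(color, depth))
# (0,1) is skin-colored but too far; (1,0) is near but not skin-colored
```

The paper's contribution is choosing how to weight and merge the two cues so the silhouette stays fine enough for shape recognition; a plain AND is only the simplest combination.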

Real-time Handwriting Recognizer based on Partial Learning Applicable to Embedded Devices (임베디드 디바이스에 적용 가능한 부분학습 기반의 실시간 손글씨 인식기)

  • Kim, Young-Joo;Kim, Taeho
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.5 / pp.591-599 / 2020
  • Deep learning is widely used to classify or recognize real-world objects. An abundance of data is used to train a model on high-performance computers, and the trained model is then loaded into an inferencer. Because the inferencer is used in various environments, it may encounter unrecognized or low-accuracy objects. To solve this problem, real-world objects are collected and retrained periodically; however, not only is it difficult to improve the recognition rate immediately, it is also not easy to train an inferencer on embedded devices. We propose a real-time handwriting recognizer based on partial learning on embedded devices. The recognizer provides a training environment that partially learns on the embedded device at every user request, and its trained model is updated in real time. As this automatically improves the recognizer's intelligence, the recognition rate for previously unrecognized handwriting increases. We experimentally show that learning and inference are possible for 22 numbers and letters on RK3399 devices.
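The per-request update idea can be sketched with a model that supports incremental fitting. The paper's actual model is not specified here; a nearest-class-mean classifier with a running mean stands in for it:

```python
import numpy as np

class PartialLearner:
    """Nearest-class-mean classifier updatable one sample at a time.

    Sketches the 'partial learning at every user request' idea: each
    correction updates the class mean in O(1) without full retraining.
    """
    def __init__(self):
        self.means = {}    # label -> running mean feature vector
        self.counts = {}   # label -> number of samples seen

    def partial_fit(self, x, label):
        x = np.asarray(x, dtype=float)
        n = self.counts.get(label, 0)
        mean = self.means.get(label, np.zeros_like(x))
        self.means[label] = (mean * n + x) / (n + 1)   # incremental mean
        self.counts[label] = n + 1

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.means, key=lambda c: np.linalg.norm(self.means[c] - x))

m = PartialLearner()
m.partial_fit([0.0, 0.0], "0")
m.partial_fit([1.0, 1.0], "1")
print(m.predict([0.9, 1.1]))        # "1"
m.partial_fit([0.9, 1.1], "1")      # a user correction updates the model
```

On a real embedded device, the same pattern applies to a small neural network: only the last layer (or a per-class prototype) is updated on-device while the feature extractor stays frozen.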

A study on hand gesture recognition using 3D hand feature (3차원 손 특징을 이용한 손 동작 인식에 관한 연구)

  • Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.4 / pp.674-679 / 2006
  • In this paper, a gesture recognition system using 3D feature data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. The main novelty of the proposed system, with respect to other 3D gesture recognition techniques, is its capability for robust recognition of complex hand postures such as those encountered in sign language alphabets. This is achieved by explicitly employing 3D hand features. Moreover, the proposed approach does not rely on color information, and it guarantees robust segmentation of the hand under various illumination conditions and scene contents. Several novel 3D image analysis algorithms are presented, covering the complete processing chain: 3D image acquisition, arm segmentation, hand-forearm segmentation, hand pose estimation, 3D feature extraction, and gesture classification. The proposed system is tested in an application scenario involving the recognition of sign-language postures.