• Title/Summary/Keyword: hand feature extraction (손 특징 추출)

125 search results

Robot Control using Vision based Hand Gesture Recognition (비전기반 손 제스처 인식을 통한 로봇 컨트롤)

  • Kim, Dae-Soo;Kang, Hang-Bong
    • Proceedings of the Korea Information Processing Society Conference / 2007.11a / pp.197-200 / 2007
  • In this paper, we propose a vision-based hand gesture recognition method that recognizes several hand gestures from input images for a robot control system. The images received from the robot vary widely with the robot's position, the surroundings, lighting, and other factors. Our system recognizes a set of gestures predefined for robot control from images captured under these varying conditions. First, Retinex image normalization is applied to make recognition robust to illumination changes; the hand region is then detected in the YCrCb color space and its position estimated. Feature vectors extracted from the detected hand region allow the target gestures to be recognized regardless of the hand's size or rotation angle in the input image. The recognition results of the proposed method were evaluated against existing gesture recognition methods for robot control.

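
The paper's pipeline first normalizes illumination with Retinex, then thresholds the image in YCrCb to find the hand. Below is a minimal per-pixel sketch of the skin-detection step; the Cr/Cb ranges are commonly quoted defaults, not the paper's own values:

```python
def rgb_to_ycrcb(r, g, b):
    """ITU-R BT.601 RGB -> YCrCb conversion (the same constants OpenCV uses)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128
    cb = (b - y) * 0.564 + 128
    return y, cr, cb

def is_skin(r, g, b):
    """Classify a pixel as skin by thresholding Cr and Cb only, so the
    decision is largely independent of brightness (the Y channel)."""
    _, cr, cb = rgb_to_ycrcb(r, g, b)
    return 133 <= cr <= 173 and 77 <= cb <= 127
```

Because Y carries most of the brightness, thresholding only Cr/Cb is what makes this step tolerate moderate lighting changes.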

Robust Hand Tracking Using Kalman Filter and Feature Point (칼만 필터와 특징 정보를 이용한 손 움직임 추정 개선)

  • Seo, Bo-Kyung;Lee, Jang-Hee;Yoo, Suk-I.
    • Proceedings of the Korean Information Science Society Conference / 2010.06c / pp.516-520 / 2010
  • Demand for diverse forms of human-computer interaction interfaces keeps growing. Among them, interfaces based on the hand, which in everyday life points at objects and serves as a means of communication, are drawing particular attention. Most existing work locates the hand by finding the center point of the hand region in the input image, which can fail when the input is corrupted, for example when the hand is occluded by an object. To address this, we apply a Kalman filter so that the estimated center point is less sensitive to occluding objects. To further improve accuracy, fingertip points are extracted and fed into the Kalman filter's parameters. As a result, the hand position could be measured fairly accurately even in unexpected situations, and the simplicity of the pipeline makes it suitable for real-time applications.

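
The idea of trusting noisy measurements less during occlusion can be sketched with a one-dimensional Kalman filter on one coordinate of the hand center. The noise values and the occlusion handling here are illustrative, not the paper's parameters:

```python
class Kalman1D:
    """Constant-position Kalman filter for one coordinate of the hand center."""
    def __init__(self, q=0.01, r=1.0):
        self.x = 0.0   # state estimate (pixel coordinate)
        self.p = 1.0   # estimate variance
        self.q = q     # process noise
        self.r = r     # default measurement noise

    def update(self, z, r=None):
        """Fuse one measurement z; pass a larger r on occluded frames so
        the filter leans on its prediction instead of the bad center point."""
        r = self.r if r is None else r
        self.p += self.q                 # predict
        k = self.p / (self.p + r)        # Kalman gain
        self.x += k * (z - self.x)       # correct
        self.p *= (1.0 - k)
        return self.x
```

Feeding the fingertip features into the measurement noise (as the abstract describes reflecting them in the filter's parameters) is exactly the `r` override above: a poorly supported center measurement gets a large `r` and barely moves the estimate.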

Face Tracking Using Face Feature and Color Information (색상과 얼굴 특징 정보를 이용한 얼굴 추적)

  • Lee, Kyong-Ho
    • Journal of the Korea Society of Computer and Information / v.18 no.11 / pp.167-174 / 2013
  • In this paper, we find faces in color images and implement the ability to track them. Face tracking means locating face regions in an image using the functions of the computer system, and it is a necessary capability for robots. However, face tracking cannot be performed by skin-color extraction alone, because the face in an image varies with conditions such as lighting and facial expression. In this paper, we add a lighting compensation function to skin-color pixel extraction and implement a complete processing system that confirms a region as a face by finding the features of the eyes, nose, and mouth. The lighting compensation function is an adjusted sine function; although its output is not well suited to human vision, it improved results by about 4%. Facial features are detected by amplifying and reducing pixel values and comparing the resulting images; the eye and nose positions and the lips are detected. Face tracking efficiency was good.
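
The abstract does not give the exact "adjusted sine" curve, but one plausible shape is a sine-based tone curve that lifts dark pixels while fixing both endpoints; this is a guess at the idea, not the paper's function:

```python
import math

def sine_compensate(v):
    """Hypothetical 'adjusted sine' tone curve on pixel intensities (0..255):
    monotone, fixes 0 and 255, and brightens shadows and mid-tones."""
    return round(255 * math.sin(math.pi / 2 * v / 255))
```

Such a curve trades perceptual fidelity for more uniform skin-pixel statistics, consistent with the abstract's remark that the result is not suited to human vision yet helps detection.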

3D Data Dimension Reduction for Efficient Feature Extraction in Posture Recognition (포즈 인식에서 효율적 특징 추출을 위한 3차원 데이터의 차원 축소)

  • Kyoung, Dong-Wuk;Lee, Yun-Li;Jung, Kee-Chul
    • The KIPS Transactions:PartB / v.15B no.5 / pp.435-448 / 2008
  • 3D posture recognition is a solution to overcome the limitations of 2D posture recognition, and much research has been carried out on it using 3D data. The 3D data consist of massive numbers of surface points that are rich in information, but extracting the features that matter for posture recognition from them is difficult, and processing them consumes a great deal of time. In this paper, we introduce a dimension reduction method that transforms the 3D surface points of an object into a 2D data representation in order to overcome the feature extraction and time complexity issues of 3D posture recognition. For better feature extraction and matching, a cylindrical boundary is introduced into meshless parameterization; it offers fast dimension reduction, and its output is directly usable for recognition. The proposed approach is applied to hand and human posture recognition to verify the efficiency of the feature extraction.
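
The core idea of unrolling 3D surface points against a cylindrical boundary can be sketched as a plain cylindrical projection onto a (theta, height) chart. The paper's meshless parameterization is more elaborate; this only shows the dimension-reduction step:

```python
import math

def cylindrical_chart(points):
    """Map 3-D surface points (x, y, z) around a vertical cylinder onto a
    2-D (theta, height) chart, discarding the radial coordinate."""
    return [(math.atan2(z, x), y) for (x, y, z) in points]
```

Feature extraction and matching then run on the 2D chart, which is far cheaper than working on the raw point cloud.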

A Study on the Extraction of Nail's Region from PC-based Hand-Geometry Recognition System Using GA (GA를 이용한 PC 기반 Hand-Geometry 인식시스템의 Nail 영역 추출에 관한 연구)

  • Kim, Young-Tak;Kim, Soo-Jong;Park, Ju-Won;Lee, Sang-Bae
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.4 / pp.506-511 / 2004
  • Biometrics has been getting more and more attention in recent years for security and other concerns. So far, only fingerprint recognition has seen limited success for on-line security checks, since other biometric verification and identification systems require more complicated and expensive acquisition interfaces and recognition processes. Hand geometry has been used for biometric verification and identification because of its acquisition convenience and good performance, which makes it a good candidate for on-line checks. This paper therefore proposes a Hand-Geometry recognition system based on the geometrical features of the hand. From an anatomical point of view, the human hand can be characterized by its length, width, thickness, geometrical composition, shape of the palm, and shape and geometry of the fingers; we propose thirty relevant features for the recognition system. During experimentation, however, we discovered that length measured from the tip of the finger is not a reliable feature. We therefore propose a new technique based on a Genetic Algorithm for extracting the center of the nail bottom, in order to use it for the length feature.
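
A GA search of the kind the paper uses can be sketched generically. In the paper, individuals would encode candidate nail-bottom center coordinates and the fitness would score them against the image; here both are stand-ins on a 1-D toy problem:

```python
import random

def ga_maximize(fitness, lo, hi, pop_size=30, gens=40, seed=1):
    """Tiny real-valued GA: tournament selection, blend crossover,
    Gaussian mutation, clamped to [lo, hi]."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = max(rng.sample(pop, 3), key=fitness)   # tournament parent 1
            b = max(rng.sample(pop, 3), key=fitness)   # tournament parent 2
            child = 0.5 * (a + b) + rng.gauss(0, 0.05 * (hi - lo))
            nxt.append(min(hi, max(lo, child)))
        pop = nxt
    return max(pop, key=fitness)
```

A fitness peaked at the true nail-bottom center (e.g. an edge-response score) turns this into the extraction step the paper describes.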

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications / v.35 no.12 / pp.780-788 / 2008
  • Modeling hand poses and tracking their movement is one of the challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras from which images are captured: capturing from multiple cameras or a stereo camera, or capturing from a single camera. The former is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper we propose a method of reconstructing 3D hand poses from a 2D input image sequence captured by a single camera using Belief Propagation in a graphical model, and of recognizing a finger-clicking motion using a hidden Markov model. We define a graphical model whose hidden nodes represent the joints of a hand and whose observable nodes carry the features extracted from the 2D input image sequence. To track hand poses in 3D, we use a Belief Propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract the motion information for each finger, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, and it achieved a high recognition rate of 94.66% on 300 test samples.
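
The HMM stage can be decoded with the standard Viterbi algorithm. The sketch below uses made-up "rest"/"click" states and probabilities to show how per-finger motion features would map to an action sequence; none of the numbers are the paper's:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state path for an observation sequence."""
    v = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        v.append({})
        new_path = {}
        for s in states:
            prob, prev = max((v[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            v[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    return path[max(states, key=lambda s: v[-1][s])]

# Hypothetical two-state finger-action model.
states = ('rest', 'click')
start = {'rest': 0.9, 'click': 0.1}
trans = {'rest': {'rest': 0.8, 'click': 0.2},
         'click': {'rest': 0.3, 'click': 0.7}}
emit = {'rest': {'still': 0.9, 'moving': 0.1},
        'click': {'still': 0.2, 'moving': 0.8}}
decoded = viterbi(['still', 'still', 'moving', 'moving', 'still'],
                  states, start, trans, emit)
```

The transition probabilities are what let the model smooth over single noisy frames instead of flickering between actions.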

Recognition of Finger Language Using FCM Algorithm (FCM 알고리즘을 이용한 지화 인식)

  • Kim, Kwang-Baek;Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.6 / pp.1101-1106 / 2008
  • People with hearing difficulties struggle to interact satisfactorily with hearing people because there are few opportunities to communicate: most hearing people cannot understand sign language, the gesture-based system that people with hearing difficulties use as a principal means of communication. In this paper, we propose a finger language (fingerspelling) recognition method using the FCM algorithm to make such communication possible. In the proposed method, skin regions are extracted from camera images using the YCbCr and HSI color spaces, and the locations of the two hands are traced by applying a 4-directional edge tracking algorithm to the extracted skin regions. Final hand regions are obtained from the traced regions by removing noise using morphological information, and are then classified and recognized by the FCM algorithm. In experiments on finger language images acquired by a camera, we verified that the proposed method effectively extracts the two hand regions and recognizes the finger language.
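
The FCM (Fuzzy C-Means) classifier stage can be sketched in a few lines. The paper clusters hand-region feature vectors; 1-D scalar features keep this sketch short, and the standard FCM membership/center updates are used:

```python
import random

def fcm(data, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy C-Means on 1-D feature values; returns the sorted cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(data, c)
    n = len(data)
    for _ in range(iters):
        # Membership of each point in each cluster (standard FCM update).
        u = []
        for x in data:
            d = [abs(x - ck) + 1e-12 for ck in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # Recompute centers as membership-weighted means.
        centers = [sum(u[k][i] ** m * data[k] for k in range(n)) /
                   sum(u[k][i] ** m for k in range(n)) for i in range(c)]
    return sorted(centers)

centers = fcm([0.9, 1.0, 1.1, 8.9, 9.0, 9.1])
```

Unlike hard k-means, each sample keeps a graded membership in every cluster, which is what makes FCM tolerant of ambiguous hand shapes near a class boundary.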

A Robust Hand Recognition Method to Variations in Lighting (조명 변화에 안정적인 손 형태 인지 기술)

  • Choi, Yoo-Joo;Lee, Je-Sung;You, Hyo-Sun;Lee, Jung-Won;Cho, We-Duke
    • The KIPS Transactions:PartB / v.15B no.1 / pp.25-36 / 2008
  • In this paper, we present a hand recognition approach that is robust to sudden illumination changes. The proposed approach constructs a background model of hue and hue gradient in HSI color space and extracts the foreground hand region from an input image by background subtraction. Eighteen features are defined for each hand pose, and a multi-class SVM (Support Vector Machine) is applied to learn and classify hand poses from them. By incorporating the hue gradient into the background subtraction, the approach robustly extracts the hand contour under varying illumination. A hand pose is described by two eigenvalues normalized by the size of the OBB (oriented bounding box) and by sixteen feature values counting the hand contour points that fall in each subrange of the OBB. We compared RGB-based background subtraction, hue-based background subtraction, and the proposed approach under sudden illumination changes, and demonstrated the robustness of the proposed approach. In the experiment, we built a hand pose training model from 2,700 sample hand images of six subjects representing the numbers one to nine; with this model, 92.6% of 1,620 hand images taken under various lighting conditions were recognized successfully.
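
Working in hue rather than RGB is what gives the background subtraction its illumination robustness: hue is an angle that changes little when brightness shifts. A minimal per-pixel sketch, with circular hue distance and a hypothetical threshold (the paper additionally models the hue gradient):

```python
def hue_distance(h1, h2):
    """Circular distance between two hue angles in degrees."""
    d = abs(h1 - h2) % 360
    return min(d, 360 - d)

def foreground_mask(frame_hue, bg_hue, tau=20):
    """Mark pixels whose hue moved more than tau degrees away from the
    background model; tau is an illustrative threshold."""
    return [[hue_distance(h, b) > tau for h, b in zip(row, brow)]
            for row, brow in zip(frame_hue, bg_hue)]

mask = foreground_mask([[90, 12]], [[300, 10]])
```

A sudden light change shifts intensity strongly but hue only slightly, so background pixels stay below `tau` while a hand entering the scene exceeds it.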

3D On-line Handwriting Character Recognition System for Wearable Devices (웨어러블 장치를 위한 3D 온라인 필기인식 시스템)

  • Kim, Minji;Choi, Lynn
    • Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.1100-1103 / 2014
  • In this paper, we propose a 3D online handwriting recognition system that can serve as a pen-type or finger-mounted input interface for wearable devices. Using an input device equipped with a 3-axis accelerometer and a gyro sensor, the user can enter characters into a wearable or smart device through hand movements. The proposed system restores the writing path to extract strokes, then extracts feature points while accounting for the tilt, distortion, and overlapped writing that occur when characters are written in 3D space. The extracted feature points are fed into a two-stage decision tree to recognize the alphabet letter written in the air. On 780 characters collected from 10 users, each writing the character set three times, the system achieved a recognition rate of 87.69%.
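
The path-restoration step is, at its core, dead reckoning: double-integrating acceleration into a trajectory. The sketch below shows only that basic step; the real system also uses the gyro to correct tilt and must fight integration drift:

```python
def reconstruct_path(accel, dt=0.01):
    """Naively double-integrate 2-D acceleration samples (m/s^2) into a
    pen trajectory, starting at rest at the origin."""
    vx = vy = x = y = 0.0
    path = [(0.0, 0.0)]
    for ax, ay in accel:
        vx += ax * dt          # integrate acceleration -> velocity
        vy += ay * dt
        x += vx * dt           # integrate velocity -> position
        y += vy * dt
        path.append((x, y))
    return path

# 1 m/s^2 along x for one second (100 samples at 100 Hz).
path = reconstruct_path([(1.0, 0.0)] * 100)
```

Any constant sensor bias grows quadratically in position under this scheme, which is why stroke segmentation and feature points, rather than raw coordinates, drive the recognition.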

Automatic Hand Tracking System using Skin Color Histogram (피부색 히스토그램 검출을 통해 향상된 자동 손 추적 시스템)

  • Kim, Beom-Joon;Shin, Byeong-Seok
    • Proceedings of the Korea Information Processing Society Conference / 2015.10a / pp.1477-1479 / 2015
  • As in previous work, adjusting the color space to extract an exact skin-color region can produce wrong results depending on lighting and the surrounding environment. Tracking with the Camshift algorithm also lacks generality when it does not use a skin-color histogram tailored to the target. To solve these problems, we improve hand skin-color tracking by determining the Camshift algorithm's initial tracking window and its histogram. A generic skin-color filter extracts the human foreground, and Haar-like feature detection locates the hand region. The original image is then masked with the binary image produced by the skin-color filter, and a histogram of the user's own skin color is computed. Applying this histogram to the Camshift algorithm yields better tracking performance than histograms generated in the conventional way.
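
The per-user histogram and its back-projection, which Camshift tracks, can be sketched as follows. Bin count and the degree-valued hues are illustrative; a real implementation would use OpenCV's `calcHist`/`calcBackProject` on the masked hand pixels:

```python
def hue_histogram(hues, bins=18):
    """Normalized histogram of hue values (degrees) sampled from inside
    the detected hand region; this is the user-specific model."""
    width = 360 // bins
    hist = [0] * bins
    for h in hues:
        hist[min(bins - 1, int(h) // width)] += 1
    total = sum(hist) or 1
    return [v / total for v in hist]

def back_projection(frame_hues, hist, bins=18):
    """Score each pixel by the histogram weight of its hue bin; Camshift
    shifts its window toward the densest region of these scores."""
    width = 360 // bins
    return [hist[min(bins - 1, int(h) // width)] for h in frame_hues]

hist = hue_histogram([5, 6, 7, 100])        # mostly reddish, skin-like hues
scores = back_projection([5, 100, 200], hist)
```

Building `hist` from the masked hand pixels, instead of a generic skin model, is what tailors the tracker to the individual user's skin tone.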