• Title/Summary/Keyword: Hand Pose Recognition


Hand pose recognition on Table Top Display (테이블 탑 디스플레이 환경에서 손 형상 인식)

  • Kim, Hyung-Kwan;Lee, Yang-Weon;Lee, Chil-Woo
    • Proceedings of the IEEK Conference / 2008.06a / pp.719-720 / 2008
  • Table-top displays that move beyond the mouse and keyboard to intuitive hand input mostly rely on touch information. If hand shapes and gestures could be used in addition to direct touch, the system could be controlled more freely. This paper describes hand shape recognition on a table-top display.


Photon-counting linear discriminant analysis for face recognition at a distance

  • Yeom, Seok-Won
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.3 / pp.250-255 / 2012
  • Face recognition has wide applications in security and surveillance systems as well as in robot vision and machine interfaces. Conventional challenges in face recognition include pose, illumination, and expression, and face recognition at a distance involves additional challenges because long-distance images are often degraded by poor focusing and motion blur. This study investigates the effectiveness of applying photon-counting linear discriminant analysis (Pc-LDA) to face recognition in harsh environments. A related technique, Fisher linear discriminant analysis, is known to be optimal, but it often suffers from the singularity problem because the number of available training images is generally much smaller than the number of pixels. Pc-LDA, on the other hand, realizes the Fisher criterion in high-dimensional space without any dimensionality reduction, and therefore provides solutions that are more invariant to distortion and degradation. Two decision rules are employed: one based on Euclidean distance, the other on normalized correlation. In the experiments, the asymptotic equivalence of the photon-counting method to the Fisher method is verified with simulated data. Degraded facial images are used to demonstrate the robustness of the photon-counting classifier in harsh environments. Four types of blurring point spread functions are applied to the test images to simulate long-distance acquisition. The results are compared with those of conventional Eigenface and Fisherface methods and indicate that Pc-LDA outperforms these conventional face recognition techniques.
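The abstract above contrasts the Fisher criterion with photon-counting LDA and names two decision rules. The sketch below is not the paper's Pc-LDA; it is only a plain LDA baseline illustrating the two decision rules mentioned, nearest class mean by Euclidean distance and by normalized correlation. Data shapes, the scikit-learn solver, and the function name `classify` are assumptions.

```python
# Minimal sketch (not the paper's Pc-LDA): ordinary LDA projection plus the two
# decision rules named in the abstract. Shapes and solver choice are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classify(train_X, train_y, test_X, rule="euclidean"):
    # Project onto the (C-1)-dimensional Fisher subspace; the SVD solver avoids
    # explicitly inverting the within-class scatter matrix.
    lda = LinearDiscriminantAnalysis(solver="svd")
    Z = lda.fit_transform(train_X, train_y)
    Zt = lda.transform(test_X)
    classes = np.unique(train_y)
    means = np.stack([Z[train_y == c].mean(axis=0) for c in classes])
    if rule == "euclidean":
        # Nearest class mean by Euclidean distance in the projected space.
        d = np.linalg.norm(Zt[:, None, :] - means[None, :, :], axis=2)
        return classes[d.argmin(axis=1)]
    # Normalized correlation: cosine similarity between sample and class mean.
    num = Zt @ means.T
    den = np.linalg.norm(Zt, axis=1, keepdims=True) * np.linalg.norm(means, axis=1)
    return classes[(num / den).argmax(axis=1)]
```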

A Study on User Interface for Quiz Game Contents using Gesture Recognition (제스처인식을 이용한 퀴즈게임 콘텐츠의 사용자 인터페이스에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Digital Contents Society / v.13 no.1 / pp.91-99 / 2012
  • In this paper we introduce a quiz application program that digitizes the analogue quiz game. We digitize the quiz components that are performed manually in a conventional quiz game, such as quiz proceeding, participant recognition, problem presentation, recognition of the volunteer who raises a hand first, answer judgement, score addition, and winner decision. For automation, we obtained depth images from a Kinect camera, located the quiz participants, and recognized a set of user-friendly gestures. By analyzing the depth distribution, we detected and segmented the upper-body parts and located the hand regions. We also extracted hand features and designed a decision function that classifies the hand pose as palm, fist, or neither, so that a participant can select the desired answer among the presented choices. The implemented quiz application was tested in real time and showed very satisfactory gesture recognition results.
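As a rough illustration of the palm/fist/else decision described above, the following sketch classifies a binary hand mask by contour solidity (contour area over convex-hull area). This is not the paper's feature set or decision function; the thresholds and the minimum-area check are assumptions.

```python
# Hedged sketch: classify a binary hand mask as palm, fist, or else by contour
# solidity. Thresholds are illustrative, not the paper's values.
import cv2

def classify_hand(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "else"
    hand = max(contours, key=cv2.contourArea)      # largest blob = hand candidate
    area = cv2.contourArea(hand)
    hull_area = cv2.contourArea(cv2.convexHull(hand))
    if area < 500 or hull_area == 0:               # too small to be a hand
        return "else"
    solidity = area / hull_area
    # A closed fist is nearly convex (high solidity); an open palm with spread
    # fingers leaves large concavities between the fingers (low solidity).
    if solidity > 0.90:
        return "fist"
    if solidity < 0.75:
        return "palm"
    return "else"
```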

Development of a Hand Pose Rally System Based on Image Processing

  • Suganuma, Akira;Nishi, Koki
    • IEIE Transactions on Smart Processing and Computing / v.4 no.5 / pp.340-348 / 2015
  • The "stamp rally" is an event that participants go the round with predetermined points for the purpose of collecting stamps. They bring the stamp card to these points. They, however, sometimes leave or lose the card. In this case, they may not reach the final destination of the stamp rally. The purpose of this research is the construction of the stamp rally system which distinguishes each participant with his or her hand instead of the stamp card. We have realized our method distinguishing a hand posture by the image processing. We have also evaluated it by 30 examinees. Furthermore, we have designed the data communication between the server and the checkpoint to implement our whole system. We have also designed and implemented the process for the registering participant, the passing checkpoint and the administration.

Depth Image based Egocentric 3D Hand Pose Recognition for VR Using Mobile Deep Residual Network (모바일 Deep Residual Network을 이용한 뎁스 영상 기반 1 인칭 시점 VR 손동작 인식)

  • Park, Hye Min;Park, Na Hyeon;Oh, Ji Heon;Lee, Cheol Woo;Choi, Hyoung Woo;Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.1137-1140 / 2019
  • Human-computer interface technology is essential for virtual reality (VR), augmented reality (AR), and mixed reality (MR). Hand pose recognition in particular enables intuitive interaction and can serve as a convenient controller in many applications. In this work, we build a hand pose database generation system for depth-image-based egocentric hand pose recognition and capture an egocentric-viewpoint database for training the recognizer. We then implement a deep-learning Deep Residual Network for depth-based egocentric hand pose recognition (HPR) on a mobile head-mounted-device (HMD) VR platform. Finally, we port the trained Residual Network regressor to an Android mobile device and run a real-time hand pose recognition system in mobile VR, verifying real-time 3D hand pose recognition through interaction with virtual objects.
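A minimal sketch of the kind of depth-to-pose residual regressor the abstract describes, written in PyTorch. The block layout, channel widths, and 21-joint output are illustrative assumptions, not the authors' network.

```python
# Illustrative residual CNN that regresses 3D joint positions from a depth map.
# Architecture details are assumptions, not the paper's model.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)                  # identity shortcut

class HandPoseRegressor(nn.Module):
    def __init__(self, joints=21):
        super().__init__()
        self.joints = joints
        self.stem = nn.Sequential(nn.Conv2d(1, 32, 7, stride=2, padding=3),
                                  nn.BatchNorm2d(32), nn.ReLU(inplace=True),
                                  nn.MaxPool2d(2))
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, joints * 3))

    def forward(self, depth):                      # depth: (N, 1, H, W) tensor
        return self.head(self.blocks(self.stem(depth))).view(-1, self.joints, 3)
```

For example, feeding a (1, 1, 96, 96) depth tensor through `HandPoseRegressor()` yields a (1, 21, 3) array of joint coordinates.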

Hidden Markov Model for Gesture Recognition (제스처 인식을 위한 은닉 마르코프 모델)

  • Park, Hye-Sun;Kim, Eun-Yi;Kim, Hang-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.1 s.307 / pp.17-26 / 2006
  • This paper proposes a novel hidden Markov model (HMM)-based gesture recognition method and applies it to an HCI that controls a computer game. The novelty of the proposed method is two-fold: 1) it uses a continuous stream of human motion as the input to the HMM instead of isolated or pre-segmented data sequences, and 2) gesture segmentation and recognition are performed simultaneously. The proposed method consists of a single HMM composed of thirteen gesture-specific HMMs that independently recognize certain gestures. It takes a continuous stream of pose symbols as input, where a pose is composed of coordinates indicating the face, left hand, and right hand. Whenever a new input pose arrives, the HMM continuously updates its state probabilities and recognizes a gesture if the probability of a distinctive state exceeds a predefined threshold. To assess the validity of the proposed method, it was applied to a real game, Quake II, and the results demonstrated that the proposed HMM provides very useful information to enhance the discrimination between different classes and reduce the computational cost.
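The recognition scheme described above, continuously updating HMM state probabilities over a stream of pose symbols and firing when a distinctive state passes a threshold, can be sketched with a plain forward update. Matrix shapes, the end-state map, and the threshold below are assumptions, not the paper's thirteen-gesture model.

```python
# Online forward update over a discrete-symbol HMM; a gesture is reported when
# the normalized probability of its distinctive state exceeds a threshold.
import numpy as np

class OnlineGestureHMM:
    def __init__(self, A, B, pi, end_states, threshold=0.8):
        self.A = A                        # (S, S) state-transition matrix
        self.B = B                        # (S, V) symbol-emission matrix
        self.alpha = pi.copy()            # current normalized state probabilities
        self.end_states = end_states      # {gesture_name: distinctive state index}
        self.threshold = threshold        # assumed value

    def update(self, symbol):
        # One forward step for the newly arrived pose symbol (integer in [0, V)).
        self.alpha = (self.alpha @ self.A) * self.B[:, symbol]
        self.alpha /= self.alpha.sum()    # renormalize to avoid underflow
        for gesture, state in self.end_states.items():
            if self.alpha[state] > self.threshold:
                return gesture            # gesture segmented and recognized
        return None                       # keep accumulating evidence
```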

Motion Plane Estimation for Real-Time Hand Motion Recognition (실시간 손동작 인식을 위한 동작 평면 추정)

  • Jeong, Seung-Dae;Jang, Kyung-Ho;Jung, Soon-Ki
    • The KIPS Transactions:PartB / v.16B no.5 / pp.347-358 / 2009
  • In this thesis, we develop a vision-based hand motion recognition system using a camera with two rotational motors. Existing systems were implemented with a range camera or multiple cameras and have a limited working area. In contrast, we use an uncalibrated camera and obtain a wider working area through pan-tilt motion. Given the image sequence provided by the pan-tilt camera, color and pattern information are integrated into a tracking system to find the 2D position and direction of the hand. From this pose information, we estimate the 3D motion plane on which the gesture trajectory approximately lies. The 3D trajectory of the moving fingertip is projected onto the motion plane, which enhances the resolving power for linear gesture patterns. We have tested the proposed approach in terms of the accuracy of the trace angle and the size of the working volume.
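The motion-plane step described above can be sketched as an SVD plane fit followed by projection of the fingertip trajectory into the plane's 2D basis; the function names and the least-variance normal choice are illustrative, not the paper's exact estimator.

```python
# Fit a plane to a 3D fingertip trajectory and express the trajectory in 2D
# coordinates within that plane.
import numpy as np

def fit_motion_plane(points):
    # points: (N, 3) fingertip positions. The plane passes through the centroid;
    # its normal is the direction of least variance (smallest singular vector).
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    u, v, normal = vt[0], vt[1], vt[2]
    return centroid, u, v, normal

def project_to_plane(points, centroid, u, v):
    # 2D coordinates of each trajectory point in the in-plane basis (u, v).
    rel = points - centroid
    return np.stack([rel @ u, rel @ v], axis=1)
```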

Effective Hand-Pose Recognition using Multi-Class SVM (다중 클래스 SVM을 이용한 효과적인 손 형태 인식)

  • Byeon, Jae-Hee;Nam, Yun-Young;Choi, Yoo-Joo
    • Proceedings of the Korean Information Science Society Conference / 2007.10c / pp.501-504 / 2007
  • This paper presents a method for effectively recognizing hand shapes using a multi-class SVM. As research on human-computer interaction grows, the question of how accurately a computer can recognize human behavior is studied continuously. In this work, the hand region is extracted from hand images captured in real time using a color model based on hue and saturation to reduce the influence of illumination; in particular, when the captured region includes the arm, only the hand region beyond the wrist is extracted. To recognize the hand shape, the hand region is described by 18 feature values, and a multi-class SVM trained on these features classifies the hand shape.
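As a rough illustration of the pipeline in this abstract, the sketch below segments the hand with hue/saturation thresholds and classifies a fixed-length shape descriptor with a multi-class SVM. The HS thresholds and the Hu-moment descriptor are stand-ins for the paper's 18 hand features, and the wrist-cropping step is omitted.

```python
# Hedged sketch: HS-based skin segmentation plus a multi-class SVM on a simple
# shape descriptor. Threshold values and features are assumptions.
import cv2
import numpy as np
from sklearn.svm import SVC

def hand_mask(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Skin-like hue/saturation range (assumed values; tune per camera/lighting).
    return cv2.inRange(hsv, (0, 40, 0), (25, 180, 255))

def shape_features(mask):
    m = cv2.moments(mask, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()               # 7 rotation-invariant moments
    return np.sign(hu) * np.log1p(np.abs(hu))     # log scale for comparable range

# scikit-learn's SVC handles multi-class problems with one-vs-one voting.
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
# clf.fit(np.stack([shape_features(m) for m in train_masks]), train_labels)
# pred = clf.predict(shape_features(hand_mask(frame)).reshape(1, -1))
```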


Hand Pose and Gesture Recognition Using Infrared Sensor (적외선 센서를 사용한 손 동작 인식)

  • Ahn, Joon-young;Lee, Sang-hwa;Cho, Nam-ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.119-122 / 2016
  • In building the augmented reality (AR) and virtual reality (VR) environments regarded as future IT technologies, natural user interface (NUI) techniques that let users give commands to a device without separate hardware such as a mouse or keyboard are drawing attention. Hand gesture recognition and face recognition are among the key technologies for implementing an NUI. This paper implements hand gesture recognition using the Leapmotion sensor, a type of infrared sensor. First, the center of the palm is found using a distance-transform matrix. Each finger is then extracted using a convex hull algorithm. The proposed algorithm computes optical flow for the finger and palm regions and uses the characteristics of the two flows to distinguish moving, stopping, and clicking motions of the hand.
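The two geometric steps mentioned in the abstract, locating the palm center with a distance transform and extracting fingers with a convex hull, can be sketched with OpenCV as below. The move/stop/click decision from optical flow is omitted, and the code assumes an 8-bit binary hand mask rather than raw Leapmotion output.

```python
# Palm center from the distance transform; fingertip candidates from the convex
# hull of the hand contour. Input is assumed to be an 8-bit binary hand mask.
import cv2

def palm_center(mask):
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, radius, _, center = cv2.minMaxLoc(dist)     # deepest interior point
    return center, radius                          # (x, y) and palm radius

def fingertip_candidates(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand)
    return [tuple(p[0]) for p in hull]             # hull vertices near fingertips
```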


Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features (인간의 비언어적 행동 특징을 이용한 다중 사용자의 상호작용 의도 분석)

  • Yun, Sang-Seok;Kim, Munsang;Choi, Mun-Taek;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.19 no.8 / pp.738-744 / 2013
  • According to cognitive science research, the interaction intent of humans can be estimated through an analysis of their representative behaviors. This paper proposes a methodology for reliable intention analysis of humans based on this approach. To identify intention, eight behavioral features are extracted from four characteristics of human-human interaction, and we outline a set of core components of nonverbal human behavior. These nonverbal behaviors are associated with recognition modules over multimodal sensors: localizing the speaker's sound source in the auditory part, recognizing the frontal face and facial expression in the vision part, and estimating human trajectories, body pose and leaning, and hand gestures in the spatial part. As a post-processing step, temporal confidence reasoning is used to improve recognition performance, and an integrated human model quantitatively classifies the intention from the multi-dimensional cues by applying weight factors. Interactive robots can thus make informed engagement decisions to interact effectively with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction.
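The fusion step described above, combining temporally smoothed cue confidences with weight factors into an engagement decision, might look like the following sketch. The cue names, weights, smoothing factor, and threshold are illustrative assumptions, not the paper's integrated human model.

```python
# Weighted fusion of smoothed nonverbal-cue confidences into an engage decision.
# All constants are assumed for illustration.
WEIGHTS = {"sound_direction": 0.15, "frontal_face": 0.25, "expression": 0.10,
           "trajectory": 0.15, "body_lean": 0.15, "hand_gesture": 0.20}

class IntentEstimator:
    def __init__(self, smoothing=0.7, threshold=0.6):
        self.smoothing = smoothing                  # exponential temporal smoothing
        self.threshold = threshold
        self.state = {cue: 0.0 for cue in WEIGHTS}

    def update(self, confidences):
        # confidences: {cue_name: value in [0, 1]} from the recognition modules.
        for cue in WEIGHTS:
            c = confidences.get(cue, self.state[cue])
            self.state[cue] = self.smoothing * self.state[cue] + (1 - self.smoothing) * c
        score = sum(WEIGHTS[cue] * self.state[cue] for cue in WEIGHTS)
        return score > self.threshold, score        # (engage?, intent score)
```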