• Title/Summary/Keyword: Pointing region estimation

Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.290-293 / 2009
  • In this paper, we propose a method to estimate the pointing region in the real world from camera images. In general, an arm-pointing gesture encodes a direction that extends from the user's fingertip to a target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. The proposed method therefore extracts two end points for estimating the pointing direction: one from the user's face and the other from the user's fingertip region. The pointing direction and its target region are then estimated from the 2D-3D projective mapping between the camera images and the real-world scene. To demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) that employs two CCD cameras and a monitor. The accuracy and robustness of the proposed method are verified by experiments on several real video sequences. (A minimal code sketch of the ray-plane step follows this entry.)

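A minimal sketch of the ray-plane step described in the entry above, assuming the face and fingertip have already been lifted to 3D (e.g. from the paper's two-camera setup): the face-to-fingertip line is extended and intersected with a known target plane. All names and numeric values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def intersect_pointing_ray_with_plane(face, fingertip, plane_point, plane_normal):
    """Extend the face->fingertip line and intersect it with a target plane.

    The entry above models the pointing direction as the straight line
    through the user's face and fingertip; the pointed region is where
    that line meets a known real-world plane (screen, poster wall, ...).
    """
    direction = fingertip - face
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None                    # ray (nearly) parallel to the plane
    t = np.dot(plane_normal, plane_point - face) / denom
    if t < 0:
        return None                    # plane lies behind the user
    return face + t * direction

# Hypothetical example: user points toward a wall at z = 3 m.
face = np.array([0.0, 1.6, 0.0])       # head position (metres)
fingertip = np.array([0.3, 1.4, 0.6])  # fingertip position
print(intersect_pointing_ray_with_plane(
    face, fingertip,
    plane_point=np.array([0.0, 0.0, 3.0]),
    plane_normal=np.array([0.0, 0.0, 1.0])))  # -> [1.5 0.6 3.]
```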

Real-time Implementation and Application of Pointing Region Estimation System using 3D Geometric Information in Real World (실세계 3차원 기하학 정보를 이용한 실시간 지시영역 추정 시스템의 구현 및 응용)

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Kim, Jin-Tae;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.2 / pp.29-36 / 2008
  • In this paper, we propose a real-time method to estimate a pointing region from two camera images. In general, the pointing target lies in the direction the face is turned when a person points at something. We therefore regard the pointing direction as the straight line that connects the face position with the fingertip position. The method first extracts one point each in the face and fingertip regions by detecting human skin color, and then uses 3D geometric information to estimate the pointing direction and its target region. To evaluate the performance, we built an ICIGS (Interactive Cinema Information Guiding System) with two cameras and a beam projector. (A skin-segmentation sketch follows this entry.)
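A rough sketch of the skin-color extraction step, assuming OpenCV and a commonly used YCrCb threshold; the threshold values, minimum area, and morphological clean-up are illustrative stand-ins, not the paper's settings.

```python
import cv2
import numpy as np

def skin_region_centroids(frame_bgr, min_area=500):
    """Segment skin-colored blobs and return their centroids; the largest
    blobs would be the face and hand candidates used to form the
    pointing line."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # rough skin band
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids  # candidate face/fingertip centres in pixel coordinates
```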

Real-World Pointing Region Estimation Using 3D Geometry Information (3차원 기하학 정보를 이용한 실세계 지시 영역 추정)

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Proceedings of the IEEK Conference / 2007.07a / pp.353-354 / 2007
  • This paper proposes a method that estimates the pointing region in the real world. We use Z. Zhang's easy camera calibration technique: first, we calculate the projection matrix of each camera with this technique; next, we estimate the locations of the shoulder and the fingertip; then we compute the pointing region in the 3D real world using the projection matrices of both cameras. Experimental results show that the error between the estimated point and the center point of the plane is less than 5 cm. (A triangulation sketch follows this entry.)

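A minimal sketch of lifting the detected shoulder and fingertip to 3D, assuming each camera's 3x4 projection matrix is available from Zhang-style calibration; OpenCV's cv2.triangulatePoints stands in for the paper's own computation.

```python
import cv2
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point (shoulder or fingertip) from its pixel position
    in two calibrated views via DLT triangulation.

    P1, P2: 3x4 projection matrices (P = K [R | t] from calibration)
    pt1, pt2: (x, y) pixel coordinates of the same point in each view
    """
    xh = cv2.triangulatePoints(
        P1, P2,
        np.asarray(pt1, dtype=np.float64).reshape(2, 1),
        np.asarray(pt2, dtype=np.float64).reshape(2, 1))
    return (xh[:3] / xh[3]).ravel()   # homogeneous -> Euclidean

# With shoulder and fingertip both triangulated, the pointing region is
# found by extending the line through them into the scene.
```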

Segmentation of Pointed Objects for Service Robots (서비스 로봇을 위한 지시 물체 분할 방법)

  • Kim, Hyung-O;Kim, Soo-Hwan;Kim, Dong-Hwan;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.4 no.2 / pp.139-146 / 2009
  • This paper describes how a robot segments an unknown object indicated by a person's pointing gesture while the person interacts with the robot. Using a stereo vision sensor, the proposed method consists of three stages: detecting the operator's face, estimating the pointing direction, and extracting the pointed object. The operator's face is recognized using Haar-like features, and the 3D pointing direction is then estimated from the shoulder-to-hand line. Finally, an unknown object is segmented from the 3D point cloud in the estimated region of interest. On the basis of this method, we implemented an object registration system on our mobile robot and obtained reliable experimental results. (A point-cloud filtering sketch follows this entry.)

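A simple sketch of the final extraction stage, assuming the shoulder-to-hand ray and a stereo point cloud are already available: points inside a cylinder around the ray and in front of the hand are kept as candidates for the pointed object. The cylinder model and radius are illustrative choices, not the paper's segmentation.

```python
import numpy as np

def points_near_pointing_ray(cloud, origin, direction, radius=0.15):
    """Keep points of a stereo point cloud that lie within `radius` metres
    of the pointing ray and ahead of its origin; clustering these points
    is a simple stand-in for extracting the pointed object.

    cloud: (N, 3) array of 3D points; origin/direction define the ray.
    """
    d = direction / np.linalg.norm(direction)
    rel = cloud - origin
    t = rel @ d                       # signed distance along the ray
    ahead = t > 0                     # only points in front of the hand
    perp = rel - np.outer(t, d)       # offset perpendicular to the ray
    close = np.linalg.norm(perp, axis=1) < radius
    return cloud[ahead & close]
```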

Robust Estimation of Camera Motion Using A Local Phase Based Affine Model (국소적 위상기반 어파인 모델을 이용한 강인한 카메라 움직임 추정)

  • Jang, Suk-Yoon;Yoon, Chang-Yong;Park, Mig-Non
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.1 / pp.128-135 / 2009
  • Techniques that track the same region of physical space across a temporal sequence of images by matching contours of constant phase show robust and stable performance relative to tracking techniques that use or assume constant intensity. Using this property, we describe an algorithm for obtaining robust motion parameters of global camera motion. First, we obtain the optical flow based on the phase of spatially filtered sequential images, in the direction orthogonal to the orientation of each component of a Gabor filter bank. We then apply the least-squares method to the optical flow to determine the affine motion parameters. We demonstrate that the proposed method can be applied to a vision-based pointing device that estimates its motion from images containing a display device, which causes varying lighting conditions and noise. (A least-squares fitting sketch follows this entry.)
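A sketch of the least-squares stage only (the phase-based Gabor flow itself is not reproduced here): a 6-parameter affine motion model is fitted to sparse optical flow by ordinary least squares. Variable names are assumptions.

```python
import numpy as np

def fit_affine_motion(pts, flow):
    """Fit flow(x, y) ~= A @ (x, y) + b by least squares.

    pts:  (N, 2) image coordinates where flow was measured
    flow: (N, 2) optical-flow vectors (u, v) at those coordinates
    Returns the 2x2 matrix A and 2-vector b of the affine motion model.
    """
    x, y = pts[:, 0], pts[:, 1]
    M = np.stack([x, y, np.ones_like(x)], axis=1)           # design matrix
    sol_u, *_ = np.linalg.lstsq(M, flow[:, 0], rcond=None)  # u = a11 x + a12 y + b1
    sol_v, *_ = np.linalg.lstsq(M, flow[:, 1], rcond=None)  # v = a21 x + a22 y + b2
    A = np.array([[sol_u[0], sol_u[1]],
                  [sol_v[0], sol_v[1]]])
    b = np.array([sol_u[2], sol_v[2]])
    return A, b
```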

Efficient Object Selection Algorithm by Detection of Human Activity (행동 탐지 기반의 효율적인 객체 선택 알고리듬)

  • Park, Wang-Bae;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.3 / pp.61-69 / 2010
  • This paper presents an efficient object selection algorithm based on detecting and analyzing human activity. Generally, when people point at something, they turn their face toward the target; the pointing direction can therefore be approximated by the straight line connecting the face and the fingers. First, to detect moving objects in the input frames, we extract the objects of interest in real time using background subtraction. Whether the user is moving is then judged by Principal Component Analysis over a designated time period. When the user is motionless, we estimate the user's indication from the vector from the head to the hand. Through experiments using multiple views, we confirm that the proposed algorithm can estimate the user's movement and indication efficiently. (A PCA-based stillness sketch follows this entry.)
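A minimal sketch of the PCA-based movement judgment, assuming a tracked 2D body position per frame: over a fixed window, the variance along the top principal component is compared with a threshold, and the user counts as motionless when it stays small. The window length and threshold are illustrative, not the paper's values.

```python
import numpy as np
from collections import deque

class StillnessDetector:
    """Declare the user motionless when the dominant principal variance of
    recent positions stays below a threshold for a full window."""

    def __init__(self, window=30, var_thresh=4.0):
        self.history = deque(maxlen=window)   # recent (x, y) positions
        self.var_thresh = var_thresh          # pixels^2, illustrative

    def update(self, position_xy):
        self.history.append(np.asarray(position_xy, dtype=float))
        if len(self.history) < self.history.maxlen:
            return False                       # not enough evidence yet
        pts = np.stack(self.history)
        pts = pts - pts.mean(axis=0)
        cov = pts.T @ pts / (len(pts) - 1)
        top_var = np.linalg.eigvalsh(cov)[-1]  # largest principal variance
        return top_var < self.var_thresh       # True -> read pointing vector
```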

Technology Development for Non-Contact Interface of Multi-Region Classifier based on Context-Aware (상황 인식 기반 다중 영역 분류기 비접촉 인터페이스기술 개발)

  • Jin, Songguo;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.6 / pp.175-182 / 2020
  • Non-contact eye tracking is a nonintrusive human-computer interface that provides hands-free communication for people with severe disabilities, and it is expected to play an important role in non-contact systems prompted by the recent COVID-19 outbreak. This paper proposes a novel approach to an eye mouse, using an eye-tracking method based on a context-aware AdaBoost multi-region classifier and an ASSL algorithm. The conventional AdaBoost algorithm cannot provide sufficiently reliable face tracking for eye-cursor pointing estimation, because it cannot take advantage of the spatial context relations among facial features. We therefore propose an eye-region-context-based AdaBoost multiple classifier for efficient non-contact gaze tracking and mouse implementation. The proposed method detects, tracks, and aggregates various eye features to evaluate the gaze, and adjusts active and semi-supervised learning based on the on-screen cursor. The system has been successfully employed for eye localization and can also be used to detect and track eye features. It controls the computer cursor along the user's gaze, and the cursor is post-processed with Gaussian modeling and a Kalman filter to prevent shaking during real-time tracking. Target objects were generated at random, and the eye-tracking performance was analyzed in real time according to Fitts' law. It is expected that this will broaden the utilization of non-contact interfaces. (A Kalman smoothing sketch follows this entry.)
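A sketch of the cursor-steadying post-processing, assuming a constant-velocity Kalman filter over the 2D cursor position (OpenCV's cv2.KalmanFilter); the noise covariances are illustrative, not the paper's tuning.

```python
import cv2
import numpy as np

def make_cursor_filter(dt=1.0, process_var=1e-3, meas_var=1e-1):
    """Constant-velocity Kalman filter for steadying a gaze-driven cursor."""
    kf = cv2.KalmanFilter(4, 2)  # state: x, y, vx, vy; measurement: x, y
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1,  0],
                                    [0, 0, 0,  1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * process_var
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * meas_var
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    return kf

def smooth_step(kf, raw_xy):
    """One predict/correct cycle: raw gaze estimate in, steadier cursor out."""
    kf.predict()
    est = kf.correct(np.array(raw_xy, dtype=np.float32).reshape(2, 1))
    return float(est[0, 0]), float(est[1, 0])
```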