• Title/Summary/Keyword: Position of Tracking Points (추적점의 위치)

A Gaze Detection Technique Using a Monocular Camera System (단안 카메라 환경에서의 시선 위치 추적)

  • 박강령;김재희
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.10B
    • /
    • pp.1390-1398
    • /
    • 2001
  • Gaze detection is the technique of determining which point on a monitor a user is looking at. To determine the gaze position, this paper extracts the facial region and facial feature points from 2D camera images. When the user initially looks at three points on the monitor, the facial feature points exhibit movement, and from these changes the 3D positions of the facial feature points are estimated using camera calibration and parameter estimation methods. Afterwards, when the user looks at another point on the monitor, the changed 3D positions of the facial feature points are obtained using a 3D motion estimation method and an affine transform. From these, the changed facial feature points and the facial plane composed of them are obtained, and the gaze position on the monitor can be computed from the normal vector of that plane. Experimental results with a 19-inch monitor and a monitor-to-user distance of about 50-70 cm showed a gaze position error of about 2.08 inches. This is comparable to the gaze detection performance reported in Rikert's paper (5.08 cm error). However, Rikert's method has the drawback that the distance between the monitor and the user's face must always be fixed, and its gaze detection error increases when natural facial movements (rotation and translation) occur. Their method also imposes the constraint that there be no complex objects in the background behind the user's face, and it requires considerable processing time. In contrast, the gaze detection method proposed in this paper can be used even in an office environment with a complex background, and it requires less than about 3 seconds of processing time (on a 200 MHz Pentium PC).

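The geometric core of the method above (obtaining the facial plane from three feature points and intersecting its normal with the monitor) can be illustrated with a short sketch. This is a minimal reconstruction, not the authors' implementation; the coordinate convention (monitor at the plane z = 0, units of cm in camera coordinates) and the sample feature positions are assumptions.

```python
import numpy as np

def gaze_point_on_monitor(p1, p2, p3, monitor_z=0.0):
    """Sketch: intersect the facial-plane normal with the monitor plane z = monitor_z."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    # Normal of the plane spanned by the three facial feature points.
    normal = np.cross(p2 - p1, p3 - p1)
    normal = normal / np.linalg.norm(normal)
    # Cast a ray from the plane centroid along the normal: c + t * n.
    centroid = (p1 + p2 + p3) / 3.0
    if abs(normal[2]) < 1e-9:            # ray parallel to the monitor plane
        return None
    t = (monitor_z - centroid[2]) / normal[2]
    return centroid + t * normal         # gaze point (x, y, monitor_z)

# Hypothetical feature positions roughly 60 cm in front of the monitor.
print(gaze_point_on_monitor([-4.0, 2.0, 60.0], [4.0, 2.0, 61.0], [0.0, -5.0, 62.0]))
```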

Real-time Position Tracking of Virtual Object using Artificial Landmark (인위적인 랜드마크를 이용한 실시간 가상객체 위치변화 추적)

  • Chung, Hae-Ra;Choi, Yoo-Joo;Kim, Myoung-Hee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2001.04a
    • /
    • pp.135-138
    • /
    • 2001
  • In building an augmented reality system, real-time tracking of a virtual object's position is important both for registering the real world and virtual objects accurately and with correct depth, and for tracking changes in the virtual object's position as the real world moves. Tracking the position of a virtual object from real-time camera input therefore requires both accuracy and fast execution. In this paper, we define an artificial landmark shape so that the registration position of a virtual object, obtained from two cameras mounted on an HMD (Head Mounted Display) as the observer's viewpoint moves, can be recognized accurately and tracked quickly. To track the center of the landmark in real time from the input images, we reduce execution time by locating the center point using the landmark region information obtained from the first image captured at each fixed time interval.

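The time-saving idea in this abstract, reusing the landmark region found in an earlier frame so that only a small window is searched per frame, might look like the following sketch. The grayscale input, threshold, and window size are assumptions; the actual system processes stereo input from two HMD-mounted cameras.

```python
import numpy as np

def track_landmark_center(gray, prev_center, half=30, thresh=128):
    """Sketch: re-locate a bright artificial landmark near its previous center.

    gray        -- 2D uint8 grayscale frame
    prev_center -- (row, col) center found in an earlier frame
    half        -- half-size of the search window (assumed value)
    thresh      -- intensity threshold for landmark pixels (assumed value)
    """
    r, c = prev_center
    r0, c0 = max(r - half, 0), max(c - half, 0)
    roi = gray[r0:r + half, c0:c + half]
    ys, xs = np.nonzero(roi > thresh)       # candidate landmark pixels
    if len(ys) == 0:
        return prev_center                  # landmark lost: keep old estimate
    # Centroid of the thresholded region, mapped back to frame coordinates.
    return (r0 + int(ys.mean()), c0 + int(xs.mean()))
```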

Real Time Face Tracking Method based Random Regression Forest using Mean Shift (평균이동 기법을 이용한 랜덤포레스트 기반 실시간 얼굴 특징점 추적)

  • Zhang, Xingjie;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2017.06a
    • /
    • pp.89-90
    • /
    • 2017
  • In this paper, we propose a real-time facial feature tracking method based on a random forest, using the mean shift technique. First, the detected face region is adjusted to an appropriate size and position using the eye locations, which reduces the influence that the size and position of the face bounding box obtained during face detection have on the random-forest-based facial feature tracking algorithm. In addition, instead of taking the mean of the random forest's facial feature estimates, the mean shift technique is used to discard erroneous estimates and keep only valid ones, improving the facial feature detection accuracy. With the proposed methods, the performance of the existing random-forest-based facial feature detection technique is improved and facial features can be tracked in real time.

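The mean-shift filtering step described above, keeping the dominant mode of the forest's landmark votes rather than their raw mean, could be sketched as follows. The flat kernel, bandwidth, and iteration budget are assumed values, not the paper's settings.

```python
import numpy as np

def mean_shift_mode(votes, bandwidth=5.0, iters=20, tol=1e-3):
    """Sketch: find the densest mode of 2D landmark votes from a random forest.

    votes -- (N, 2) array of (x, y) position estimates, one per tree.
    A flat (top-hat) kernel of radius `bandwidth` is used.
    """
    votes = np.asarray(votes, dtype=float)
    mode = votes.mean(axis=0)                      # start from the plain mean
    for _ in range(iters):
        d = np.linalg.norm(votes - mode, axis=1)
        inliers = votes[d < bandwidth]             # votes inside the window
        if len(inliers) == 0:
            break
        new_mode = inliers.mean(axis=0)            # shift to the local mean
        if np.linalg.norm(new_mode - mode) < tol:  # converged
            return new_mode
        mode = new_mode
    return mode
```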

Non-Prior Training Active Feature Model-Based Object Tracking for Real-Time Surveillance Systems (실시간 감시 시스템을 위한 사전 무학습 능동 특징점 모델 기반 객체 추적)

  • 김상진;신정호;이성원;백준기
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.5
    • /
    • pp.23-34
    • /
    • 2004
  • In this paper we propose a feature point tracking algorithm using optical flow under a non-prior training active feature model (NPT-AFM). The proposed algorithm mainly focuses on analyzing non-rigid objects [1], and provides real-time, robust tracking by NPT-AFM. The NPT-AFM algorithm can be divided into two steps: (i) localization of an object of interest and (ii) prediction and correction of the object position by utilizing inter-frame information. The localization step is realized by using a modified version of Shi-Tomasi's feature tracking algorithm [2] after motion-based segmentation. In the prediction-correction step, the given feature points are continuously tracked using an optical flow method [3]; if a feature point cannot be properly tracked, temporal and spatial prediction schemes are employed for that point until it becomes uncovered again. Feature points inside an object are estimated instead of its shape boundary, and are updated as elements of the training set for the AFM. Experimental results show that the proposed NPT-AFM-based algorithm can robustly track non-rigid objects in real time.
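
The two-step structure (localization, then prediction and correction with optical flow) can be approximated with standard OpenCV building blocks. The sketch below uses generic Shi-Tomasi corners and pyramidal Lucas-Kanade flow, not the NPT-AFM model itself; the video filename and all parameter values are assumptions.

```python
import cv2

cap = cv2.VideoCapture("surveillance.avi")          # assumed input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# (i) localization: Shi-Tomasi corners stand in for the AFM feature points.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                              qualityLevel=0.01, minDistance=7)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # (ii) prediction/correction: pyramidal Lucas-Kanade optical flow.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.flatten() == 1].reshape(-1, 1, 2)  # keep tracked points
    if len(pts) < 20:                     # too many points lost: re-localize
        pts = cv2.goodFeaturesToTrack(gray, 100, 0.01, 7)
    prev_gray = gray
cap.release()
```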

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.4C
    • /
    • pp.494-504
    • /
    • 2004
  • Research on gaze detection has advanced considerably, with many applications. Most previous approaches rely only on image processing algorithms, so they take much processing time and are subject to many constraints. In our work, we implement gaze detection with a computer vision system using an IR-LED based single camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we can compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. Finally, the gaze position determined by facial movement is computed from the normal vector of the plane defined by the computed 3D positions of the features. In addition, we use a trained neural network to detect the gaze position from eye movement. As experimental results, we obtain the facial and eye gaze position on a monitor, and the accuracy between the computed gaze positions and the real ones is about 4.2 cm of RMS error.
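
For reference, the RMS error quoted above is the root mean square of the distances between computed and true gaze points; a minimal sketch (with made-up data) follows.

```python
import numpy as np

def rms_error(estimated, actual):
    """RMS distance between estimated and true gaze points (same units)."""
    estimated, actual = np.asarray(estimated), np.asarray(actual)
    d2 = np.sum((estimated - actual) ** 2, axis=1)   # squared distances
    return float(np.sqrt(d2.mean()))

# Hypothetical gaze estimates vs. marker positions, in cm on the screen.
est = [(10.2, 5.1), (20.5, 9.8), (30.1, 15.3)]
true = [(10.0, 5.0), (21.0, 10.0), (30.0, 15.0)]
print(rms_error(est, true))
```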

Feature point extraction using scale-space filtering and Tracking algorithm based on comparing texturedness similarity (스케일-스페이스 필터링을 통한 특징점 추출 및 질감도 비교를 적용한 추적 알고리즘)

  • Park, Yong-Hee;Kwon, Oh-Seok
    • Journal of Internet Computing and Services
    • /
    • v.6 no.5
    • /
    • pp.85-95
    • /
    • 2005
  • This study proposes a method of feature point extraction using scale-space filtering and a feature point tracking algorithm based on a texturedness similarity comparison. With well-defined operators, one can select a scale parameter for feature point extraction; this affects the selection and localization of the feature points as well as the performance of the tracking algorithm. This study suggests a feature extraction method using scale-space filtering. With a change in the camera's point of view or movement of an object across sequential images, the window of a feature point undergoes an affine transform. Traditionally, it is difficult to measure the similarity between corresponding points, and tracking errors often occur. This study therefore also suggests a tracking algorithm that extends the Shi-Tomasi-Kanade tracking algorithm with texturedness similarity.

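A minimal sketch of scale-space feature extraction in the spirit of this paper: responses of a scale-normalized Laplacian of Gaussian are compared across scales, and local maxima over both space and scale are kept. The scale set and threshold are assumptions, and the paper's texturedness-based tracking step is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def scale_space_features(gray, sigmas=(1.0, 2.0, 4.0, 8.0), thresh=0.05):
    """Sketch: pick (x, y, scale) points that maximize the normalized LoG."""
    gray = gray.astype(np.float64) / 255.0
    # Scale-normalized LoG response at each assumed scale.
    stack = np.stack([(s ** 2) * np.abs(gaussian_laplace(gray, s))
                      for s in sigmas])
    # A point is a feature if it is a local max over space *and* scale.
    local_max = maximum_filter(stack, size=(3, 3, 3)) == stack
    si, yi, xi = np.nonzero(local_max & (stack > thresh))
    return [(x, y, sigmas[s]) for s, y, x in zip(si, yi, xi)]
```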

Tracking Algorithm For Golf Swing Using the Information of Pixels and Movements (화소 및 이동 정보를 이용한 골프 스윙 궤도 추적 알고리즘)

  • Lee, Hong-Ro;Hwang, Chi-Jung
    • The KIPS Transactions:PartB
    • /
    • v.12B no.5 s.101
    • /
    • pp.561-566
    • /
    • 2005
  • This paper presents a visual tracking algorithm for golf swing motion analysis that uses the pixel information of video frames and the movement of the golf club, in order to overcome the fixed-center-point assumption of model-based tracking methods. Model-based tracking uses a polynomial function to display the upswing and downswing trajectories, and therefore assumes that the center of gravity does not move, which does not hold for amateur players. In the proposed method, we first detect motion using the pixel information in the frames of the golf swing. We then extract the club head and the hands, using the fact that the club shaft appears as a pair of parallel lines together with the positions to which the club moves during the upswing and downswing. In addition, we extract the user's center point by tracking the center of the line connecting the center of the head and both feet. We experimented with data in which the movement of the center point is large. Using the proposed tracking algorithm, the real trajectories of the club head, the hands, and the center point can be tracked.
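
The first step, detecting motion from pixel information, can be sketched with simple frame differencing, and the shaft's parallel-line property suggests running a line detector on the motion mask. Both thresholds below are assumptions, and the pairing of parallel segments into a shaft is omitted.

```python
import cv2

def moving_pixels(prev_bgr, curr_bgr, thresh=25):
    """Sketch: binary mask of pixels that changed between consecutive frames."""
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev, curr)                 # per-pixel change
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

def shaft_candidates(mask):
    """Sketch: line segments in the motion mask; the club shaft should appear
    as a pair of nearly parallel segments (pairing logic omitted)."""
    return cv2.HoughLinesP(mask, 1, 3.14159 / 180, threshold=50,
                           minLineLength=40, maxLineGap=5)
```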

Gaze Detection Based on Facial Features and Linear Interpolation on Mobile Devices (모바일 기기에서의 얼굴 특징점 및 선형 보간법 기반 시선 추적)

  • Ko, You-Jin;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.8
    • /
    • pp.1089-1098
    • /
    • 2009
  • Recently, in human-computer interfaces, much research has been performed on making more convenient input devices based on gaze detection technology. Previous research was performed in computer environments with large monitors. With the recent increase in the use of mobile devices, the need for gaze-detection-based interfaces in mobile environments has also grown. In this paper, we study a gaze detection method using a UMPC (Ultra-Mobile PC) and its embedded camera, based on face and facial feature detection by an AAM (Active Appearance Model). This paper has the following three original contributions. First, unlike previous research, we propose a method for tracking the user's gaze position on a mobile device with a small screen. Second, we use the AAM to detect facial feature points. Third, gaze detection accuracy is not degraded by changes in Z distance, owing to the normalization of the input features using features obtained in an initial user calibration stage. Experimental results showed that the gaze detection error was 1.77 degrees, and that it was reduced by mouse dragging based on additional facial movement.

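The calibration-based linear interpolation can be illustrated with a small sketch: features recorded while the user looks at the four screen corners normalize the current feature, which is then mapped linearly to screen pixels. The 2D feature, the corner ordering, and the sample numbers are all assumptions for the example.

```python
import numpy as np

def gaze_by_interpolation(feat, corner_feats, screen_w, screen_h):
    """Sketch: map a 2D gaze feature to screen pixels via linear interpolation.

    feat         -- current (fx, fy) gaze feature
    corner_feats -- features recorded while the user looked at the four
                    corners, ordered [top-left, top-right, bottom-left,
                    bottom-right] (assumed calibration data)
    """
    tl, tr, bl, _ = map(np.asarray, corner_feats)
    f = np.asarray(feat, dtype=float)
    # Normalize the feature into [0, 1] x [0, 1] using the corner features.
    u = (f[0] - tl[0]) / (tr[0] - tl[0])   # horizontal fraction
    v = (f[1] - tl[1]) / (bl[1] - tl[1])   # vertical fraction
    u, v = np.clip(u, 0.0, 1.0), np.clip(v, 0.0, 1.0)
    return (u * screen_w, v * screen_h)

# Hypothetical calibration features (normalized camera units) and a query.
corners = [(0.30, 0.25), (0.70, 0.25), (0.30, 0.60), (0.70, 0.60)]
print(gaze_by_interpolation((0.50, 0.42), corners, 1024, 600))
```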

Estimation of CyberKnife Respiratory Tracking System Using Moving Phantom (동적 팬톰을 이용한 사이버나이프 호흡동기 추적장치의 위치 정확성 평가)

  • Seo, Jae-Hyuk;Kang, Young-Nam;Jang, Ji-Sun;Shin, Hun-Joo;Jung, Ji-Young;Choi, Byong-Ock;Choi, Ihl-Bohng;Lee, Dong-Joon;Kwon, Soo-Il;Lim, Jong-Soo
    • Progress in Medical Physics
    • /
    • v.20 no.4
    • /
    • pp.324-330
    • /
    • 2009
  • In this study, we evaluated the accuracy and usefulness of the CyberKnife Respiratory Tracking System (Synchrony™, Accuray, USA) for a moving target during stereotactic radiosurgery. For this study, we used a motion phantom that can move the target, and we used the Respiratory Tracking System of the CyberKnife, called Synchrony, to track the moving target. For treatment planning of the moving target, we obtained images using 4D-CT. To measure the dose distribution and point dose at the moving target, an ion chamber (0.62 cc) and Gafchromic EBT film were used. We compared the dose distribution (80% isodose line of the prescription dose) of a static target with that of a moving target in order to evaluate the accuracy of the Respiratory Tracking System, and we also measured the point dose at the target. The mean difference of synchronization for the TLS (target localization system) and Synchrony was 11.5±3.09 mm without synchronization and 0.14±0.08 mm with synchronization. The mean difference between the static target plan and the moving target plan using 4D-CT images was 0.18±0.06 mm, and the accuracy of the Respiratory Tracking System was less than 1 mm. The estimate of the usefulness of the Respiratory Tracking System was 17.39±0.14 mm when it was inactive and 1.37±0.11 mm when it was active. The mean difference of absolute dose was 0.68±0.38% for the static target and 1.31±0.81% for the moving target. In conclusion, when treating a moving target, we consider it important to use 4D-CT together with the Respiratory Tracking System. In this study, we confirmed the accuracy and usefulness of the Respiratory Tracking System of the CyberKnife.


Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • 박강령
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.2
    • /
    • pp.79-88
    • /
    • 2004
  • Gaze detection is the use of computer vision to locate the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application: they are applicable to man-machine interfaces that help the handicapped use computers, and to view control in three-dimensional simulation programs. In our work, we implement gaze detection with a computer vision system using an IR-LED based single camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we can compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. Finally, the gaze position determined by facial movement is computed from the normal vector of the plane defined by the computed 3D positions of the features. In addition, we use a trained neural network to detect the gaze position from eye movement. As experimental results, we obtain the facial and eye gaze position on a monitor, and the accuracy between the computed gaze positions and the real ones is about 4.8 cm of RMS error.