• Title/Summary/Keyword: head tracking

Search Results: 244

An Efficient Camera Calibration Method for Head Pose Tracking (머리의 자세를 추적하기 위한 효율적인 카메라 보정 방법에 관한 연구)

  • Park, Gyeong-Su;Im, Chang-Ju;Lee, Gyeong-Tae
    • Journal of the Ergonomics Society of Korea
    • /
    • v.19 no.1
    • /
    • pp.77-90
    • /
    • 2000
  • The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, and a vision-based head tracking system was proposed to allow for the user's head movements in such an interface. We proposed an efficient camera calibration method to accurately track the 3D position and orientation of the user's head, and evaluated its performance. The experimental error analysis showed that the proposed method provides a more accurate and stable camera pose (i.e. position and orientation) than the conventional direct linear transformation method that has been used for camera calibration. The results of this study can be applied to the tracking of head movements for eye-controlled human/computer interfaces and virtual reality technology.
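
The error analysis described above comes down to comparing reprojected reference points against their observed image positions. A minimal pure-Python sketch of such an evaluation, with an illustrative pinhole model (the focal length, principal point, pose, and point coordinates are all made up for illustration, not taken from the paper):

```python
import math

def project(point, pose, focal=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3D point given a camera pose (R as 3x3 list, t as 3-list)."""
    R, t = pose
    # Camera coordinates: Xc = R @ X + t
    xc = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    return (focal * xc[0] / xc[2] + cx, focal * xc[1] / xc[2] + cy)

def mean_reprojection_error(points, observations, pose):
    """Average pixel distance between observed and reprojected reference points."""
    errs = []
    for p, obs in zip(points, observations):
        u, v = project(p, pose)
        errs.append(math.hypot(u - obs[0], v - obs[1]))
    return sum(errs) / len(errs)

# Synthetic check: observations generated by the true pose give (near-)zero error.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
true_pose = (identity, [0.0, 0.0, 5.0])
refs = [[0.1, 0.2, 0.0], [-0.3, 0.1, 0.2], [0.2, -0.2, 0.1], [0.0, 0.3, -0.1]]
obs = [project(p, true_pose) for p in refs]
err = mean_reprojection_error(refs, obs, true_pose)
```

A calibration method that estimates a pose closer to the true one will score a lower mean reprojection error on the same reference points.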

Accuracy Verification of Optical Tracking System for the Maxillary Displacement Estimation by Using of Triangulation (삼각측량기법을 이용한 광학추적장치의 상악골 변위 계측에 대한 정확성 검증)

  • Kyung, Kyu-Young;Kim, Soung-Min;Lee, Jong-Ho;Myoung, Hoon;Kim, Myung-Jin
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.34 no.1
    • /
    • pp.41-52
    • /
    • 2012
  • Purpose: Triangulation is the process of determining the location of a point by measuring the angles to it from known points at either end of a fixed baseline. The point can then be fixed as the third vertex of a triangle with one known side and two known angles. The aim of this study was to find a clinically adaptable method for applying an optical tracking navigation system to orthognathic surgery and to estimate its accuracy in measuring bone displacement by use of triangulation methods. Methods: In orthognathic surgery the head position is not fixed as it is in neurosurgery, so a head tracker is needed to establish the reference point on the head surface by using an optical tracking system. However, the operative field is obstructed by its bulkiness, which makes clinical use difficult. To solve this problem, we designed a method using an Aquaplast splinting material and a mini-screw to apply a head tracker to the patient's forehead. We then estimated the accuracy of measuring displacements of a ball marker with an optical tracking system using a conventional head tracker (Group A) and the newly designed head tracker (Group B). Measured values of the ball markers' displacements from each optical tracking system were compared with values obtained from fusion CT images to estimate accuracy. Results: The accuracy of the optical tracking system with a conventional head tracker (Group A) is not suitable for clinical use; measured and predicted errors were larger than 10 mm. The optical tracking system with the newly designed head tracker (Group B) showed errors of 1.59 mm, 6.34 mm, and 9.52 mm in three clinical cases. Conclusion: Most errors arose mainly from a lack of reproducibility of the head tracker position. The optical tracking system with the newly designed head tracker can be a useful method in further orthognathic navigation surgery, even though the average error is higher than 2.0 mm.
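
The triangulation described in the Purpose section can be sketched in a few lines: with a baseline of known length and the two angles measured from its ends, the law of sines fixes the third vertex. A small illustrative example (the numbers are hypothetical, not from the paper):

```python
import math

def triangulate(baseline, alpha, beta):
    """Locate a point from a baseline of known length and the two angles
    (in radians) measured to the point from each end of the baseline.
    End A is at the origin, end B at (baseline, 0)."""
    gamma = math.pi - alpha - beta                      # angle at the unknown point
    d_a = baseline * math.sin(beta) / math.sin(gamma)   # law of sines: side from A
    return (d_a * math.cos(alpha), d_a * math.sin(alpha))

# A point seen at 45 degrees from both ends of a 2 m baseline lies 1 m above
# the baseline's midpoint.
x, y = triangulate(2.0, math.pi / 4, math.pi / 4)
```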

Head tracking system using image processing (영상처리를 이용한 머리의 움직임 추적 시스템)

  • 박경수;임창주;반영환;장필식
    • Journal of the Ergonomics Society of Korea
    • /
    • v.16 no.3
    • /
    • pp.1-10
    • /
    • 1997
  • This paper is concerned with the development and evaluation of a camera calibration method for a real-time head tracking system. Tracking of head movements is important in the design of an eye-controlled human/computer interface and in virtual environments. We proposed a video-based head tracking system. A camera was mounted on the subject's head and captured the front view containing eight 3-dimensional reference points (passive retro-reflecting markers) fixed at known positions on the computer monitor. The reference points were captured by an image processing board and used to calculate the 3-dimensional position and orientation of the camera. A suitable camera calibration method for providing accurate extrinsic camera parameters was proposed. The method has three steps. In the first step, the image center was calibrated using the method of varying focal length. In the second step, the focal length and the scale factor were calibrated from the Direct Linear Transformation (DLT) matrix obtained from the known position and orientation of the camera. In the third step, the position and orientation of the camera were calculated from the DLT matrix, using the calibrated intrinsic camera parameters. Experimental results showed that the average error of the 3-dimensional camera positions is about 0.53 cm, the angular errors of the camera orientations are less than 0.55°, and the data acquisition rate is about 10 Hz. The results of this study can be applied to the tracking of head movements for eye-controlled human/computer interfaces and virtual environments.
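
The third calibration step, recovering the camera pose from the DLT matrix once the intrinsic parameters are known, can be sketched as below. This is a generic textbook decomposition (P is proportional to K[R|t], so K⁻¹P yields the scaled pose), not necessarily the authors' exact procedure; the synthetic matrix and intrinsics are illustrative:

```python
import math

def pose_from_dlt(P, f, cx, cy):
    """Recover camera rotation R and translation t from a 3x4 DLT matrix P,
    given known intrinsics (focal length f, principal point cx, cy).
    Since P ~ K [R | t], we compute M = K^-1 P = s [R | t] and divide out
    the scale s (the norm of the last rotation row)."""
    # Closed-form inverse of K = [[f, 0, cx], [0, f, cy], [0, 0, 1]]
    Kinv = [[1 / f, 0.0, -cx / f], [0.0, 1 / f, -cy / f], [0.0, 0.0, 1.0]]
    M = [[sum(Kinv[i][k] * P[k][j] for k in range(3)) for j in range(4)]
         for i in range(3)]
    s = math.sqrt(sum(M[2][j] ** 2 for j in range(3)))  # scale factor
    R = [[M[i][j] / s for j in range(3)] for i in range(3)]
    t = [M[i][3] / s for i in range(3)]
    return R, t

# Synthetic DLT matrix built from a known pose: identity rotation, t = (0, 0, 5).
f, cx, cy = 800.0, 320.0, 240.0
P = [[f, 0.0, cx, 5 * cx], [0.0, f, cy, 5 * cy], [0.0, 0.0, 1.0, 5.0]]
R, t = pose_from_dlt(P, f, cx, cy)
```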

Musculoskeletal Kinematics During Voluntary Head Tracking Movements in Primate

  • Park, Hyeonki;Emily Keshner;Barry W. Peterson
    • Journal of Mechanical Science and Technology
    • /
    • v.17 no.1
    • /
    • pp.32-39
    • /
    • 2003
  • In this study we examined connections between vertebral motion and patterns of muscle activation during voluntary head tracking movements. A Rhesus monkey (Macaca mulatta) was trained to produce sinusoidal tracking movements of the head in the sagittal plane while seated. Radio-opaque markers were placed in the cervical vertebrae, and intramuscular patch electrodes were implanted to record from eight neck muscles. Videofluoroscopic images of cervical vertebral motion and EMG (electromyographic) responses were simultaneously recorded. Experimental results demonstrated that the head and vertebrae moved synchronously and that motion occurred primarily at skull-C1, C6-C7, and C7-T1. Our findings illustrate that although the biomechanical constraints of each species may limit the number of solutions available, it is the task requirements that appear to govern CNS (central nervous system) selection of movement behaviors.

Probabilistic Head Tracking Based on Cascaded Condensation Filtering (순차적 파티클 필터를 이용한 다중증거기반 얼굴추적)

  • Kim, Hyun-Woo;Kee, Seok-Cheol
    • The Journal of Korea Robotics Society
    • /
    • v.5 no.3
    • /
    • pp.262-269
    • /
    • 2010
  • This paper presents a probabilistic head tracking method, mainly applicable to face recognition and human-robot interaction, which can robustly track a human head against variations such as pose/scale changes, illumination changes, and background clutter. Compared to conventional particle filter based approaches, the proposed method can effectively track a human head by regularizing the sample space in the prediction stage and sequentially weighting multiple visual cues in the observation stage. Experimental results show the robustness of the proposed method, and it is worth noting that the proposed probabilistic framework can easily be applied to other object tracking problems.
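
A condensation (particle) filter of the kind this abstract builds on can be sketched in one dimension: diffuse the samples with a motion model, weight each sample by the product of several cue likelihoods, and resample. This sketch is generic, not the authors' regularized cascaded variant; the cue function and all parameters are illustrative:

```python
import math
import random

def condensation_step(particles, observe_cues, motion_std=1.0):
    """One predict-weight-resample cycle of a condensation (particle) filter.
    `observe_cues` is a list of likelihood functions; multiple visual cues
    are combined by multiplying their likelihoods per particle."""
    # Prediction: diffuse each particle with a simple random-walk motion model.
    predicted = [p + random.gauss(0.0, motion_std) for p in particles]
    # Observation: combine the cues into one normalized weight per particle.
    weights = []
    for p in predicted:
        w = 1.0
        for cue in observe_cues:
            w *= cue(p)
        weights.append(w)
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resampling: draw a new particle set proportional to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))

random.seed(0)
target = 10.0                                   # true (hidden) head position
cue = lambda p: math.exp(-(p - target) ** 2)    # stand-in for e.g. a color cue
particles = [random.uniform(0.0, 20.0) for _ in range(500)]
for _ in range(10):
    particles = condensation_step(particles, [cue, cue])
estimate = sum(particles) / len(particles)
```

After a few iterations the particle cloud concentrates around the position that all cues agree on.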

Compensation for Fast Head Movements on Non-intrusive Eye Gaze Tracking System Using Kalman Filter (Kalman 필터를 이용한 비접촉식 응시점 추정 시스템에서의 빠른 머리 이동의 보정)

  • Kim, Soo-Chan;Yoo, Jae-Ha;Nam, Ki-Chang;Kim, Deok-Won
    • Proceedings of the KIEE Conference
    • /
    • 2005.05a
    • /
    • pp.33-35
    • /
    • 2005
  • We propose an eye gaze tracking system that works under natural head movements. The system consists of one CCD camera and two front-surface mirrors. The mirrors rotate to follow head movements in order to keep the eye within the view of the camera. However, the mirror controller cannot keep up with fast head movements, because the frame rate is only 30 Hz. To overcome this problem, we applied a Kalman predictor to estimate the next eye position from the current eye image. As a result, our system allows the subject's head to move 50 cm horizontally and 40 cm vertically, at speeds of about 10 cm/sec and 6 cm/sec, respectively. The spatial gaze resolution is about 4.5 degrees in each direction, and the gaze estimation accuracy is 92% under natural head movements.
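
A Kalman predictor of the kind described, here a generic one-dimensional constant-velocity filter that predicts the eye position one frame ahead of the 30 Hz camera, can be sketched as follows (the noise parameters and test trajectory are illustrative, not from the paper):

```python
class KalmanPredictor1D:
    """Constant-velocity Kalman filter that predicts the eye position one
    frame ahead, so the mirrors can be steered before the next image arrives."""
    def __init__(self, q=1e-3, r=1e-2):
        self.x, self.v = 0.0, 0.0          # state: position (cm), velocity (cm/s)
        self.p = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q, self.r = q, r              # process / measurement noise

    def update(self, z, dt=1.0 / 30.0):    # 30 Hz frame rate
        # Predict with the constant-velocity model: P' = F P F^T + Q.
        x_pred = self.x + self.v * dt
        p = self.p
        p00 = p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + self.q
        p01 = p[0][1] + dt * p[1][1]
        p10 = p[1][0] + dt * p[1][1]
        p11 = p[1][1] + self.q
        # Correct with the measured eye position z (H = [1, 0]).
        k0 = p00 / (p00 + self.r)
        k1 = p10 / (p00 + self.r)
        self.x = x_pred + k0 * (z - x_pred)
        self.v = self.v + k1 * (z - x_pred)
        self.p = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x + self.v * dt        # predicted position at the next frame

kf = KalmanPredictor1D()
# Eye moving at a steady 10 cm/s: the one-step-ahead prediction converges to
# slightly ahead of the latest measurement.
preds = [kf.update(10.0 * i / 30.0) for i in range(60)]
```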

Three-dimensional Head Tracking Using Adaptive Local Binary Pattern in Depth Images

  • Kim, Joongrock;Yoon, Changyong
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.2
    • /
    • pp.131-139
    • /
    • 2016
  • Recognition of human motions has become a main area of computer vision due to its potential for human-computer interfaces (HCI) and surveillance. Among existing recognition techniques for human motions, head detection and tracking is the basis for all human motion recognition. Various approaches have been tried to detect and trace the position of the human head in two-dimensional (2D) images precisely. However, it remains a challenging problem because human appearance changes greatly with pose, and images are affected by illumination changes. To enhance the performance of head detection and tracking, real-time three-dimensional (3D) data acquisition sensors such as time-of-flight and Kinect depth sensors have recently been used. In this paper, we propose an effective feature extraction method, called adaptive local binary pattern (ALBP), for depth image based applications. In contrast to the well-known conventional local binary pattern (LBP), the proposed ALBP can not only extract shape information without texture in depth images, but is also invariant to distance changes in range images. We apply the proposed ALBP to head detection and tracking in depth images to show its effectiveness and usefulness.
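
The distance invariance claimed for ALBP can be illustrated with a toy variant of LBP whose comparison threshold scales with the center depth. The scaling rule below is an assumption made for illustration, not the paper's exact definition:

```python
def adaptive_lbp(patch, scale=0.05):
    """LBP code for a 3x3 depth patch (list of 9 depth values in row-major
    order; index 4 is the center). Instead of the fixed comparison used by
    conventional LBP, the threshold is proportional to the center depth, so
    the code stays stable when the head moves nearer or farther from the
    sensor (depth differences on a surface scale roughly with distance)."""
    center = patch[4]
    threshold = scale * center          # depth-proportional tolerance (assumption)
    neighbors = [patch[i] for i in (0, 1, 2, 5, 8, 7, 6, 3)]  # clockwise ring
    code = 0
    for bit, n in enumerate(neighbors):
        if n - center > threshold:
            code |= 1 << bit
    return code

# The same local shape seen at 1 m and at 2 m produces the same code,
# because the tolerance scales with depth.
near = [1.00, 1.00, 1.10, 1.00, 1.00, 1.10, 1.00, 1.00, 1.10]
far = [2.0 * d for d in near]
```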

Development of the Flexible Observation System for a Virtual Reality Excavator Using the Head Tracking System (헤드 트래킹 시스템을 이용한 가상 굴삭기의 편의 관측 시스템 개발)

  • Le, Q.H.;Jeong, Y.M.;Nguyen, C.T.;Yang, S.Y.
    • Journal of Drive and Control
    • /
    • v.12 no.2
    • /
    • pp.27-33
    • /
    • 2015
  • Excavators are versatile earthmoving machines used in civil engineering, hydraulic engineering, grading and landscaping, pipeline construction, and mining. Effective operator training is essential to ensure safe and efficient operation of the machine. Virtual reality excavator simulation using conventional large monitors is limited by its inability to provide a realistic, real-world training experience. We proposed a flexible observation method with a head tracking system to improve the operator's sense of immersion when operating a virtual reality excavator. First, an excavation simulator is designed by combining an excavator SimMechanics model with the virtual world. Second, a head mounted display (HMD) device is presented to replace the cumbersome large screens. Moreover, an Inertial Measurement Unit (IMU) sensor is mounted on the HMD to track the movement of the operator's head; these signals in turn change the viewpoint of the virtual reality excavator. Simulation results were used to analyze the performance of the proposed system.
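
Steering the virtual viewpoint from the IMU head angles, as in the final step above, amounts to converting yaw and pitch into a view direction. A minimal sketch (the axis conventions are assumptions, not taken from the paper):

```python
import math

def view_vector(yaw, pitch):
    """Forward view direction of the virtual camera from IMU yaw/pitch
    (radians), so head rotations steer the simulator viewpoint.
    Convention assumed here: +z is straight ahead, +x is right, +y is up."""
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

# Looking straight ahead, then turning the head 90 degrees to the right.
ahead = view_vector(0.0, 0.0)
right = view_vector(math.pi / 2, 0.0)
```

Each frame, the simulator would read the IMU, compute this vector, and point the virtual camera along it.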

Real Time Eye and Gaze Tracking (실시간 Eye와 Gaze 트래킹)

  • Min Jin-Kyoung;Cho Hyeon-Seob
    • Proceedings of the KAIS Fall Conference
    • /
    • 2004.11a
    • /
    • pp.234-239
    • /
    • 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real time gaze tracking for interactive graphic display. Unlike most existing gaze tracking techniques, which often assume a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments that involve gaze-contingent interactive graphic display.
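
A GRNN is essentially a Gaussian-kernel-weighted average of the training targets (a Nadaraya-Watson estimator), which is why the pupil-to-screen mapping need not be an analytic function. A toy sketch with a hypothetical one-dimensional pupil parameter and made-up screen coordinates:

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """Generalized regression neural network: predict a target vector as the
    Gaussian-kernel-weighted average of the training targets, mapping a
    pupil-parameter vector x to screen coordinates."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
               for xi in train_x]
    total = sum(weights)
    dims = len(train_y[0])
    return tuple(sum(w * y[d] for w, y in zip(weights, train_y)) / total
                 for d in range(dims))

# Toy calibration set: 1D pupil offset -> 2D screen point (hypothetical values).
train_x = [(-1.0,), (0.0,), (1.0,)]
train_y = [(0.0, 100.0), (320.0, 240.0), (640.0, 380.0)]
sx, sy = grnn_predict((0.0,), train_x, train_y)
```

Because the estimate is a smooth interpolation over calibration samples, adding head-pose terms to the input vector folds head movement into the same mapping.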

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.2 no.2
    • /
    • pp.120-133
    • /
    • 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation. In this paper, we deal with these two problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation can robustly estimate a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. It is possible to recover the head pose regardless of light variations and self-occlusion by updating the template dynamically. In the facial expression synthesis phase, the variations of the major facial feature points of the face images are tracked by using optical flow, and the variations are retargeted to the 3D face model. At the same time, we exploit RBFs (Radial Basis Functions) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
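
The RBF-based local deformation described above can be sketched as a Gaussian-weighted blend of feature-point displacements applied to nearby model vertices. The kernel choice, radius, and coordinates below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def rbf_deform(points, centers, displacements, radius=0.3):
    """Deform model vertices by Gaussian radial basis functions centered on
    tracked feature points: each vertex moves by a distance-weighted blend of
    the feature-point displacements, so the region near a moved feature
    follows it while distant vertices stay put."""
    deformed = []
    for p in points:
        dx = dy = 0.0
        for c, d in zip(centers, displacements):
            w = math.exp(-((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2)
                         / (2 * radius ** 2))
            dx += w * d[0]
            dy += w * d[1]
        deformed.append((p[0] + dx, p[1] + dy))
    return deformed

# One tracked feature (e.g. a mouth corner) moves up by 0.1: a vertex at the
# feature follows fully, while a distant vertex barely moves.
out = rbf_deform([(0.0, 0.0), (2.0, 0.0)], [(0.0, 0.0)], [(0.0, 0.1)])
```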