• Title/Summary/Keyword: feature-based tracking

Robust AAM-based Face Tracking with Occlusion Using SIFT Features (SIFT 특징을 이용하여 중첩상황에 강인한 AAM 기반 얼굴 추적)

  • Eom, Sung-Eun;Jang, Jun-Su
    • The KIPS Transactions:PartB / v.17B no.5 / pp.355-362 / 2010
  • Face tracking estimates the motion of a non-rigid face together with a rigid head in 3D, and plays an important role in higher-level tasks such as face, facial expression, and emotion recognition. In this paper, we propose an AAM-based face tracking algorithm. AAM has been widely used to segment and track deformable objects, but many difficulties remain. In particular, it tends to diverge or converge into local minima when the target object is self-occluded, or partially or completely occluded. To address this problem, we utilize the scale invariant feature transform (SIFT). SIFT is effective for self and partial occlusion because it can find correspondences between feature points even under partial loss, and its good global matching performance enables the AAM to continue tracking after complete occlusion without re-initialization. We also register SIFT features extracted from multi-view face images and use them during tracking to handle large pose changes effectively. The proposed algorithm is validated by comparison with other algorithms under the above three kinds of occlusion.
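The abstract gives no implementation detail, but the correspondence step it relies on is typically nearest-neighbor descriptor matching with Lowe's ratio test; a minimal numpy sketch (the descriptor arrays here are synthetic stand-ins, not real SIFT output):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbor in desc_b,
    keeping only matches that pass Lowe's ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up.
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches
```

Because each correspondence is validated independently, matches that survive the test remain usable when part of the feature set is lost to occlusion.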

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society / v.8 no.4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and the inaccurate feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework for emotion recognition that combines ASM with Lucas-Kanade (LK) optical flow, which is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
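The LK optical flow step the framework builds on can be sketched as a single Lucas-Kanade normal-equation solve; a minimal numpy version (pyramids and per-feature windowing omitted):

```python
import numpy as np

def lk_step(prev, curr):
    """Single Lucas-Kanade step: estimate the (u, v) translation that best
    explains the intensity change between two image windows."""
    Ix, Iy = np.gradient(prev, axis=1), np.gradient(prev, axis=0)
    It = curr - prev
    # Trim the border, where np.gradient falls back to one-sided differences.
    Ix, Iy, It = (a[1:-1, 1:-1] for a in (Ix, Iy, It))
    # Normal equations of the brightness-constancy least-squares problem.
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)  # (u, v) displacement
```

In practice this solve is applied per feature point on a small window around it, which is what makes it cheap enough for real-time tracking between ASM fits.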

Study on a Robust Object Tracking Algorithm Based on Improved SURF Method with CamShift

  • Ahn, Hyochang;Shin, In-Kyoung
    • Journal of the Korea Society of Computer and Information / v.23 no.1 / pp.41-48 / 2018
  • Surveillance systems are now widely deployed, and one of their key technologies is recognizing and tracking objects. To track a moving object robustly and efficiently in a complex environment, it is necessary to extract feature points in the object of interest and track the object using those feature points. In this paper, we propose a method that tracks objects of interest in real time by eliminating unnecessary information from objects, generating feature point descriptors from key feature points only, and thereby reducing the computational complexity of object recognition. Experimental results show that the proposed method is faster and more robust than conventional methods and can accurately track objects in various environments.
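The CamShift half of the method iterates a mean shift window update over a probability map; a minimal numpy sketch of that update (the SURF descriptor side is omitted, and the fixed window size is a simplification: full CamShift also adapts the window):

```python
import numpy as np

def mean_shift(prob, cx, cy, half, iters=20):
    """Shift a (2*half+1)-sized square window to the centroid of the
    probability mass under it, repeating until it stops moving."""
    for _ in range(iters):
        y0, y1 = cy - half, cy + half + 1
        x0, x1 = cx - half, cx + half + 1
        win = prob[y0:y1, x0:x1]
        total = win.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * win).sum() / total))
        nx = int(round((xs * win).sum() / total))
        if (nx, ny) == (cx, cy):
            break  # converged on the local density peak
        cx, cy = nx, ny
    return cx, cy
```

Each iteration moves the window toward the local mode of the probability map, which is why the loop converges in a handful of steps when the target moves only a few pixels per frame.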

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.2 / pp.120-133 / 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and this paper deals with both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates a 3D head pose from input video images: given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image, and updating the template dynamically makes the recovery robust to lighting variations and self-occlusion. In the facial expression synthesis phase, the variations of the major facial feature points are tracked by optical flow and retargeted to the 3D face model, while radial basis functions (RBF) deform the local area of the face model around the major feature points. Consequently, facial expression synthesis directly tracks the variations of the major feature points and indirectly estimates the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
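The RBF deformation step can be sketched as radial basis interpolation of the feature-point displacements; a minimal numpy version using a Gaussian kernel (the kernel choice and width `sigma` are assumptions, not taken from the paper):

```python
import numpy as np

def rbf_deform(ctrl_pts, ctrl_disp, query_pts, sigma=10.0):
    """Interpolate the displacements given at control points to arbitrary
    query points with Gaussian radial basis functions."""
    def gauss(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    # Solve for one weight vector per displacement axis, then evaluate.
    K = gauss(ctrl_pts, ctrl_pts)
    weights = np.linalg.solve(K, ctrl_disp)
    return gauss(query_pts, ctrl_pts) @ weights
```

By construction the interpolant reproduces the tracked displacements exactly at the major feature points and blends them smoothly over the surrounding regional points.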

Feature-based Object Tracking using an Active Camera (능동카메라를 이용한 특징기반의 물체추적)

  • 정영기;호요성
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.3 / pp.694-701 / 2004
  • In this paper, we propose a feature-based tracking system that traces moving objects with a pan-tilt camera after separating the global motion of the active camera from the local motion of the moving objects. The system traces only the local motion of the corner features in the foreground objects: it finds block motions between two consecutive frames using block-based motion estimation and then eliminates the global motion from them. For robust estimation of the camera motion from the background motion alone, we suggest a dominant motion extraction method that separates the background motions from the block motions. We also propose an efficient clustering algorithm, based on the attributes of the motion trajectories of corner features, to remove the motions of noise objects from the separated local motion. The proposed tracking system demonstrated good performance on several test video sequences.
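The paper's dominant motion extraction is not specified in this abstract; a hedged numpy sketch of the general idea, taking the per-component median of the block motion vectors as the dominant (camera) motion and thresholding the residuals to flag foreground blocks (both the median and the threshold are assumptions):

```python
import numpy as np

def split_motion(block_vectors, thresh=1.0):
    """Separate global and local motion: the median block vector stands in
    for the camera motion; blocks with a large residual are foreground."""
    dominant = np.median(block_vectors, axis=0)
    local = block_vectors - dominant          # camera motion removed
    foreground = np.linalg.norm(local, axis=1) > thresh
    return dominant, local, foreground
```

The median is a reasonable stand-in because background blocks are usually the majority, so a robust central estimate ignores the moving objects.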

Visual Tracking using Weighted Discriminative Correlation Filter

  • Song, Tae-Eun;Jang, Kyung-Hyun
    • Journal of the Korea Society of Computer and Information / v.21 no.11 / pp.49-57 / 2016
  • In this paper, we propose a novel tracking method based on weighted discriminative correlation filters (DCF), together with PSPR, a replacement for the conventional PSR, as a measure of tracker quality. The proposed method uses multiple DCFs to estimate the target position and gives larger weights to the correlation responses of the trackers that PSPR expects to perform better. Whereas existing multi-DCF trackers compute the final correlation response by directly summing the responses of each tracker, the proposed method obtains it as a weighted combination of the responses of the trackers that are robust in the given environment. Accordingly, the proposed method tracks more reliably against varied and complex backgrounds than a multi-DCF tracker. In tracking experiments on various video data, the presented method outperformed both a single-feature tracker and a multi-DCF tracker.
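PSPR is not defined in this abstract, so the sketch below substitutes the conventional PSR it replaces; the weight-and-combine scheme follows the description (numpy, synthetic correlation responses):

```python
import numpy as np

def psr(response):
    """Peak-to-sidelobe ratio: peak height relative to the mean and spread
    of the remaining (sidelobe) response values."""
    peak = response.max()
    sidelobe = response[response < peak]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-9)

def fuse_responses(responses):
    """Weight each tracker's correlation response by its PSR, then sum."""
    weights = np.array([psr(r) for r in responses])
    weights = weights / weights.sum()
    return sum(w * r for w, r in zip(weights, responses))
```

A sharply peaked response earns a high PSR and dominates the fusion, so a tracker confused by the background contributes little to the final position estimate.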

CONTINUOUS PERSON TRACKING ACROSS MULTIPLE ACTIVE CAMERAS USING SHAPE AND COLOR CUES

  • Bumrungkiat, N.;Aramvith, S.;Chalidabhongse, T.H.
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.136-141 / 2009
  • This paper proposes a framework for handover in continuous tracking of a person of interest across cooperative pan-tilt-zoom (PTZ) cameras. The algorithm is based on the mean shift algorithm, a robust non-parametric technique that climbs density gradients to find the peak of a probability distribution. Most tracking algorithms use only one cue, such as color, but color features are not always discriminative enough for target localization because illumination and viewpoint tend to change; moreover, the background may have a color similar to that of the target. In the proposed system, the person is tracked continuously across cooperative PTZ cameras by mean shift tracking using color and shape histograms as feature distributions, and the color and shape distributions of the person of interest are used to register the target across cameras. In the first camera, the person is selected for tracking using skin color, clothing color, and body boundary. To hand over the tracking process between two cameras, the second camera receives the color and shape cues of the target from the first camera and uses linear color calibration to assist the handover. Our experimental results demonstrate that color and shape features in the mean shift algorithm can continuously and accurately track the target person across cameras.
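The color feature distribution that mean shift climbs is typically a hue histogram back-projected onto the frame; a minimal numpy sketch (the bin count and 0-180 hue range are OpenCV-style assumptions, and the shape histogram is omitted):

```python
import numpy as np

def backproject(frame_hue, target_hue, bins=16):
    """Score every pixel by how common its (quantized) hue is in the
    target region -- the feature distribution mean shift climbs."""
    hist, _ = np.histogram(target_hue, bins=bins, range=(0, 180),
                           density=True)
    # Map each frame pixel's hue to its histogram bin and look up the score.
    idx = np.clip((frame_hue / (180 / bins)).astype(int), 0, bins - 1)
    return hist[idx]
```

For the handover step, the second camera would build the same histogram from the cues it receives, after the linear color calibration has mapped the first camera's hues into its own color space.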

Moving Object Tracking Using Active Contour Model (동적 윤곽 모델을 이용한 이동 물체 추적)

  • Han, Kyu-Bum;Baek, Yoon-Su
    • Transactions of the Korean Society of Mechanical Engineers A / v.27 no.5 / pp.697-704 / 2003
  • In this paper, a visual tracking system for arbitrarily shaped moving objects is proposed. Established tracking systems can be divided into model-based methods, which need a prior model of the target object, and image-based methods, which use image features. Model-based methods allow reliable tracking, but the shape must be simplified and the application is restricted to a specific target model. Image-based methods, on the other hand, can run faster, but shape information is lost and the tracking is sensitive to image noise. The proposed tracking system consists of an extraction process that recognizes the existence of a moving object and a tracking process that extracts the dynamic characteristics and shape information of the target objects. In particular, an active contour model is used to effectively track objects undergoing shape change. In the initialization of the contour model, the proposed initialization method avoids semi-automatic operation and increases the convergence speed of the contour. Also, for an efficient solution of the correspondence problem in multiple-object tracking, a variation function that uses the variation of the position structure in the image frame and the snake energy level is proposed. To verify the validity and effectiveness of the proposed tracking system, real-time tracking experiments on multiple moving objects were performed.
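The active contour update can be sketched as the classic semi-implicit snake iteration (Kass-style); a minimal numpy version with the external force left as an input (the paper's initialization method and variation function are not reproduced here, and the coefficient values are assumptions):

```python
import numpy as np

def snake_step(pts, alpha=0.1, beta=0.05, gamma=1.0, force=None):
    """One semi-implicit snake iteration: regularize a closed contour by its
    elasticity (alpha) and rigidity (beta), pulled by an external force."""
    n = len(pts)
    # Circulant second-difference operator on a closed contour.
    D2 = -2 * np.eye(n) + np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0)
    # Internal-energy matrix: elasticity penalizes stretching, rigidity bending.
    A = -alpha * D2 + beta * (D2 @ D2)
    if force is None:
        force = np.zeros_like(pts)
    return np.linalg.solve(A + gamma * np.eye(n), gamma * pts + force)
```

With no external force the internal energy alone smooths and slowly shrinks the contour; in use, an edge-derived force field halts that shrinkage on the object boundary.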

A Study on Seam Tracking and Weld Defects Detecting for Automated Pipe Welding by Using Double Vision Sensors (파이프 용접에서 다중 시각센서를 이용한 용접선 추적 및 용접결함 측정에 관한 연구)

  • 송형진;이승기;강윤희;나석주
    • Journal of Welding and Joining / v.21 no.1 / pp.60-65 / 2003
  • At present, welding of most large-diameter pipes is carried out manually. Automation of the welding process is necessary for consistent weld quality and improved productivity. In this study, two vision sensors based on optical triangulation were used to obtain the information for seam tracking and weld defect detection. Using the vision sensors, noise was removed, images and 3D information were obtained, and the positions of the feature points were detected. This process provided the seam and leg position data; calculated the gap size, fillet area, and leg length; and judged weld defects according to ISO 5817. Noise in the images was removed using the gradient values of the laser stripe's coordinates, and the feature points were detected by an algorithm based on the iterative polygon approximation method. Since processing time is critical, all of these steps must be completed during welding.
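The iterative polygon approximation used for feature point detection can be sketched in the Douglas-Peucker style; a minimal numpy version (the tolerance is an assumed parameter, and the paper's exact variant may differ):

```python
import numpy as np

def approx_polygon(pts, tol=1.0):
    """Iterative polygon approximation: if the point farthest from the
    end-to-end chord deviates more than tol, keep it and recurse on both
    halves; otherwise keep only the endpoints."""
    a, b = pts[0], pts[-1]
    dx, dy = b[0] - a[0], b[1] - a[1]
    norm = np.hypot(dx, dy)
    if norm == 0:
        d = np.linalg.norm(pts - a, axis=1)
    else:
        # Perpendicular distance of every point to the chord a -> b.
        d = np.abs(dx * (pts[:, 1] - a[1]) - dy * (pts[:, 0] - a[0])) / norm
    i = int(np.argmax(d))
    if d[i] <= tol:
        return [tuple(a), tuple(b)]
    left = approx_polygon(pts[: i + 1], tol)
    return left[:-1] + approx_polygon(pts[i:], tol)
```

On a laser stripe profile, the surviving vertices are exactly the corners of the joint geometry, which is why they serve directly as the seam and leg feature points.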

A Low Complexity, Descriptor-Less SIFT Feature Tracking System

  • Fransioli, Brian;Lee, Hyuk-Jae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2012.07a / pp.269-270 / 2012
  • Features that exhibit scale and rotation invariance, such as SIFT, are notorious for their expensive computation time and are often overlooked in real-time tracking scenarios. This paper proposes a descriptor-less matching algorithm, based on motion vectors between consecutive frames, that finds the geometrically closest candidate for each tracked reference feature in the database. Descriptor-less matching forgoes expensive SIFT descriptor extraction without loss of matching accuracy and shows a dramatic speed-up over traditional naive-matching-based trackers. Descriptor-less SIFT tracking runs in real time on an Intel dual-core machine at an average of 24 frames per second.
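The descriptor-less matching described above can be sketched directly: predict each reference feature's position from its motion vector between consecutive frames, then take the geometrically closest candidate keypoint; a minimal numpy version (the search radius is an assumed parameter):

```python
import numpy as np

def match_by_motion(ref_pts, ref_motion, cand_pts, radius=3.0):
    """Predict where each reference feature should appear using its last
    motion vector, then match it to the geometrically closest candidate."""
    predicted = ref_pts + ref_motion
    matches = []
    for i, p in enumerate(predicted):
        d = np.linalg.norm(cand_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= radius:   # reject matches outside the search radius
            matches.append((i, j))
    return matches
```

Because only keypoint coordinates are compared, the expensive descriptor extraction and high-dimensional distance computation drop out of the per-frame loop entirely.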
