• Title/Summary/Keyword: Face recognition/tracking

103 search results

Tracking by Detection of Multiple Faces using SSD and CNN Features

  • Tai, Do Nhu; Kim, Soo-Hyung; Lee, Guee-Sang; Yang, Hyung-Jeong; Na, In-Seop; Oh, A-Ran
    • Smart Media Journal / v.7 no.4 / pp.61-69 / 2018
  • Multi-tracking of general objects and specific faces is an important topic in computer vision, applicable to many branches of industry such as biometrics and security. The rapid development of deep neural networks has brought dramatic improvements in face recognition and object detection, which in turn improve multiple-face tracking techniques that exploit the tracking-by-detection method. Our proposed method uses a face detector trained on a head dataset to resolve the face deformation problem during tracking. Further, we use robust face features extracted from a deep face recognition network to match tracklets with detected faces using the Hungarian matching method. We achieved promising results regarding the use of deep face features and head detection on a face tracking benchmark.
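
The appearance-based assignment step this abstract describes — matching tracklets to detections with the Hungarian method on deep-feature distances — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; SciPy's `linear_sum_assignment` implements the Hungarian method, and the cosine-distance gate `max_cost` is an assumed parameter.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracklets(track_feats, det_feats, max_cost=0.5):
    """Assign detections to tracklets by minimizing total cosine distance."""
    # Normalize embeddings so the dot product equals cosine similarity.
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T                      # cosine distance matrix
    rows, cols = linear_sum_assignment(cost)  # Hungarian method
    # Reject pairs whose appearance distance is too large (lost/new faces).
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
```

Unmatched tracklets and detections would then spawn or terminate tracks, as is usual in tracking-by-detection pipelines.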

Face Tracking System for Efficient Face Recognition in Intelligent Digital TV (지능형 디지털 TV에서 효율적인 얼굴 인식을 위한 얼굴 추적 시스템 구현)

  • Kwon, Ki-Poong; Kim, Seung-Gu; Kim, Seung-Kyun; Hwang, Min-Cheol; Ko, Sung-Jea
    • Proceedings of the IEEK Conference / 2006.06a / pp.267-268 / 2006
  • Advanced TV makes life more convenient for viewers, and it is based on recognition technology. In this paper, we propose the implementation of a face tracking system for efficient face recognition in intelligent digital TV. To recognize a face, face detection must be performed first. We use motion information to track the face: continuous face tracking is possible by combining the continuously detected face region with motion information. Thus the computational complexity of the recognition module in the whole system can be reduced.
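
A minimal sketch of how motion information can restrict the face-detection search region between frames — an illustration of the general technique, not the paper's exact implementation:

```python
import numpy as np

def motion_search_region(prev_frame, curr_frame, thresh=25):
    """Return (x0, y0, x1, y1) bounding the moving pixels, or None if static.

    Restricting face detection to this region is what lets a tracker avoid
    a full-frame search (and a full recognition pass) on most frames.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > thresh
    if not moving.any():
        return None
    ys, xs = np.nonzero(moving)
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1
```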


Greedy Learning of Sparse Eigenfaces for Face Recognition and Tracking

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.3 / pp.162-170 / 2014
  • Appearance-based subspace models such as eigenfaces have been widely recognized as one of the most successful approaches to face recognition and tracking. The success of eigenfaces mainly has its origins in the benefits offered by principal component analysis (PCA), the representational power of the underlying generative process for high-dimensional noisy facial image data. The sparse extension of PCA (SPCA) has recently received significant attention in the research community. SPCA functions by imposing sparseness constraints on the eigenvectors, a technique that has been shown to yield more robust solutions in many applications. However, when SPCA is applied to facial images, the time and space complexity of PCA learning becomes a critical issue (e.g., real-time tracking). In this paper, we propose a very fast and scalable greedy forward selection algorithm for SPCA. Unlike a recent semidefinite program-relaxation method that suffers from complex optimization, our approach can process several thousands of data dimensions in reasonable time with little accuracy loss. The effectiveness of our proposed method was demonstrated on real-world face recognition and tracking datasets.
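
The greedy forward selection the abstract describes can be illustrated as follows: at each step, add the variable that maximizes the leading eigenvalue of the covariance restricted to the selected index set. This is a naive sketch, not the authors' algorithm — their method is engineered to scale to thousands of dimensions, whereas this version recomputes an eigendecomposition per candidate.

```python
import numpy as np

def greedy_spca(X, k):
    """Greedy forward selection for a sparse first principal component.

    Returns the selected variable indices and a loading vector that is
    nonzero only on those indices (cardinality-k sparse eigenvector).
    """
    C = np.cov(X, rowvar=False)
    selected, remaining = [], list(range(C.shape[0]))
    for _ in range(k):
        best_j, best_val = None, -np.inf
        for j in remaining:
            idx = selected + [j]
            # Leading eigenvalue of the covariance restricted to idx.
            val = np.linalg.eigvalsh(C[np.ix_(idx, idx)])[-1]
            if val > best_val:
                best_j, best_val = j, val
        selected.append(best_j)
        remaining.remove(best_j)
    # Dense eigenvector on the selected block, zero elsewhere.
    w = np.zeros(C.shape[0])
    _, vecs = np.linalg.eigh(C[np.ix_(selected, selected)])
    w[selected] = vecs[:, -1]
    return selected, w
```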

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard; Choi, Jongmoo; Labeau, Matthieu; Leksut, Jatuporn Toy; Meng, Lingchao
    • Journal of Information and Communication Convergence Engineering / v.11 no.3 / pp.207-215 / 2013
  • In this paper, we address the challenging computer vision problem of obtaining a reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face, and then we deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track a person's head in 3D and predict facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted experiments for facial expression recognition using both frame-based and sequence-based approaches. Our method achieves a 75.9% recognition rate on 8 subjects with 7 key expressions. Our approach is a considerable step toward new applications including human-computer interaction, behavioral science, robotics, and games.
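
The prediction step — projecting the updated 3D face model's landmarks into the image — reduces to a pinhole camera projection. A minimal sketch, assuming known camera intrinsics and a rigid head pose (R, t) supplied by the 3D tracker:

```python
import numpy as np

def project_landmarks(pts3d, R, t, fx, fy, cx, cy):
    """Project 3D landmarks into the image with a pinhole camera model.

    pts3d: (N, 3) model-space landmarks; R, t: head pose (rotation,
    translation); fx, fy, cx, cy: intrinsics (assumed calibrated).
    """
    cam = pts3d @ R.T + t            # model -> camera coordinates
    x = fx * cam[:, 0] / cam[:, 2] + cx
    y = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([x, y], axis=1)  # (N, 2) pixel positions
```

The 2D tracker then refines these predicted positions, and the refined landmarks update the 3D model, closing the loop the abstract describes.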

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae; Kim, Eung-Hee; Sohn, Jin-Hun; Kweon, In So
    • The Journal of Korea Robotics Society / v.8 no.4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the most successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in practical applications, because feature positions are extracted unstably due to the limited number of iterations in the ASM fitting algorithm. Inaccurate feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework that combines ASM with Lucas-Kanade (LK) optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
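
The core of an LK optical-flow update for a single feature point fits in a few lines: solve the window's least-squares normal equations built from spatial gradients and the temporal difference. This is a textbook single-iteration version for illustration, not the paper's tracker (which would add pyramids and iteration):

```python
import numpy as np

def lk_step(I0, I1, x, y, win=7):
    """One Lucas-Kanade step: displacement of the window centered at (x, y).

    Solves min_d ||A d + b||^2 where A holds the spatial gradients of I0
    over the window and b is the temporal difference I1 - I0.
    """
    h = win // 2
    p0 = I0[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    p1 = I1[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    gy, gx = np.gradient(p0)          # spatial gradients (row, col order)
    gt = p1 - p0                      # temporal difference
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    d = np.linalg.lstsq(A, -gt.ravel(), rcond=None)[0]
    return d                          # (dx, dy)
```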

A New Face Tracking and Recognition Method Adapted to the Environment (환경에 적응적인 얼굴 추적 및 인식 방법)

  • Ju, Myung-Ho; Kang, Hang-Bong
    • The KIPS Transactions: Part B / v.16B no.5 / pp.385-394 / 2009
  • Face tracking and recognition are difficult problems because the face is a non-rigid object. The main causes of tracking and recognition failure are changes in face pose and environmental illumination. To address these problems, we propose a nonlinear manifold framework for face pose and illumination normalization. Specifically, to track and recognize a face in video with various pose variations, we approximate each face pose density as a single Gaussian density via PCA (Principal Component Analysis) using images sampled from training video sequences, and then construct a GMM (Gaussian Mixture Model) for each person. To handle the illumination problem in tracking and recognition, we decompose face images into reflectance and illuminance using the SSR (Single Scale Retinex) model. To obtain normalized reflectance, the reflectance is rescaled by histogram equalization over a defined range. We approximate the illuminance with the trained manifold, since most illumination-induced variation resides in the illuminance component. By combining these two features in our manifold framework, we obtain efficient face tracking and recognition results on indoor and outdoor video. To improve the video-based tracking results, we update the weight of each face pose density at each frame from the previous frame's tracking result using the EM algorithm. Our experimental results show that our method is more efficient than other methods.
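
The SSR decomposition used above can be sketched directly: estimate the illuminance with a Gaussian low-pass filter and take the log-domain residual as the reflectance. The smoothing scale `sigma` below is an assumed parameter, not the paper's value:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr_decompose(img, sigma=15.0):
    """Single Scale Retinex: split an image into illuminance and reflectance.

    The illuminance L is estimated as a Gaussian-smoothed version of the
    image; the reflectance is the log-domain residual log(I) - log(L).
    """
    img = img.astype(float) + 1.0          # avoid log(0)
    illum = gaussian_filter(img, sigma)
    reflect = np.log(img) - np.log(illum)
    return reflect, illum
```

The reflectance channel is then normalized (histogram equalization in the paper), while the illuminance channel carries the lighting variation to be modeled separately.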

Real-Time Face Detection, Tracking and Tilted Face Image Correction System Using Multi-Color Model and Face Feature (복합 칼라모델과 얼굴 특징자를 이용한 실시간 얼굴 검출 추적과 기울어진 얼굴보정 시스템)

  • Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.9 no.4 / pp.470-481 / 2006
  • In this paper, we propose a real-time face detection, tracking, and tilted-face correction system using a multi-color model and facial feature information. In the proposed system, we detect face candidates using the YCbCr and YIQ color models. We then detect faces using vertical and horizontal projection and track them using the Hausdorff matching method. We also correct tilted faces by correcting the tilted eye features. Experiments were performed on 110 test images and show good performance. The results show that the proposed algorithm is robust for real-time face detection and tracking under changing external conditions and for recognizing tilted faces. The face detection and tilted-face correction rates were 92.27% and 92.70%, respectively, and the proposed algorithm achieved a 90.0% recognition rate.
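
A sketch of the skin-color candidate step in YCbCr space. The RGB-to-YCbCr conversion is the standard ITU-R BT.601 transform; the Cb/Cr thresholds below are commonly used defaults, not necessarily the paper's values:

```python
import numpy as np

def skin_mask_ycbcr(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of skin-colored pixels via fixed Cb/Cr thresholds."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # ITU-R BT.601 chrominance channels.
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))
```

Face candidates would then be located by the vertical/horizontal projection of this mask, as the abstract describes.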


Face Tracking and Recognition in Video with PCA-based Pose-Classification and (2D)2PCA recognition algorithm (비디오속의 얼굴추적 및 PCA기반 얼굴포즈분류와 (2D)2PCA를 이용한 얼굴인식)

  • Kim, Jin-Yul; Kim, Yong-Seok
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.5 / pp.423-430 / 2013
  • In typical face recognition systems, the frontal view of the face is preferred to reduce the complexity of recognition. Thus individuals may be required to stare into the camera, or the camera must be placed so that frontal images are acquired easily. However, these constraints severely restrict the adoption of face recognition in many applications. To alleviate this problem, in this paper we address the problem of tracking and recognizing faces in video captured with no environmental control. The face tracker extracts a sequence of angle- and size-normalized face images using the IVT (Incremental Visual Tracking) algorithm, which is known to be robust to changes in appearance. Since no constraints are imposed between the face direction and the video camera, the face images contain various poses. The pose is therefore identified using a PCA (Principal Component Analysis)-based pose classifier, and only pose-matched face images are used to identify the person against a pre-built face DB with 5 poses. For face recognition, the PCA, 2DPCA, and (2D)²PCA algorithms were tested to compute the recognition rate and execution time.
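
For reference, 2DPCA differs from classical PCA by operating on image matrices directly: the image covariance is built from matrix products rather than from vectorized outer products, which keeps the eigenproblem small (width × width). A minimal sketch, not the paper's implementation:

```python
import numpy as np

def two_d_pca(images, k):
    """2DPCA: learn a column-space projection directly from image matrices.

    G = E[(A - mean)^T (A - mean)] is the (w x w) image covariance; its
    top-k eigenvectors X project each (h x w) image A to features A @ X.
    """
    A = np.asarray(images, dtype=float)          # (n, h, w)
    mean = A.mean(axis=0)
    centered = A - mean
    G = np.einsum('nij,nik->jk', centered, centered) / len(A)
    _, vecs = np.linalg.eigh(G)
    X = vecs[:, ::-1][:, :k]                     # top-k eigenvectors
    return mean, X

def project(img, mean, X):
    """Feature matrix of one image: (h x k)."""
    return (img - mean) @ X
```

(2D)²PCA extends this by additionally learning a row-space projection from (A − mean)(A − mean)^T, compressing both dimensions.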

Implementing Augmented Reality By Using Face Detection, Recognition And Motion Tracking (얼굴 검출과 인식 및 모션추적에 의한 증강현실 구현)

  • Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information / v.17 no.1 / pp.97-104 / 2012
  • Natural User Interface (NUI) technologies introduce new trends in using devices such as computers and other electronics. In this paper, an augmented reality application on a mobile device is implemented using face detection, recognition, and motion tracking. Faces are detected with the Viola-Jones algorithm on images from the front camera. The eigenface algorithm is employed for face recognition and face motion tracking. The augmented reality is implemented by overlaying the rear-camera image and GPS and accelerometer sensor data with a 3D graphic object corresponding to the recognized face. The algorithms and methods are constrained by the mobile device's specifications, such as processing power and main memory capacity.
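
The eigenface pipeline the abstract relies on — PCA on vectorized faces followed by nearest-neighbor matching in coefficient space — can be sketched as follows (an illustrative reconstruction, not the paper's mobile implementation):

```python
import numpy as np

def train_eigenfaces(faces, k):
    """Eigenface training: PCA on flattened face images via thin SVD.

    faces: (n, d) matrix of vectorized training images. Returns the mean
    face and the top-k eigenfaces (rows of Vt).
    """
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]

def recognize(face, mean, eigenfaces, gallery_coeffs, labels):
    """Nearest neighbor in eigenface coefficient space."""
    c = eigenfaces @ (face - mean)
    dists = np.linalg.norm(gallery_coeffs - c, axis=1)
    return labels[int(np.argmin(dists))]
```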

Design of Face Recognition and Tracking System by Using RBFNNs Pattern Classifier with Object Tracking Algorithm (RBFNNs 패턴분류기와 객체 추적 알고리즘을 이용한 얼굴인식 및 추적 시스템 설계)

  • Oh, Seung-Hun; Oh, Sung-Kwun; Kim, Jin-Yul
    • The Transactions of The Korean Institute of Electrical Engineers / v.64 no.5 / pp.766-778 / 2015
  • In this paper, we design a hybrid recognition and tracking system realized with a polynomial-based RBFNN pattern classifier and a particle filter. The RBFNN classifier is built by learning training data for diverse pose images, and its optimized parameters are obtained by Particle Swarm Optimization (PSO). The test pose images are face images captured under real conditions, where the face is detected by the AdaBoost algorithm. To improve recognition performance for a detected image, pose estimation is carried out as a preprocessing step before face recognition. PCA is used for pose estimation: the pose of the detected image is assigned to a pre-built pose by considering the feature difference between the previously built pose images and the newly detected image. Recognition of the detected image is then performed by the polynomial-based RBFNN pattern classifier, and if the detected image matches the tracking target, the target is tracked by the particle filter in real time. Moreover, when PF tracking fails, the AdaBoost algorithm re-detects the facial area, and both the pose estimation and image recognition procedures are repeated as described above. Finally, experimental results are compared and analyzed using the Honda/UCSD dataset, a well-known benchmark DB.
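
One predict-weight-resample cycle of a bootstrap particle filter, as used in the tracking stage above, can be sketched as follows. The random-walk motion model and the `measure` likelihood are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def particle_filter_step(particles, weights, measure, motion_std=5.0, rng=None):
    """One bootstrap particle-filter cycle for 2D position tracking.

    particles: (N, 2) candidate face positions; measure(P) returns a
    likelihood per particle (e.g., a classifier score at that location).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0, motion_std, particles.shape)
    # Weight: multiply by the measurement likelihood and renormalize.
    weights = weights * measure(particles)
    weights = weights / weights.sum()
    # Resample to fight weight degeneracy; weights reset to uniform.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights, particles.mean(axis=0)  # state estimate
```

Tracking failure can be declared when the likelihoods collapse (all weights near zero), at which point the detector re-initializes the particles, mirroring the AdaBoost re-detection loop in the abstract.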