• Title/Summary/Keyword: face pose transform

Search Results: 15

Face Recognition Robust to Pose Variations (포즈 변화에 강인한 얼굴 인식)

  • 노진우;문인혁;고한석
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.5 / pp.63-69 / 2004
  • This paper proposes a novel method for achieving pose-invariant face recognition using a cylindrical model. On the assumption that a face is shaped roughly like a cylinder, we estimate the head's pose and then extract a frontal face image via a pose transform using the estimated pose angle. By employing the proposed pose transform, we can increase face recognition performance using the resulting frontal face images. In representative experiments, the pose transform raised the recognition rate from 61.43% to 94.76%. Additionally, the proposed method achieves a recognition rate as good as that of a more complicated 3D face model.
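
The cylindrical pose transform described above can be sketched as a per-column remapping: each image column is treated as a point on a cylinder of radius half the face width, and the estimated yaw is undone before projecting back. This is a minimal illustrative simplification, not the paper's implementation; the radius choice and interpolation are assumptions.

```python
import numpy as np

def yaw_normalize(face, yaw_deg):
    """Approximate a frontal view by treating the head as a cylinder of
    radius half the image width and undoing the estimated yaw angle.
    Hypothetical simplification of a cylindrical-model pose transform."""
    h, w = face.shape[:2]
    r = w / 2.0
    cx = (w - 1) / 2.0
    phi = np.deg2rad(yaw_deg)
    out = np.zeros_like(face, dtype=float)
    for x in range(w):
        # angle on the cylinder surface for this destination column
        s = np.clip((x - cx) / r, -1.0, 1.0)
        theta = np.arcsin(s)
        # source column after rotating the cylinder by the estimated yaw
        x_src = cx + r * np.sin(theta + phi)
        if 0 <= x_src <= w - 1:
            x0 = int(np.floor(x_src))
            x1 = min(x0 + 1, w - 1)
            a = x_src - x0
            # linear interpolation between the two neighbouring columns
            out[:, x] = (1 - a) * face[:, x0] + a * face[:, x1]
    return out
```

With a yaw of zero the remapping is the identity, which is a quick sanity check on the geometry.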

Face Pose Transformation for Pose Invariant Face Recognition (포즈에 독립적인 얼굴 인식을 위한 얼굴 포즈 변환)

  • Park Hyun-Sun;Park Jong-Il;Kim Whoi-Yul
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.6C / pp.570-576 / 2005
  • Recognition of posed faces is one of the most challenging problems in the field of face recognition. In this paper, as a preprocessing step for recognizing such faces, a method to transform non-frontal face images into frontal face images is proposed. The linear relationship between eigenfaces is utilized to obtain a pose transform matrix. The proposed method is verified with a well-known face recognition algorithm based on PCA/LDA. Compared to the conventional algorithm applied directly to posed face images, our experimental results indicate that the proposed method improves the recognition rate of such faces by 20%.
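
A linear pose-transform matrix like the one above can be learned by least squares over paired eigenface coefficients of the same subjects in posed and frontal views. The exact training procedure in the paper may differ; this is a sketch of the general idea, with all variable names hypothetical.

```python
import numpy as np

def learn_pose_transform(C_posed, C_frontal):
    """Least-squares matrix M with C_frontal ≈ M @ C_posed, where the
    columns of C_posed / C_frontal are eigenface coefficient vectors of
    the same person under a fixed pose and under the frontal view.
    A guessed form of the paper's 'pose transform matrix'."""
    # Solve M @ C_posed = C_frontal, i.e. C_posed.T @ M.T = C_frontal.T
    M_T, *_ = np.linalg.lstsq(C_posed.T, C_frontal.T, rcond=None)
    return M_T.T

# toy check: if the relation is exactly linear, M recovers it
rng = np.random.default_rng(0)
true_M = rng.normal(size=(5, 5))
C_posed = rng.normal(size=(5, 20))
C_frontal = true_M @ C_posed
M = learn_pose_transform(C_posed, C_frontal)
```

At test time a posed face's coefficient vector would be multiplied by `M` before reconstruction or recognition.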

Performance Improvement for Robust Eye Detection Algorithm under Environmental Changes (환경변화에 강인한 눈 검출 알고리즘 성능향상 연구)

  • Ha, Jin-gwan;Moon, Hyeon-joon
    • Journal of Digital Convergence / v.14 no.10 / pp.271-276 / 2016
  • In this paper, we propose a face and eye detection algorithm robust to changing environmental conditions such as lighting and pose variations. Generally, eye detection is performed after face detection, and variations in pose and lighting affect detection performance. We therefore explore face detection based on the Modified Census Transform algorithm. The eyes are dominant features in the face area but are sensitive to lighting conditions, eyeglasses, etc. To address these issues, we propose a robust eye detection method based on the Gabor transformation and the Features from Accelerated Segment Test algorithm. The proposed algorithm achieves a 98.4% correct detection rate at 27.4 ms for eye detection, and a 96.4% correct detection rate at 36.3 ms for face detection.
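
The Modified Census Transform named above compares each pixel of a 3×3 neighbourhood against the local mean, yielding a 9-bit illumination-robust code per pixel. The following minimal NumPy version illustrates the descriptor itself, not the authors' detector built on top of it.

```python
import numpy as np

def mct(img):
    """Modified Census Transform: each interior pixel gets a 9-bit code,
    one bit per 3x3 neighbour, set when that neighbour exceeds the local
    3x3 mean. Bit order (row-major over the window) is a convention
    chosen here for illustration."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            bits = (win.ravel() > win.mean()).astype(np.uint16)
            codes[y - 1, x - 1] = int((bits * (1 << np.arange(9))).sum())
    return codes
```

Because the comparison is against the local mean rather than a fixed threshold, the codes are invariant to monotone local brightness changes, which is what makes MCT attractive under varying lighting.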

Accurate Face Pose Estimation and Synthesis Using Linear Transform Among Face Models (얼굴 모델간 선형변환을 이용한 정밀한 얼굴 포즈추정 및 포즈합성)

  • Suvdaa, B.;Ko, J.
    • Journal of Korea Multimedia Society / v.15 no.4 / pp.508-515 / 2012
  • This paper presents a method that estimates the face pose of a given face image and synthesizes face images at arbitrary poses using the Active Appearance Model (AAM). The AAM, which has been successfully applied to various applications, is an example-based learning model that learns the variations of its training examples. However, a single model has difficulty handling large pose variations in face images. This paper proposes building a separate model covering only a small range of angles for each pose; with the proper model for a given face image, we can then achieve accurate pose estimation and synthesis. When the model used for pose estimation was not trained with the angle to be synthesized, we solve the problem by learning the linear relationship between the models in advance. In experiments on the public Yale B face database, we present accurate pose estimation and pose synthesis results. For our own face database with large pose variations, we demonstrate successful frontal pose synthesis.

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.21 no.10 / pp.11-19 / 2016
  • As displays grow larger and take on more varied forms, previous gaze-tracking methods no longer apply. Mounting the gaze-tracking camera above the display avoids the problems of display size and height, but this placement rules out the infrared corneal-reflection information that previous methods rely on. In this paper, we propose a pupil detection method robust to eye occlusion, and a simple way to compute the gaze position on the display from the inner eye corner, the pupil center, and the face pose information. In the proposed method, the camera switches between wide- and narrow-angle modes according to the person's position: when a face is detected in the wide-mode field of view (FOV), the camera computes the face position and switches to narrow mode, and the narrow-mode frames carry the gaze-direction information of a person at long distance. Gaze calculation consists of a face pose estimation step and a gaze direction step. The face pose is estimated by mapping feature points of the detected face onto a 3D model. To compute the gaze direction, we first fit an ellipse using iris edge information split from the pupil; when the pupil is occluded, its position is estimated with a deformable template. The pupil center, the inner eye corner, and the face pose information then determine the gaze position on the display. In the experiments, the proposed gaze-tracking algorithm removes the constraints imposed by display shape and effectively computes the gaze direction of a person at long distance with a single camera, as demonstrated over a range of distances.
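
The final step above, turning an eye position and gaze direction into a point on the display, reduces to intersecting a ray with the display plane. The sketch below assumes a display lying in the plane z = 0 with the eye in front of it; the paper's actual screen-coordinate calculation requires calibrated camera/display geometry.

```python
import numpy as np

def gaze_point_on_display(eye_pos, gaze_dir):
    """Intersect a gaze ray with the display plane z = 0 and return the
    (x, y) hit point, or None if the ray misses the screen. Toy stand-in
    for a calibrated gaze-to-screen mapping; units are arbitrary."""
    eye_pos = np.asarray(eye_pos, float)
    d = np.asarray(gaze_dir, float)
    d = d / np.linalg.norm(d)
    if abs(d[2]) < 1e-9:
        return None          # ray parallel to the screen plane
    t = -eye_pos[2] / d[2]   # ray parameter where z becomes 0
    if t <= 0:
        return None          # looking away from the screen
    return (eye_pos + t * d)[:2]
```

In practice the gaze direction itself would come from the face pose combined with the pupil-center offset relative to the inner eye corner, as the abstract describes.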

Pose-invariant Face Recognition using a Cylindrical Model and Stereo Camera (원통 모델과 스테레오 카메라를 이용한 포즈 변화에 강인한 얼굴인식)

  • 노진우;홍정화;고한석
    • Journal of KIISE:Software and Applications / v.31 no.7 / pp.929-938 / 2004
  • This paper proposes a pose-invariant face recognition method using a cylindrical model and a stereo camera. The paper is divided into two parts: the single-input-image case and the stereo-input-image case. In the single-image case, we normalize the face's yaw pose using the cylindrical model; in the stereo case, we normalize the face's pitch pose using the cylindrical model with a pitch angle previously estimated from the stereo geometry. Also, since two images acquired at the same time are available, overall recognition performance can be increased by decision-level fusion. In representative experiments, the yaw pose transform raised the recognition rate from 61.43% to 94.76%, and the proposed method achieves a recognition rate as good as that of a more complicated 3D face model. Using the stereo camera system, we achieved a further 5.24% improvement for upward-looking face poses and another 3.34% through decision-level fusion.
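
Decision-level fusion of the two stereo views can be as simple as combining per-identity similarity scores before taking the argmax. The abstract does not state the fusion rule used, so the weighted sum below is an illustrative assumption.

```python
import numpy as np

def fuse_decisions(scores_left, scores_right, w=0.5):
    """Decision-level fusion by a weighted sum of per-identity similarity
    scores from the two stereo views; the fused identity is the argmax.
    The sum rule and the weight w are assumptions for illustration."""
    s = (w * np.asarray(scores_left, float)
         + (1 - w) * np.asarray(scores_right, float))
    return int(np.argmax(s)), s
```

A confident decision from one view can thereby outvote a weak, wrong decision from the other, which is the mechanism behind the reported 3.34% gain.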

3D Face Alignment and Normalization Based on Feature Detection Using Active Shape Models : Quantitative Analysis on Aligning Process (ASMs을 이용한 특징점 추출에 기반한 3D 얼굴데이터의 정렬 및 정규화 : 정렬 과정에 대한 정량적 분석)

  • Shin, Dong-Won;Park, Sang-Jun;Ko, Jae-Pil
    • Korean Journal of Computational Design and Engineering / v.13 no.6 / pp.403-411 / 2008
  • The alignment of facial images is crucial for 2D face recognition, and the same holds for facial meshes in 3D face recognition. Most 3D face recognition methods refer to 3D alignment but do not describe their approaches in detail. In this paper, we focus on describing an automatic 3D alignment from the viewpoint of quantitative analysis. This paper presents a framework for 3D face alignment and normalization based on feature points obtained by Active Shape Models (ASMs). The positions of the eyes and mouth make it possible to align the 3D face exactly in three-dimensional space. The rotational transform about each axis is defined with respect to the reference position. In the aligning process, the rotational transform converts an input 3D face with large pose variations to the reference frontal view. The facial region is cropped from the aligned face using a sphere centered at the nose tip of the 3D face. The cropped face is shifted and brought into a frame of specified size for normalization. Subsequently, interpolation is carried out to sample the face at equal intervals and fill holes; color interpolation is carried out at the same intervals. The outputs are normalized 2D and 3D faces that can be used for face recognition. Finally, we carry out two sets of experiments to measure alignment errors and to evaluate the performance of the suggested process.
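
Aligning a mesh from eye and mouth landmarks, as described above, amounts to building an orthonormal face frame from the three points and rotating it onto the reference axes. This generic landmark-based construction is sketched from the abstract's description; the paper's exact axis conventions are an assumption here.

```python
import numpy as np

def alignment_rotation(left_eye, right_eye, mouth):
    """Build a rotation whose rows are the face axes: x along the eye
    line, y from the eye midpoint toward the mouth (Gram-Schmidt
    orthogonalized), z their cross product (the face normal). Applying
    p_aligned = R @ (p - origin) maps the face onto the reference frame."""
    le, re, m = (np.asarray(p, float) for p in (left_eye, right_eye, mouth))
    x = re - le
    x /= np.linalg.norm(x)
    v = m - (le + re) / 2.0
    y = v - np.dot(v, x) * x   # remove the component along the eye line
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    return np.stack([x, y, z])
```

For an already-frontal face the construction returns the identity, and in general the result is orthonormal by construction.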

Viewpoint Unconstrained Face Recognition Based on Affine Local Descriptors and Probabilistic Similarity

  • Gao, Yongbin;Lee, Hyo Jong
    • Journal of Information Processing Systems / v.11 no.4 / pp.643-654 / 2015
  • Face recognition under controlled settings, such as limited viewpoint and illumination change, can achieve good performance nowadays; real-world face recognition, however, is still challenging. In this paper, we propose combining the Affine Scale-Invariant Feature Transform (SIFT) with probabilistic similarity for face recognition under large viewpoint change. Affine SIFT is an extension of the SIFT algorithm that detects affine-invariant local descriptors by generating a series of different viewpoints using affine transformations, thereby allowing a viewpoint difference between the gallery face and the probe face. However, the human face is not planar, as it contains significant 3D depth, so Affine SIFT does not work well for large pose changes. To complement this, we combine it with probabilistic similarity, which obtains the log likelihood between the probe and gallery face from a sum-of-squared-differences (SSD) distribution learned in an offline process. Our experimental results show that our framework achieves better recognition accuracy than the compared algorithms on the FERET database.
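
The probabilistic-similarity component above boils down to fitting a density to SSD values offline and scoring each probe/gallery pair by its log likelihood. The paper does not specify the density model in this abstract, so the single Gaussian below is an illustrative assumption.

```python
import numpy as np

def fit_ssd_gaussian(ssds):
    """Fit a 1-D Gaussian to sum-of-squared-difference values collected
    offline (e.g. over genuine pairs). Returns (mean, std); the Gaussian
    form is an assumption for illustration."""
    ssds = np.asarray(ssds, float)
    return ssds.mean(), ssds.std() + 1e-9  # epsilon guards against std=0

def log_likelihood(ssd, mean, std):
    """Gaussian log-likelihood of a single probe/gallery SSD value."""
    return -0.5 * np.log(2 * np.pi * std ** 2) - (ssd - mean) ** 2 / (2 * std ** 2)
```

A pair whose SSD falls near the learned genuine-pair mean then scores higher than an outlying pair, which is the signal the recognizer thresholds or ranks on.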

Robust AAM-based Face Tracking with Occlusion Using SIFT Features (SIFT 특징을 이용하여 중첩상황에 강인한 AAM 기반 얼굴 추적)

  • Eom, Sung-Eun;Jang, Jun-Su
    • The KIPS Transactions:PartB / v.17B no.5 / pp.355-362 / 2010
  • Face tracking estimates the motion of a non-rigid face together with a rigid head in 3D, and plays an important role in higher-level tasks such as face, facial expression, and emotion recognition. In this paper, we propose an AAM-based face tracking algorithm. The AAM has been widely used to segment and track deformable objects, but many difficulties remain; in particular, it tends to diverge or converge to local minima when the target object is self-occluded, or partially or completely occluded. To address this problem, we utilize the Scale-Invariant Feature Transform (SIFT). SIFT is effective under self- and partial occlusion because it can find correspondences between feature points under partial loss, and its good global-matching performance enables the AAM to continue tracking through complete occlusions without re-initialization. We also register SIFT features extracted from multi-view face images and use them during tracking to track a face effectively across large pose changes. The proposed algorithm is validated by comparison with other algorithms under the above three kinds of occlusion.

Head Pose Estimation Based on Perspective Projection Using PTZ Camera (원근투영법 기반의 PTZ 카메라를 이용한 머리자세 추정)

  • Kim, Jin Suh;Lee, Gyung Ju;Kim, Gye Young
    • KIPS Transactions on Software and Data Engineering / v.7 no.7 / pp.267-274 / 2018
  • This paper describes a head pose estimation method using a PTZ (Pan-Tilt-Zoom) camera. When the external parameters of a camera change through rotation and translation, the estimated pose for the same head also varies. In this paper, we propose a new method to estimate the head pose independently of the PTZ camera's varying parameters. The proposed method consists of three steps: face detection, feature extraction, and pose estimation, which use the MCT (Modified Census Transform) feature, a facial regression tree, and the POSIT (Pose from Orthography and Scaling with ITeration) algorithm, respectively. The existing POSIT algorithm does not consider camera rotation, so this paper improves POSIT based on perspective projection in order to estimate the head pose robustly even when the camera's external parameters change. Through experiments, we confirmed that the RMSE (Root Mean Square Error) of the proposed method is 0.6° lower than that of the conventional method.
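
The core idea of making the estimate independent of the PTZ parameters is to compose the head pose measured in camera coordinates with the camera's own known pan/tilt rotation. The paper folds this into a perspective-corrected POSIT rather than a separate step, so the sketch below is only an illustration of the compensation, with an assumed pan-then-tilt camera orientation model.

```python
import numpy as np

def rot_y(pan):
    """Rotation about the vertical axis (camera pan)."""
    c, s = np.cos(pan), np.sin(pan)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(tilt):
    """Rotation about the horizontal axis (camera tilt)."""
    c, s = np.cos(tilt), np.sin(tilt)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def head_pose_world(R_head_cam, pan, tilt):
    """Remove the PTZ camera's own pan/tilt rotation from a head pose
    estimated in camera coordinates, yielding a pose independent of the
    camera's external parameters. The pan-then-tilt composition is an
    assumed convention for this sketch."""
    R_cam = rot_y(pan) @ rot_x(tilt)  # camera-to-world rotation
    return R_cam @ R_head_cam
```

With this compensation, a head that is frontal in world coordinates yields the identity pose no matter how the camera is panned or tilted.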