• Title/Summary/Keyword: face normalization

Search Results: 76

Face inpainting via Learnable Structure Knowledge of Fusion Network

  • Yang, You;Liu, Sixun;Xing, Bin;Li, Kesen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.877-893 / 2022
  • With the development of deep learning, face inpainting has improved significantly in the past few years. Although inpainting frameworks integrated with generative adversarial networks or attention mechanisms have enhanced the semantic understanding among facial components, the reconstruction of corrupted regions still leaves issues worth exploring, such as blurred edge structure, excessive smoothness, unreasonable semantics, and visual artifacts. To address these issues, we propose a Learnable Structure Knowledge of Fusion Network (LSK-FNet), which learns prior knowledge through an edge generation network for image inpainting. The architecture involves two steps. First, structure information produced by the edge generation network is used as prior knowledge for the face inpainting network. Second, both the generated prior knowledge and the incomplete image are fed into the face inpainting network to obtain the fused information. To improve inpainting accuracy, both gated convolution and region normalization are applied in the proposed model. We evaluate LSK-FNet qualitatively and quantitatively on the CelebA-HQ dataset. The experimental results demonstrate that LSK-FNet improves the edge structure and details of facial images and surpasses the compared models on the L1, PSNR, and SSIM metrics. When the masked region is smaller than 20%, the L1 loss is reduced by more than 4.3%.
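
  A minimal sketch, in Python with PyTorch, of the gated convolution idea this abstract relies on: a feature branch modulated by a learned sigmoid gate, in the style of Yu et al.'s free-form inpainting work. The layer sizes, the ELU activation, and the 4-channel image-plus-mask input are illustrative assumptions, not details taken from the LSK-FNet paper.

    import torch
    import torch.nn as nn

    class GatedConv2d(nn.Module):
        def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
            super().__init__()
            # One branch produces features, the other a per-pixel soft gate.
            self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
            self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
            self.act = nn.ELU()

        def forward(self, x):
            # The sigmoid gate learns which locations (e.g. valid vs. masked
            # pixels) should contribute to the output features.
            return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))

    # Example: an incomplete RGB image concatenated with its binary mask (4 channels).
    x = torch.randn(1, 4, 256, 256)
    y = GatedConv2d(4, 64)(x)    # -> torch.Size([1, 64, 256, 256])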

Facial Feature Extraction using Nasal Masks from 3D Face Image (코 형상 마스크를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.1-7 / 2004
  • This paper proposes a new method for facial feature extraction that can be used to normalize face images for 3D face recognition. 3D images are far less sensitive to the illumination source than intensity images, which makes reliable identification of individuals possible. However, input face images may vary in pose through rotation, panning, and tilting; if these variations are not accounted for, incorrect features may be extracted and the recognition system will produce poor matches. It is therefore necessary to normalize the input image in size and orientation. Geometrical facial features such as the nose, eyes, and mouth are commonly used in face image normalization, and the nose is the most prominent feature in a 3D face image. Accordingly, this paper describes a nose feature extraction method using 3D nasal masks that resemble the real nasal shape.
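
  A minimal sketch of exploiting the nose's prominence in a range image, as the abstract motivates: the candidate nose tip is taken as the most protruding point inside a central window. The helper find_nose_tip, the window size, and the convention that larger depth values mean closer to the camera are assumptions for illustration; the paper's actual nasal-mask matching is not reproduced here.

    import numpy as np

    def find_nose_tip(depth, border=0.25):
        """depth: 2-D range image where larger values mean closer to the camera."""
        h, w = depth.shape
        r0, r1 = int(h * border), int(h * (1 - border))
        c0, c1 = int(w * border), int(w * (1 - border))
        window = depth[r0:r1, c0:c1]
        # Most protruding point inside the central window = candidate nose tip.
        r, c = np.unravel_index(np.argmax(window), window.shape)
        return r0 + r, c0 + c

    depth = np.random.rand(240, 320)    # stand-in for a real range image
    print(find_nose_tip(depth))         # (row, col) of the candidate nose tip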

A Study on Illumination Normalization Method based on Bilateral Filter for Illumination Invariant Face Recognition (조명 환경에 강인한 얼굴인식 성능향상을 위한 Bilateral 필터 기반 조명 정규화 방법에 관한 연구)

  • Lee, Sang-Seop;Lee, Su-Young;Kim, Joong-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.4 / pp.49-55 / 2010
  • Cast shadows caused by the illumination conditions can degrade face recognition systems that use reflectance images, so the cast shadow area needs to be separated from the feature area to improve recognition accuracy. A bilateral filter smooths an image while preserving edges through a nonlinear combination of nearby pixel values; these characteristics make it well suited to a Retinex-based illumination estimation process. In this paper, we therefore propose a new illumination normalization method for face images based on the bilateral filter. The proposed method produces a reflectance image in which the cast shadow area is preserved relatively accurately, because the filter coefficients multiply the spatial proximity and intensity discontinuity of pixels in the input image. The performance of the method is measured by the recognition accuracy of principal component analysis (PCA) and compared with other conventional illumination normalization methods.
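
  A minimal sketch of Retinex-style illumination normalization with a bilateral filter, in the spirit of this abstract: the edge-preserving smoothed image serves as the illumination estimate and the log-domain difference as the reflectance. The helper bilateral_retinex, the OpenCV filter parameters, and the final contrast stretch are illustrative assumptions, not the authors' exact formulation.

    import cv2
    import numpy as np

    def bilateral_retinex(gray, d=9, sigma_color=75, sigma_space=75):
        img = gray.astype(np.float32) + 1.0                # avoid log(0)
        # Edge-preserving estimate of the illumination component.
        illumination = cv2.bilateralFilter(img, d, sigma_color, sigma_space)
        # Retinex: reflectance = log(image) - log(illumination).
        reflectance = np.log(img) - np.log(illumination + 1.0)
        # Stretch back to an 8-bit image for the recognition stage.
        return cv2.normalize(reflectance, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)

    face = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in for a face image
    normalized = bilateral_retinex(face)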

Pose-normalized 3D Face Modeling for Face Recognition

  • Yu, Sun-Jin;Lee, Sang-Youn
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.12C / pp.984-994 / 2010
  • Pose variation is a critical problem in face recognition. Three-dimensional (3D) face recognition techniques have been proposed because 3D data contains depth information that may allow pose variation to be handled more effectively than with 2D methods. This paper proposes a pose-normalized 3D face modeling method that translates and rotates any pose angle to a frontal pose using a plane fitting method based on Singular Value Decomposition (SVD). First, we reconstruct 3D face data with a stereo vision method. Second, the nose peak point is estimated from depth information, and the pose angle is then estimated by a facial plane fitting algorithm using four facial features. Next, using the estimated pose angle, the 3D face is translated and rotated to a frontal pose. To demonstrate the effectiveness of the proposed method, we designed 2D and 3D face recognition experiments. The experimental results show that recognition with pose-normalized 3D faces outperforms recognition with un-normalized 3D faces in overcoming the problems of pose variation.
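
  A minimal sketch of the plane-fitting step this abstract outlines: a facial plane is fitted to a few 3D feature points with SVD and the point cloud is rotated to a frontal pose. The helper frontalize, the toy feature points, and the choice of +z as the frontal viewing direction are assumptions for illustration.

    import numpy as np

    def frontalize(points, features):
        """points: (N, 3) face point cloud; features: (4, 3) facial feature points."""
        centroid = features.mean(axis=0)
        # The right-singular vector of the smallest singular value is the plane normal.
        _, _, vt = np.linalg.svd(features - centroid)
        normal = vt[-1]
        target = np.array([0.0, 0.0, 1.0])        # frontal viewing direction
        if normal @ target < 0:
            normal = -normal
        # Rotation taking the plane normal onto the target axis (Rodrigues formula).
        v = np.cross(normal, target)
        s, c = np.linalg.norm(v), normal @ target
        K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        R = np.eye(3) + K + K @ K * ((1 - c) / (s ** 2 + 1e-12))
        return (points - centroid) @ R.T          # translated and rotated cloud

    features = np.array([[-30.0, 20.0, 40.0], [30.0, 20.0, 42.0],
                         [-25.0, -25.0, 38.0], [25.0, -25.0, 39.0]])  # toy eye/mouth corners
    frontal = frontalize(np.random.rand(1000, 3) * 100.0, features)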

Curvature and Histogram of oriented Gradients based 3D Face Recognition using Linear Discriminant Analysis

  • Lee, Yeunghak
    • Journal of Multimedia Information System / v.2 no.1 / pp.171-178 / 2015
  • This article describes a three-dimensional (3D) face recognition system that uses histograms of oriented gradients (HOG) based on facial curvature. The surface curvatures of the face contain the most important personal feature information. In this paper, 3D face images are recognized from facial components: cheeks, eyes, mouth, and nose. In the first step of the proposed approach, facial curvatures representing the facial features are computed from the 3D face images after normalization using singular value decomposition (SVD). The Fisherface method is then applied to each component curvature face; it is adopted because it maintains the surface attributes of the facial curvature while reducing the image dimension. The HOG descriptor is a state-of-the-art feature that has been shown to significantly outperform existing feature sets for several object detection and recognition tasks. In the last step, linear discriminant analysis is applied to each component. The experimental results show that the proposed approach achieves a higher accuracy rate than other methods.
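
  A minimal sketch, using scikit-image and scikit-learn, of the HOG-plus-discriminant-analysis stage described above. The image size, HOG parameters, and the toy training data are assumptions for illustration; the curvature computation and the Fisherface step are not reproduced.

    import numpy as np
    from skimage.feature import hog
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def hog_descriptor(img):
        # img: 2-D array, e.g. a curvature map of one facial component.
        return hog(img, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm='L2-Hys')

    rng = np.random.default_rng(0)
    X = np.stack([hog_descriptor(rng.random((64, 64))) for _ in range(20)])
    y = np.repeat(np.arange(4), 5)                # 4 subjects, 5 samples each

    lda = LinearDiscriminantAnalysis().fit(X, y)  # discriminant subspace per component
    print(lda.predict(X[:1]))                     # identity predicted for one sample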

Real-time Face Detection and Verification Method using PCA and LDA (PCA와 LDA를 이용한 실시간 얼굴 검출 및 검증 기법)

  • 홍은혜;고병철;변혜란
    • Journal of KIISE:Software and Applications / v.31 no.2 / pp.213-223 / 2004
  • In this paper, we propose a new face detection method for real-time applications, based on template matching and an appearance-based method. First, we apply min-max normalization with histogram equalization to the input image according to its intensity variation. By applying the PCA transform to both the input image and the template, principal components are obtained and then passed to the LDA transform. We then estimate the distances between the input image and the template and select the region with the smallest distance. An SVM makes the final decision on whether the candidate region is a real face. Since we search for a face not over the full image but within a ±12 search window, the method achieves good speed and detection rates. In experiments with six categories of input video, our algorithm outperforms existing methods that use only the PCA transform or the PCA and LDA transforms.
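
  A minimal sketch of a PCA-to-LDA-to-SVM chain of the kind this abstract describes, built with scikit-learn on toy data. The patch size, number of principal components, and labels are assumptions, and the template-matching and search-window stages are not reproduced.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((100, 24 * 24))          # flattened candidate face patches (toy)
    y = rng.integers(0, 2, size=100)        # 1 = face, 0 = non-face (toy labels)

    # PCA compresses the raw pixels, LDA finds a discriminative projection,
    # and the SVM makes the final face / non-face decision.
    clf = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis(), SVC())
    clf.fit(X, y)
    print(clf.predict(X[:3]))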

Illumination Normalization Method for Robust Eye Detection in Lighting Changing Environment (조명변화에 강인한 눈 검출을 위한 조명 정규화 방법)

  • Xu, Chengzhe;Islam, Ihtesham Ul;Kim, In-Taek
    • Proceedings of the IEEK Conference / 2008.06a / pp.955-956 / 2008
  • This paper presents a new method for illumination normalization in eye detection. Based on the Retinex image formation model, we employ the discrete wavelet transform to remove the lighting effect from face image data. The final results show that the proposed method yields better eye detection performance than previous work.
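
  A minimal sketch of suppressing low-frequency illumination with a discrete wavelet transform in the log (Retinex) domain, which is the general idea this abstract names. The helper dwt_illumination_normalize, the Haar wavelet, the single decomposition level, and the simple attenuation of the approximation band are illustrative assumptions rather than the authors' exact scheme.

    import numpy as np
    import pywt

    def dwt_illumination_normalize(gray, alpha=0.2):
        img = np.log1p(gray.astype(np.float64))          # Retinex-style log domain
        cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')        # one-level 2-D DWT
        # The approximation band cA carries the slowly varying illumination;
        # attenuating it keeps the detail bands that describe facial structure.
        out = pywt.idwt2((alpha * cA, (cH, cV, cD)), 'haar')
        out -= out.min()
        return (255 * out / (out.max() + 1e-12)).astype(np.uint8)

    face = (np.random.rand(120, 120) * 255).astype(np.uint8)  # stand-in image
    print(dwt_illumination_normalize(face).shape)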

Development of a Recognition System of Smile Facial Expression for Smile Treatment Training (웃음 치료 훈련을 위한 웃음 표정 인식 시스템 개발)

  • Li, Yu-Jie;Kang, Sun-Kyung;Kim, Young-Un;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.15 no.4 / pp.47-55 / 2010
  • In this paper, we propose a recognition system for smile facial expressions for smile treatment training. The proposed system detects face candidate regions in camera images using Haar-like features, then verifies whether each detected candidate is a face or non-face using SVM (Support Vector Machine) classification. For the detected face image, it applies illumination normalization based on histogram matching to minimize the effect of illumination changes. In the facial expression recognition step, it computes a facial feature vector using PCA (Principal Component Analysis) and recognizes the smile expression with a multilayer perceptron neural network. The system lets the user practice smiling by recognizing the user's smile expression in real time and displaying its intensity. Experimental results show that the proposed system improves the correct recognition rate through SVM-based face region verification and histogram matching-based illumination normalization.
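
  A minimal sketch of the histogram matching-based illumination normalization step mentioned above, using scikit-image. The helper normalize_illumination and the toy reference image standing in for canonical lighting are assumptions; the Haar-feature detection, SVM verification, PCA, and perceptron stages are not reproduced.

    import numpy as np
    from skimage.exposure import match_histograms

    def normalize_illumination(face, reference):
        """Map the gray-level distribution of `face` onto that of `reference`,
        reducing the influence of the capture-time lighting."""
        return match_histograms(face, reference).astype(face.dtype)

    rng = np.random.default_rng(0)
    reference = (rng.random((64, 64)) * 255).astype(np.uint8)  # canonical lighting
    face = (rng.random((64, 64)) * 128).astype(np.uint8)       # dimly lit capture
    print(normalize_illumination(face, reference).mean())      # shifted toward reference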

Pose-Normalized 3D Face Modeling (포즈 정규화된 3D 얼굴 모델링 기법)

  • Yu, Sun-Jin;Kim, Sang-Ki;Kim, Il-Do;Lee, Sang-Youn
    • Proceedings of the IEEK Conference / 2006.06a / pp.455-456 / 2006
  • This paper presents an automatic pose-normalized 3D face data acquisition method that uses both 2D and 3D information and accomplishes 3D face modeling and pose normalization at once. The proposed method uses 2D information through an AAM (Active Appearance Model) and 3D information through the 3D normal vector. The 3D face modeling system consists of two cameras and one projector. To verify the proposed pose-normalized 3D modeling method, we conducted a 2.5D face recognition experiment. The experimental results show that the proposed method is robust against pose variation.
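
  A minimal sketch of one way to use the 3D normal vector this abstract mentions: reading approximate yaw and pitch angles off an estimated facial-plane normal. The helper pose_angles_from_normal and the axis convention (camera looking along +z) are assumptions for illustration.

    import numpy as np

    def pose_angles_from_normal(normal):
        n = np.asarray(normal, dtype=float)
        n /= np.linalg.norm(n)
        yaw = np.degrees(np.arctan2(n[0], n[2]))    # left/right rotation
        pitch = np.degrees(np.arctan2(n[1], n[2]))  # up/down rotation
        return yaw, pitch

    print(pose_angles_from_normal([0.17, 0.0, 0.98]))  # roughly a 10-degree yaw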

Implementation of Driver Fatigue Monitoring System (운전자 졸음 인식 시스템 구현)

  • Choi, Jin-Mo;Song, Hyok;Park, Sang-Hyun;Lee, Chul-Dong
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.8C / pp.711-720 / 2012
  • In this paper, we describe the implementation of a driver fatigue monitoring system and its results. A commercially available webcam is used as the input video device. Haar features are used for face detection, and illumination normalization is adopted to handle arbitrary lighting conditions; after normalization, the facial image is easily extracted using Haar face features. The eye candidate area in the normalized image is reduced using anthropometric measurements, and eye detection is performed with a PCA and circle-mask mixture model, which achieves robust eye detection under arbitrarily changing illumination. The drowsiness state is determined from the illumination-normalized eye images by a simple calculation of their intensity level. When the driver's drowsiness is detected, the system raises an alarm and vibrates the seatbelt through the controller area network (CAN). The algorithm is implemented with low computational complexity and a high recognition rate, achieving a 97% correct detection rate in in-car experiments.
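
  A minimal sketch of the kind of detection loop such a system might run: OpenCV's bundled Haar cascades for the face and eyes, with a simple closed-eye frame counter standing in for the paper's drowsiness-level calculation. The cascade files, thresholds, and webcam index are assumptions, and the PCA/circle-mask eye detector and CAN output are not reproduced.

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)                      # commercially available webcam
    closed_frames, ALARM_AFTER = 0, 15             # ~0.5 s at 30 fps (assumption)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            roi = gray[y:y + h // 2, x:x + w]      # eyes lie in the upper face half
            eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
            closed_frames = 0 if len(eyes) >= 1 else closed_frames + 1
            if closed_frames > ALARM_AFTER:
                print("drowsiness alarm")          # real system: CAN message instead
        if cv2.waitKey(1) == 27:                   # Esc to quit
            break
    cap.release()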