• Title/Summary/Keyword: Training Face Image

A Study On Three-dimensional Optimized Face Recognition Model : Comparative Studies and Analysis of Model Architectures (3차원 얼굴인식 모델에 관한 연구: 모델 구조 비교연구 및 해석)

  • Park, Chan-Jun;Oh, Sung-Kwun;Kim, Jin-Yul
    • The Transactions of The Korean Institute of Electrical Engineers / v.64 no.6 / pp.900-911 / 2015
  • In this paper, a 3D face recognition model is designed using a polynomial-based RBFNN (Radial Basis Function Neural Network) and a PNN (Polynomial Neural Network), and its recognition rate is evaluated. In existing 2D face recognition models, the recognition rate can degrade under external conditions such as changes in facial appearance caused by image brightness. To overcome this drawback of 2D recognition, 3D face recognition is performed using a 3D scanner. In the preprocessing stage, 3D face images acquired under various poses are transformed into frontal images by pose compensation. The depth data of the face shape is extracted using multiple point signatures, and the depth information of the whole face is obtained with the tip of the nose as the reference point. Parameter optimization is carried out with the aid of both ABC (Artificial Bee Colony) and PSO (Particle Swarm Optimization) for effective training and recognition. The experimental data consist of face images of students and researchers at the IC&CI Lab of Suwon University. Using these 3D face images, the performance of 3D face recognition is evaluated and compared across the two model types as well as the point signature method based on two kinds of depth data.
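The combination of depth-feature classification and swarm-based parameter tuning described above can be illustrated with a minimal Python sketch. A plain global-best PSO loop tunes an RBF-kernel SVC from scikit-learn, which stands in for the paper's polynomial-based RBFNN; the random feature vectors and labels are placeholders for the point-signature depth data, not the authors' dataset.

```python
# Minimal PSO sketch for tuning an RBF classifier's hyperparameters.
# The RBF-kernel SVC stands in for the paper's polynomial-based RBFNN,
# and random vectors stand in for the point-signature depth features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))          # placeholder depth-feature vectors
y = rng.integers(0, 4, size=120)        # placeholder subject labels

def fitness(params):
    """Negative CV accuracy for a given (log10 C, log10 gamma) pair."""
    C, gamma = 10.0 ** params[0], 10.0 ** params[1]
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return -cross_val_score(clf, X, y, cv=3).mean()

# --- plain global-best PSO over the 2-D hyperparameter space ---
n_particles, n_iter = 10, 20
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 0.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", -pbest_val.min())
```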

A 3D Face Reconstruction and Tracking Method using the Estimated Depth Information (얼굴 깊이 추정을 이용한 3차원 얼굴 생성 및 추적 방법)

  • Ju, Myung-Ho;Kang, Hang-Bong
    • The KIPS Transactions: Part B / v.18B no.1 / pp.21-28 / 2011
  • A 3D face shape derived from 2D images is useful in many applications, such as face recognition, face synthesis, and human-computer interaction. To this end, we develop a fast 3D Active Appearance Model (3D-AAM) method using depth estimation. The training images include specific 3D face poses that differ greatly from one another. The depth information of the landmarks is estimated from the training image sequence using an approximated Jacobian matrix and is added at the test phase to handle the 3D pose variations of the input face. Our experimental results show that the proposed method fits the face shape, including variations in facial expression and 3D pose, more efficiently than the typical AAM, and estimates an accurate 3D face shape from images.
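The "approximated Jacobian" fitting idea can be sketched with a small Gauss-Newton loop that recovers a weak-perspective pose (yaw, scale, translation) from 2D landmark observations using a finite-difference Jacobian. This is only an illustration of Jacobian-based fitting on assumed synthetic landmarks, not the authors' 3D-AAM.

```python
# Hedged sketch: Gauss-Newton fit of a weak-perspective pose (yaw, scale, tx, ty)
# to 2D landmarks, using a finite-difference (approximated) Jacobian.
# This only illustrates the Jacobian-based fitting idea, not the full 3D-AAM.
import numpy as np

rng = np.random.default_rng(1)
X3d = rng.normal(size=(10, 3))                 # placeholder 3D landmarks

def project(params, X):
    yaw, s, tx, ty = params
    R = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                  [0, 1, 0]])                  # weak-perspective projection rows
    return (s * X @ R.T + np.array([tx, ty])).ravel()

true_params = np.array([0.3, 1.2, 5.0, -2.0])
obs = project(true_params, X3d) + rng.normal(scale=0.01, size=20)

params = np.array([0.0, 1.0, 0.0, 0.0])        # initial guess
for _ in range(20):
    r = obs - project(params, X3d)             # residual
    J = np.zeros((r.size, params.size))
    for j in range(params.size):               # finite-difference Jacobian
        d = np.zeros(params.size); d[j] = 1e-6
        J[:, j] = (project(params + d, X3d) - project(params - d, X3d)) / 2e-6
    params = params + np.linalg.lstsq(J, r, rcond=None)[0]

print("recovered (yaw, scale, tx, ty):", np.round(params, 3))
```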

A Fast Method for Face Detection based on PCA and SVM

  • Xia, Chun-Lei;Shin, Hyeon-Gab;Ha, Seok-Wun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.06a / pp.153-156 / 2007
  • In this paper, we propose a fast face detection approach using PCA and SVM. In our detection system, we first filter potential face areas using a statistical feature generated by analyzing the local histogram distribution, and then apply an SVM classifier to decide whether faces are present in the test image. The Support Vector Machine (SVM) performs well in classification tasks, and PCA is used to reduce the dimensionality of the sample data. After the PCA transform, the feature vectors used for training the SVM classifier are generated. Our tests are based on the CMU face database.
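A hedged sketch of the PCA-then-SVM stage follows: PCA compresses candidate windows and an SVM labels them face or non-face. Random patches and labels stand in for the CMU-database windows, and the 19x19 patch size and hyperparameters are assumptions.

```python
# Hedged sketch of the PCA -> SVM stage: reduce patch dimensionality with PCA,
# then train an SVM to label patches as face / non-face.  Random patches stand
# in for the face-database windows used in the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
patches = rng.normal(size=(400, 19 * 19))      # placeholder 19x19 gray patches
labels = rng.integers(0, 2, size=400)          # 1 = face, 0 = non-face

X_tr, X_te, y_tr, y_te = train_test_split(patches, labels, random_state=0)
detector = make_pipeline(PCA(n_components=30), SVC(kernel="rbf", C=10.0))
detector.fit(X_tr, y_tr)
print("held-out accuracy:", detector.score(X_te, y_te))
```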

Detection and Recovery of Occluded Face Images Based on Correlation (상관관계에 기반한 가려진 얼굴 영상 검출 및 복원)

  • Lee, Ji-Eun;Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.5 / pp.72-83 / 2011
  • In this paper, we propose a method to detect and recover occluded parts of face images using the correlation between pairs of pixels. In the training stage, correlation coefficients between every pair of pixels are calculated from occlusion-free face images. When a new occluded face image is presented, the occluded area is detected and recovered using the correlation coefficients obtained in the training stage. We compare the performance of the proposed method with a conventional PCA-based method. The results show that the proposed method detects and recovers the occluded area with much less noise than the PCA-based method, and the recovered images are smoother, with a reduced blurring effect.
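The correlation-based idea can be sketched as follows: learn pixel-pair correlation coefficients from occlusion-free training data, then estimate each occluded pixel from its most correlated unoccluded pixels. The random data, the five-donor rule, and the averaging step are assumptions for illustration, not the paper's exact detection and recovery procedure.

```python
# Hedged sketch of correlation-based recovery: learn pixel-pair correlations
# from occlusion-free training images, then predict each suspect pixel from
# its most correlated, non-occluded pixels.  Random data replaces faces.
import numpy as np

rng = np.random.default_rng(3)
train = rng.normal(size=(200, 64))             # 200 occlusion-free "images", 64 pixels
mean, std = train.mean(0), train.std(0)
corr = np.corrcoef(train, rowvar=False)        # 64 x 64 pixel-pair correlations

test = train[0].copy()
occluded = np.zeros(64, dtype=bool)
occluded[20:30] = True                         # simulate a masked region
test[occluded] = 0.0

recovered = test.copy()
for i in np.where(occluded)[0]:
    # pick the 5 most correlated pixels that are not occluded themselves
    order = np.argsort(-np.abs(corr[i]))
    donors = [j for j in order if not occluded[j] and j != i][:5]
    # regress pixel i on each donor (standardized), then average the estimates
    est = [mean[i] + corr[i, j] * std[i] * (test[j] - mean[j]) / std[j] for j in donors]
    recovered[i] = np.mean(est)

print("mean abs error on occluded pixels:",
      np.abs(recovered[occluded] - train[0][occluded]).mean())
```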

An Automatic Smile Analysis System for Smile Self-training (자가 미소 훈련을 위한 자동 미소 분석 시스템)

  • Song, Won-Chang;Kang, Sun-Kyung;Jung, Tae-Sung
    • Journal of Korea Multimedia Society / v.14 no.11 / pp.1373-1382 / 2011
  • In this study, we propose an automated smile analysis system for smile self-training. The proposed system detects the face area in the input image with the AdaBoost algorithm and then identifies facial features based on a face shape model generated with an ASM (active shape model). Once the facial features are identified, the lip line and teeth area needed for smile analysis are detected. To analyze the smiling degree, the relationship between the lip line and the teeth must be judged; to this end, the second derivative of the teeth image is computed and individual teeth areas are identified by histogram projection along the vertical and horizontal axes. Analyzing the lip line and the individual teeth areas allows the users' smiling degree to be analyzed automatically, so users can check their smiling degree in real time. The developed system exhibited an error of 8.6% or below compared to previous smile analysis results released by dental clinics for smile training, and it is expected to be used directly by users for smile training.
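Two steps of this pipeline, AdaBoost-style face detection and histogram projection over a thresholded teeth region, can be sketched with OpenCV as below. The file name smile.jpg, the lower-third mouth crop, and the 0.3 valley threshold are placeholders, and the ASM-based lip-line extraction is omitted.

```python
# Hedged sketch of two steps: Haar-cascade (AdaBoost-style) face detection with
# OpenCV, then vertical/horizontal histogram projection on a thresholded mouth
# region to separate individual teeth.  "smile.jpg" is a placeholder path.
import cv2
import numpy as np

img = cv2.imread("smile.jpg", cv2.IMREAD_GRAYSCALE)
assert img is not None, "replace smile.jpg with a real photo"
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    x, y, w, h = faces[0]
    # crude mouth region: lower third of the detected face box
    mouth = img[y + 2 * h // 3 : y + h, x : x + w]
    _, teeth = cv2.threshold(mouth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    col_proj = teeth.sum(axis=0)          # vertical-axis projection
    row_proj = teeth.sum(axis=1)          # horizontal-axis projection
    # valleys in the column projection approximate boundaries between teeth
    valleys = np.where(col_proj < 0.3 * col_proj.max())[0]
    print("candidate tooth boundaries (columns):", valleys)
```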

A Secure Face Cryptography for Identity Document Based on Distance Measures

  • Arshad, Nasim;Moon, Kwang-Seok;Kim, Jong-Nam
    • Journal of Korea Multimedia Society / v.16 no.10 / pp.1156-1162 / 2013
  • Face verification has been widely studied during the past two decades; one of the remaining challenges is the rising concern about the security and privacy of the template database. In this paper, we propose a secure face verification system that generates a unique, secure cryptographic key from a face template. The face images are processed to produce face templates, or codes, to be used for the encryption and decryption tasks, and the resulting identity data is encrypted with the Advanced Encryption Standard (AES). Two distance metrics, the Hamming distance and the Euclidean distance, are used for the template-matching identification process, where template matching is a standard pattern-recognition procedure. The proposed system is tested on the ORL, YALE, and PKNU face databases, which contain 360, 135, and 54 training images, respectively. We employ Principal Component Analysis (PCA) to determine the most discriminating features among the face images. The experimental results show that the proposed distance measure is among the most promising with respect to the different characteristics of the biometric systems; using the proposed method, fewer images needed to be extracted to achieve 100% cumulative recognition than with any other tested distance measure.
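A rough sketch of the scheme, under several assumptions, is given below: a PCA-style template is quantized to bits, hashed into an AES key with pycryptodome, and templates are compared with the Euclidean and Hamming distances. The random template, the sign-based quantization, and the SHA-256 key derivation are illustrative choices, not the paper's construction.

```python
# Hedged sketch of the idea: derive a key from a face template, encrypt the
# identity record with AES, and compare templates with Euclidean / Hamming
# distance.  Uses pycryptodome for AES; random vectors stand in for PCA codes.
import hashlib
import numpy as np
from Crypto.Cipher import AES   # pip install pycryptodome

rng = np.random.default_rng(4)
template = rng.normal(size=32)                 # placeholder PCA face code
probe = template + rng.normal(scale=0.05, size=32)

# quantize the template to bits and hash the bits into a 128-bit AES key
bits = (template > 0).astype(np.uint8)
key = hashlib.sha256(bits.tobytes()).digest()[:16]

cipher = AES.new(key, AES.MODE_EAX)
ciphertext, tag = cipher.encrypt_and_digest(b"identity document payload")

# template matching with the two distances used in the paper
euclidean = float(np.linalg.norm(template - probe))
hamming = int(np.count_nonzero((template > 0) != (probe > 0)))
print("Euclidean:", round(euclidean, 3), "Hamming:", hamming)

# decryption succeeds only if the probe reproduces the same key bits
probe_key = hashlib.sha256((probe > 0).astype(np.uint8).tobytes()).digest()[:16]
plain = AES.new(probe_key, AES.MODE_EAX, nonce=cipher.nonce).decrypt(ciphertext)
print("recovered payload:", plain)
```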

Face image classification by SVM

  • Park, Hye-Jeong;Sim, Ju-Yong;Kim, Mun-Tae;O, Gwang-Sik;Kim, Dae-Hak
    • Proceedings of the Korean Statistical Society Conference / 2003.10a / pp.155-159 / 2003
  • Recently, SVMs (support vector machines) have found many applications in machine learning, with much of the research focused on classification and regression analysis. In this paper, we classify input image data using an SVM. When an RGB color image is given, the image itself is taken as the input pattern regardless of its size, and the results of SVM training (the weights and bias estimates) are used to decide whether the input image shows a person. The validity of the proposed method was analyzed by applying it to 152 images.
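A minimal scikit-learn sketch of this setup is shown below: each RGB image is resized to a fixed grid, flattened into one input pattern, and an SVM is trained to decide whether the image shows a person. The random stand-in images, the 32x32 grid, and the nearest-neighbour resize are assumptions.

```python
# Hedged sketch of the setup described above: resize each RGB image to a fixed
# grid, flatten it into one input pattern, and train an SVM to decide whether
# the image shows a person.  Random arrays stand in for the 152 images.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
images = [rng.integers(0, 256, size=(rng.integers(60, 120), rng.integers(60, 120), 3))
          for _ in range(152)]                  # placeholder RGB images of varied size
labels = rng.integers(0, 2, size=152)           # 1 = person, 0 = not a person

def to_pattern(img, size=(32, 32)):
    """Nearest-neighbour resize + flatten, so any image size maps to one vector."""
    rows = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
    return img[np.ix_(rows, cols)].astype(np.float64).ravel() / 255.0

X = np.stack([to_pattern(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```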

Animal Face Classification using Dual Deep Convolutional Neural Network

  • Khan, Rafiul Hasan;Kang, Kyung-Won;Lim, Seon-Ja;Youn, Sung-Dae;Kwon, Oh-Jun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.23 no.4 / pp.525-538 / 2020
  • A practical animal face classification system that classifies animals in image and video data is considered a pivotal topic in machine learning. In this research, we propose a novel fully connected dual Deep Convolutional Neural Network (DCNN) that extracts and analyzes image features on a large scale. With the inclusion of state-of-the-art Batch Normalization and Exponential Linear Unit (ELU) layers, the proposed DCNN can analyze large datasets and extract more features than before. For this research, we built a dataset containing ten thousand animal faces from ten animal classes and a dual DCNN whose distinguishing feature is four sets of convolutional functions that work laterally with each other. We used a relatively small batch size and a large number of iterations to mitigate overfitting during training, and applied image augmentation to vary the shapes of the training images for better learning. The results demonstrate that, with an accuracy of 92.0%, the proposed DCNN outperforms its counterparts at a lower computational cost.
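A hedged PyTorch sketch of a dual-branch CNN with Batch Normalization and ELU layers is given below: two convolutional streams process the same image laterally and their features are fused for a 10-way prediction. The layer widths and depths are illustrative and do not reproduce the paper's four-branch architecture or its dataset.

```python
# Hedged PyTorch sketch of a dual-branch CNN with BatchNorm and ELU: two
# convolutional streams process the same image laterally and are fused for a
# 10-way animal-face prediction.  Layer sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm2d(cout),
        nn.ELU(),
        nn.MaxPool2d(2),
    )

class DualDCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # two lateral branches over the same input
        self.branch_a = nn.Sequential(conv_block(3, 16), conv_block(16, 32))
        self.branch_b = nn.Sequential(conv_block(3, 16), conv_block(16, 32))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        fused = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.head(fused)

model = DualDCNN()
logits = model(torch.randn(4, 3, 64, 64))       # a dummy mini-batch
print(logits.shape)                             # torch.Size([4, 10])
```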

Depth Image Restoration Using Generative Adversarial Network (Generative Adversarial Network를 이용한 손실된 깊이 영상 복원)

  • Nah, John Junyeop;Sim, Chang Hun;Park, In Kyu
    • Journal of Broadcast Engineering / v.23 no.5 / pp.614-621 / 2018
  • This paper proposes a method for restoring corrupted depth images captured by a depth camera through unsupervised learning with a generative adversarial network (GAN). The proposed method generates restored face depth images using a 3D morphable model convolutional neural network (3DMM CNN), with the large-scale CelebFaces Attributes (CelebA) and FaceWarehouse datasets used to train a deep convolutional generative adversarial network (DCGAN). The generator and discriminator are trained in a minimax game with the Wasserstein distance as the loss function. The DCGAN then restores the missing parts of captured facial depth images by performing an additional learning procedure with the trained generator and a new loss function.
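The Wasserstein minimax objective mentioned above can be sketched with one critic update and one generator update in PyTorch, using toy vectors in place of depth maps. The tiny fully connected networks, RMSprop optimizer, and weight clipping follow the generic WGAN recipe and are assumptions; the 3DMM-CNN front end and the CelebA/FaceWarehouse training data are omitted.

```python
# Hedged sketch of the Wasserstein (minimax) objective used to train the GAN:
# one critic update and one generator update on toy depth maps.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 32 * 32), nn.Tanh())
D = nn.Sequential(nn.Linear(32 * 32, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

real = torch.rand(16, 32 * 32)                  # placeholder clean depth maps
z = torch.randn(16, 64)

# critic step: maximize D(real) - D(fake)  (written as minimizing the negative)
d_loss = D(G(z).detach()).mean() - D(real).mean()
opt_d.zero_grad(); d_loss.backward(); opt_d.step()
for p in D.parameters():                        # weight clipping, as in WGAN
    p.data.clamp_(-0.01, 0.01)

# generator step: minimize -D(fake)
g_loss = -D(G(z)).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print("critic loss:", float(d_loss), "generator loss:", float(g_loss))
```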

Design of Three-dimensional Face Recognition System Using Optimized PRBFNNs and PCA : Comparative Analysis of Evolutionary Algorithms (최적화된 PRBFNNs 패턴분류기와 PCA알고리즘을 이용한 3차원 얼굴인식 알고리즘 설계 : 진화 알고리즘의 비교 해석)

  • Oh, Sung-Kwun;Oh, Seung-Hun;Kim, Hyun-Ki
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.6 / pp.539-544 / 2013
  • In this paper, we design a three-dimensional face recognition algorithm using polynomial-based RBFNNs and propose a method for evaluating its recognition performance. In two-dimensional face recognition, the recognition performance is reduced by external factors such as facial pose and lighting. To compensate for these shortcomings, we perform face recognition on three-dimensional images: a face image is acquired with a 3D scanner before recognition, and the frontal facial form is obtained through pose compensation. The depth values of the face are then extracted using the Point Signature method. Because the extracted data are high-dimensional and may cause problems during training and recognition, their dimensionality is reduced with the PCA algorithm. Parameter optimization is carried out with an optimization algorithm for effective training, and the recognition performance is verified using the PSO, DE, and GA algorithms.
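The PCA-plus-evolutionary-optimization recipe can be illustrated with SciPy's differential_evolution (DE being one of the three algorithms compared). An RBF-kernel SVC stands in for the polynomial-based RBFNN, and the random high-dimensional vectors are placeholders for the Point Signature depth data; the bounds and population settings are assumptions.

```python
# Hedged sketch of the surrounding recipe: PCA-compress the high-dimensional
# depth features, then tune the classifier's hyperparameters with Differential
# Evolution.  An RBF SVC stands in for the polynomial-based RBFNN.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
X = rng.normal(size=(150, 300))                 # placeholder high-dimensional depth data
y = rng.integers(0, 5, size=150)                # placeholder subject labels

def objective(params):
    """Negative CV accuracy for (n_components, log10 C, log10 gamma)."""
    n_comp, log_c, log_g = int(params[0]), params[1], params[2]
    model = make_pipeline(PCA(n_components=n_comp),
                          SVC(C=10.0 ** log_c, gamma=10.0 ** log_g))
    return -cross_val_score(model, X, y, cv=3).mean()

result = differential_evolution(objective, bounds=[(5, 50), (-2, 3), (-4, 0)],
                                maxiter=5, popsize=6, seed=0)
print("best parameters:", result.x, "CV accuracy:", -result.fun)
```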