• Title/Summary/Keyword: Training Face Image


Long Distance Face Recognition System using the Automatic Face Image Creation by Distance (거리별 얼굴영상 자동 생성 방법을 이용한 원거리 얼굴인식 시스템)

  • Moon, Hae Min;Pan, Sung Bum
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.11, pp.137-145, 2014
  • This paper proposes an LDA-based long-distance face recognition algorithm for intelligent surveillance systems. The existing algorithm, which uses single-distance face images as training images, suffers from a recognition rate that decreases as distance increases. An algorithm trained on face images captured at each actual distance performs well, but it is inconvenient for users: initial registration requires each user to move from one to five meters in person so that face images can be acquired. The proposed method instead automatically generates face images at various distances from a single-distance face image and uses them as training images. Experiments showed that the proposed technique outperformed training on a single-distance face image by an average of 16.3% at short distance and 18.0% at long distance. Compared with training on face images captured at each actual distance, performance fell by 4.3% on average at short distance and remained the same at long distance.
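
The abstract does not detail how the distance-specific images are generated, so the following is only a hypothetical sketch: it simulates greater capture distance by block-averaging a single close-range image (resolution loss) and resizing it back, producing a multi-distance training set from one registration photo.

```python
import numpy as np

def simulate_distance(face, factor):
    """Mimic capture at `factor` times the original distance by
    block-averaging (resolution loss) and resizing back up.
    Assumes both image dimensions are divisible by `factor`."""
    h, w = face.shape
    small = face.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    # nearest-neighbour upsampling so every training image shares one size
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

# build a multi-distance training set from a single close-range image
face_1m = np.random.default_rng(0).random((60, 60))
training_set = {d: simulate_distance(face_1m, d) for d in (1, 2, 3, 4, 5)}
```

The resulting images can then be fed to whatever recognizer is in use (LDA in the paper); the real method may model blur and noise more faithfully than simple block averaging.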

The Long Distance Face Recognition using Multiple Distance Face Images Acquired from a Zoom Camera (줌 카메라를 통해 획득된 거리별 얼굴 영상을 이용한 원거리 얼굴 인식 기술)

  • Moon, Hae-Min;Pan, Sung Bum
    • Journal of the Korea Institute of Information Security & Cryptology, v.24 no.6, pp.1139-1145, 2014
  • User recognition technology, which identifies or verifies an individual, is essential for intelligent services in robotic environments. The conventional face recognition algorithm, which uses single-distance face images as training images, has the problem that the recognition rate decreases as distance increases. An algorithm trained on face images captured at each actual distance performs well but requires user cooperation. This paper proposes an LDA-based long-distance face recognition method that uses multiple-distance face images acquired from a zoom camera as training images. The proposed technique outperformed training on a single-distance face image by an average of 7.8%. Compared with training on face images captured at each actual distance, performance fell by an average of 8.0%; however, the proposed method takes less time and demands less cooperation from users when capturing face images.
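
The LDA training stage common to the two papers above can be sketched with plain numpy. This is a generic Fisher LDA, not the authors' exact pipeline; the toy "subjects" and feature sizes are invented for illustration.

```python
import numpy as np

def lda_fit(X, y, n_components):
    """Fisher LDA: X is (n_samples, n_features), y holds integer labels.
    Returns a projection maximizing between- over within-class scatter."""
    overall_mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
    Sb = np.zeros_like(Sw)                    # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - overall_mean, mc - overall_mean)
    # small ridge keeps Sw invertible when samples are scarce
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(len(Sw)), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]

# toy example: 3 "subjects", each with 6 feature vectors (e.g. flattened
# face crops taken at different zoom levels)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=i, size=(6, 8)) for i in range(3)])
y = np.repeat(np.arange(3), 6)
W = lda_fit(X, y, n_components=2)
projected = X @ W
```

Recognition then reduces to nearest-class-mean matching in the projected space; the multi-distance images simply enlarge each subject's sample set before this fit.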

Multi-Face Detection on static image using Principle Component Analysis

  • Choi, Hyun-Chul;Oh, Se-Young
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2004.08a, pp.185-189, 2004
  • A face recognition system needs a face detector that can find the exact face region in a complex image. Many face detection algorithms have been developed under the assumption that the background of the source image is quite simple: the face region occupies more than a quarter of the image area, or the background is a single color. Color-based face detection is fast but cannot be applied to images whose background color is similar to face color, and algorithms using neural networks need a large amount of non-face training data and do not guarantee general performance. In this paper, a multi-scale, multi-face detection algorithm using PCA is suggested. The algorithm can find most multi-scale faces contained in static images in reasonable time with a small number of training samples.
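
A standard way to use PCA for detection (the classic "distance from face space" idea, which may or may not match the paper's exact formulation) scores a candidate window by how well the top principal components reconstruct it. A minimal sketch with synthetic data:

```python
import numpy as np

def fit_face_space(faces, k):
    """PCA 'face space': mean and top-k principal axes of flattened faces."""
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]

def face_likeness(patch, mean, components):
    """Distance from face space: reconstruction error of a candidate
    window. A small error suggests the window contains a face."""
    centered = patch - mean
    recon = components.T @ (components @ centered)
    return np.linalg.norm(centered - recon)

rng = np.random.default_rng(0)
faces = rng.random((20, 64)) + np.linspace(0, 1, 64)  # toy faces sharing structure
mean, comps = fit_face_space(faces, k=5)
err_face = face_likeness(faces[0], mean, comps)
err_clutter = face_likeness(rng.random(64) * 5.0, mean, comps)
```

A multi-scale detector would slide this score over windows at several image scales and keep windows whose error falls below a threshold.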


Robust Face Recognition under Limited Training Sample Scenario using Linear Representation

  • Iqbal, Omer;Jadoon, Waqas;ur Rehman, Zia;Khan, Fiaz Gul;Nazir, Babar;Khan, Iftikhar Ahmed
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.7, pp.3172-3193, 2018
  • Recently, several studies have shown that linear-representation-based approaches are very effective and efficient for image classification. One such approach is the collaborative representation (CR) method. Existing CR-based algorithms have two major problems that degrade their classification performance. The first arises from the limited number of available training samples: large variations between query and training samples, caused by illumination and expression changes, lead to poor classification performance. The second occurs when an image is partially noised (contiguous occlusion); as part of the given image becomes corrupted, classification performance also degrades. We aim to extend the collaborative representation framework to the limited-training-samples face recognition problem. The proposed solution generates virtual samples and intra-class variations from the training data to effectively model the variations between query and training samples. For robust classification, representations are computed over image patches to address partial occlusion, which leads to more accurate results: the proposed method computes representations from local regions of the images, as opposed to CR, which computes a single global representation from the entire image. Furthermore, the proposed solution integrates locality structure into CR using the Euclidean distance between the query and training samples; intuitively, if the query sample can be represented by its nearest neighbours lying on the same linear subspace, the resulting representation is more discriminative and classifies the query sample more accurately. Hence the proposed framework turns the limited-sample face recognition problem into a sufficient-training-samples problem, using virtual samples and intra-class variations generated from the training samples, which improves classification accuracy, as the experimental results show. Moreover, it computes representations from local image patches for robust classification and is expected to greatly increase face recognition performance.

Robust Minimum Squared Error Classification Algorithm with Applications to Face Recognition

  • Liu, Zhonghua;Yang, Chunlei;Pu, Jiexin;Liu, Gang;Liu, Sen
    • KSII Transactions on Internet and Information Systems (TIIS), v.10 no.1, pp.308-320, 2016
  • Although a face almost always has an axisymmetric structure, a face image is generally not symmetric. However, the mirror image of a face image can reflect variations of pose and illumination opposite to those of the original. A robust minimum squared error classification (RMSEC) algorithm is proposed in this paper. Concretely, the original training samples and their mirror images are combined into a new training set, and this generated set is used to run the modified minimum squared error classification (MMSEC) algorithm. Extensive experiments show that the accuracy of the proposed RMSEC is greatly increased and that RMSEC is not sensitive to variations of the parameters.
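
The core trick, doubling the training set with horizontal mirrors before fitting a minimum-squared-error classifier, is easy to sketch. The regression-onto-one-hot-labels classifier below is a generic MSE stand-in, not necessarily the paper's modified variant:

```python
import numpy as np

def augment_with_mirrors(faces):
    """faces: (n, h, w). Append the horizontal mirror of every training
    face, doubling the set, as in the RMSEC training stage."""
    return np.concatenate([faces, faces[:, :, ::-1]], axis=0)

rng = np.random.default_rng(5)
faces = rng.random((4, 6, 6))                  # two samples each of subjects 0 and 1
labels = np.array([0, 1, 0, 1])
augmented = augment_with_mirrors(faces)
aug_labels = np.concatenate([labels, labels])  # a mirror keeps its identity

# minimum-squared-error stand-in: regress flattened faces onto one-hot
# labels and classify by the largest response
X = augmented.reshape(len(augmented), -1)
Y = np.eye(2)[aug_labels]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = int(np.argmax(X[0] @ W))
```

Because each mirror keeps its subject label, the classifier sees pose/illumination variation it would otherwise never encounter, which is what the abstract credits for the robustness gain.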

Eyeglass Remover Network based on a Synthetic Image Dataset

  • Kang, Shinjin;Hahn, Teasung
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.4, pp.1486-1501, 2021
  • The removal of accessories from the face is one of the essential pre-processing stages in face recognition; despite its importance, however, a robust solution has not yet been provided. This paper proposes a network and a dataset-construction methodology to effectively remove only the glasses from facial images. To obtain a glasses-free image from an image with glasses by supervised learning, a conversion network and a set of paired training data are required. To this end, we created a large number of synthetic images of glasses being worn using facial-attribute-transformation networks, and adopted the conditional GAN (cGAN) framework for training. The trained network converts an in-the-wild face image with glasses into an image without glasses and operates stably even on faces of diverse races and ages wearing different styles of glasses.

Generation of Masked Face Image Using Deep Convolutional Autoencoder (컨볼루션 오토인코더를 이용한 마스크 착용 얼굴 이미지 생성)

  • Lee, Seung Ho
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.8, pp.1136-1141, 2022
  • Research on face recognition for masked faces has become increasingly important due to the COVID-19 pandemic. To achieve stable and practical recognition performance, a large amount of facial image data must be acquired for training, but it is difficult for researchers to obtain masked face images for every human subject. This paper proposes a novel method to synthesize a face image with a virtual mask pattern. A pair of images of a single subject, one masked and one unmasked, is fed into a convolutional autoencoder as training data, which allows the network to learn the geometric relationship between face and mask. In the inference step, given an unseen face image, the trained autoencoder generates a synthetic face image with a mask pattern. The proposed method rapidly generates realistic masked face images and can be more practical than methods that rely on facial feature point detection.
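
The learning setup, a network trained on (unmasked, masked) pairs that then masks unseen faces, can be illustrated with a drastically simplified stand-in: a single linear layer fitted by least squares instead of a convolutional autoencoder, on toy pairs where "masking" darkens the lower half of the image.

```python
import numpy as np

rng = np.random.default_rng(4)
# toy paired data: the "masked" target is the unmasked face with its lower
# half darkened, mimicking the (unmasked, masked) pairs the model trains on
unmasked = rng.random((100, 8, 8))
masked = unmasked.copy()
masked[:, 4:, :] *= 0.2

X = unmasked.reshape(100, -1)                # network input
Y = masked.reshape(100, -1)                  # reconstruction target
W, *_ = np.linalg.lstsq(X, Y, rcond=None)    # one linear layer as the "autoencoder"

# inference on an unseen face: the learned map applies the mask pattern
novel = rng.random((8, 8))
generated = (novel.reshape(-1) @ W).reshape(8, 8)
```

A real convolutional autoencoder learns a nonlinear, spatially aware version of this mapping, so the mask it paints follows face geometry rather than a fixed region.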

A study on Face Image Classification for Efficient Face Detection Using FLD

  • Nam, Mi-Young;Kim, Kwang-Baek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2004.05a, pp.106-109, 2004
  • Many reported methods assume that the faces in an image or image sequence have already been identified and localized. Face detection in images is a challenging task because of variability in scale, location, orientation, and pose, and it is made considerably more difficult by changes in head pose and illumination. In this paper, we present an efficient linear discriminant approach for multi-view face detection: training data are learned efficiently with the Fisher linear discriminant, using hierarchical models that are invariant to pose and background. This addresses the multi-view, multi-scale face detection problem quickly and efficiently and is well suited to detecting faces automatically. We also estimate the pose of each detected face and detect the eyes. The purpose of this paper is to classify face and non-face regions efficiently with the Fisher linear discriminant.


Research and Optimization of Face Detection Algorithm Based on MTCNN Model in Complex Environment (복잡한 환경에서 MTCNN 모델 기반 얼굴 검출 알고리즘 개선 연구)

  • Fu, Yumei;Kim, Minyoung;Jang, Jong-wook
    • Journal of the Korea Institute of Information and Communication Engineering, v.24 no.1, pp.50-56, 2020
  • With the rapid development of deep neural network theory and applications, face detection performance has improved. However, because deep neural network computation is complex and detection environments are highly cluttered, detecting faces quickly and accurately remains the main problem. This paper builds on the relatively simple MTCNN (Multi-Task Cascaded Convolutional Neural Network) model, using the FDDB (Face Detection Data Set and Benchmark), LFW (Labeled Faces in the Wild), and FaceScrub public datasets as training samples. While reviewing and introducing the MTCNN model, it explores how to improve training speed and performance at the same time. A dynamic image pyramid replaces the traditional image pyramid for segmenting samples, and the OHEM (online hard example mining) function of the MTCNN model is removed during training, improving training speed.
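
For context, the standard MTCNN image pyramid that the paper optimizes computes a sequence of scale factors so every face size maps onto the 12x12 P-Net input; the sketch below shows that baseline computation (the paper's "dynamic" variant adapts it, in a way the abstract does not specify). The default values for `min_face` and `factor` are common MTCNN choices, not taken from the paper.

```python
def pyramid_scales(img_shape, min_face=20, net_input=12, factor=0.709):
    """Scale factors for the MTCNN P-Net image pyramid: start so the
    smallest face to detect maps onto the 12x12 network input, then keep
    shrinking by `factor` until the image falls below that input size."""
    scale = net_input / min_face
    size = min(img_shape) * scale
    scales = []
    while size >= net_input:
        scales.append(scale)
        scale *= factor
        size *= factor
    return scales

scales = pyramid_scales((480, 640))
```

Each scale yields one resized copy of the frame for P-Net; fewer or better-chosen scales directly cut detection time, which is why the pyramid is a natural optimization target.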

Study on The Confidence Level of PCA-based Face Recognition Under Variable illumination Condition (조명 변화 환경에서 PCA 기반 얼굴인식 알고리즘의 신뢰도에 대한 연구)

  • Cho, Hyun-Jong;Kang, Min-Koo;Moon, Seung-Bin
    • Journal of the Institute of Electronics Engineers of Korea CI, v.46 no.2, pp.19-26, 2009
  • This paper studies how the recognition rate of PCA (Principal Component Analysis)-based face recognition changes with illumination, and the confidence level of the algorithm, by measuring the cumulative match score of the CMC (Cumulative Match Characteristic). We examined the confidence level under illumination changes and the selection of training images by testing both multiple training images per person under varying illumination and a single training image, while also changing the illumination conditions of the test images. The experiments show that the recognition rate drops in the multiple-training-image case compared with the single-training-image case. We nevertheless confirmed the confidence level of the algorithm under illumination variance: the training image corresponding to the identity of the test image appears high in the similarity ranking regardless of illumination changes and the number of training images.
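
The CMC cumulative match score used in this evaluation is a standard metric and can be computed directly from a probe-vs-gallery similarity matrix. A minimal sketch with an invented 3x3 similarity matrix:

```python
import numpy as np

def cmc_curve(similarity, gallery_ids, probe_ids):
    """Cumulative Match Characteristic: similarity is (n_probes, n_gallery).
    Entry k of the result is the fraction of probes whose true identity
    appears among the top-(k+1) ranked gallery entries."""
    order = np.argsort(-similarity, axis=1)            # best match first
    ranked = np.asarray(gallery_ids)[order]
    hits = ranked == np.asarray(probe_ids)[:, None]
    first_hit = hits.argmax(axis=1)                    # rank of the true match
    counts = np.bincount(first_hit, minlength=similarity.shape[1])
    return np.cumsum(counts) / len(probe_ids)

sim = np.array([[0.9, 0.2, 0.1],    # probe 0: true gallery 0 ranked first
                [0.3, 0.8, 0.4],    # probe 1: true gallery 1 ranked first
                [0.6, 0.5, 0.2]])   # probe 2: true gallery 2 ranked last
curve = cmc_curve(sim, gallery_ids=[0, 1, 2], probe_ids=[0, 1, 2])
```

The paper's confidence claim corresponds to this curve rising quickly: even when the rank-1 rate drops under illumination change, the true identity stays near the top of the similarity list.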