Title/Summary/Keyword: Face Features


FRS-OCC: Face Recognition System for Surveillance Based on Occlusion Invariant Technique

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security, v.21 no.8, pp.288-296, 2021
  • Automated face recognition at runtime is becoming increasingly important in surveillance and urban security. It is a difficult task given a constantly changing image landscape with varying features and attributes. For a system to be useful in industrial settings, its efficiency must not degrade when deployed on roads, intersections, and busy streets; recognition under such uncontrolled conditions remains a major problem in real-life applications. This paper addresses the problem of recognizing faces that are not fully visible (occlusion). Occlusion is a common occurrence, as anyone can change their appearance by wearing a scarf or sunglasses, or simply by growing a mustache or beard. Such discrepancies in facial appearance are frequently encountered in uncontrolled conditions and can defeat security systems based on face recognition. Although this type of variation has received relatively little attention in the literature, it is now a major research focus. Existing state-of-the-art techniques suffer from several limitations, most significantly low usability and poor response time. In this paper, an improved face recognition system, FRS-OCC, is developed to address occlusion. To build FRS-OCC, color and texture features are extracted, an incremental learning algorithm (Learn++) selects the most informative features, and a trained stacked autoencoder (SAE) deep learning model then recognizes the face. Overall, FRS-OCC introduces algorithms that improve response time to guarantee a benchmark quality of service in any situation. The AR face dataset is used to test and evaluate the proposed system.
On average, the FRS-OCC system outperformed other state-of-the-art methods, achieving an SE of 98.82%, SP of 98.49%, AC of 98.76%, and AUC of 0.9995. These results indicate that FRS-OCC is suitable for surveillance applications.
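The reported figures are standard binary-classification metrics. As a rough illustration (not the paper's code; function names are made up here), sensitivity (SE), specificity (SP), accuracy (AC), and AUC can be computed from hard predictions and raw scores like this:

```python
def confusion_counts(y_true, y_pred):
    # Tally the binary confusion matrix: true/false positives and negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def se_sp_ac(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    se = tp / (tp + fn)           # sensitivity: recall on the positive class
    sp = tn / (tn + fp)           # specificity: recall on the negative class
    ac = (tp + tn) / len(y_true)  # overall accuracy
    return se, sp, ac

def auc(y_true, scores):
    # AUC as the probability that a positive outranks a negative (ties count half).
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```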

De-Identified Face Image Generation within Face Verification for Privacy Protection (프라이버시 보호를 위한 얼굴 인증이 가능한 비식별화 얼굴 이미지 생성 연구)

  • Jung-jae Lee;Hyun-sik Na;To-min Ok;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology, v.33 no.2, pp.201-210, 2023
  • Deep learning-based face verification models show high performance and are used in many fields, but a user's face image may be leaked in the process of feeding it to the model. Although de-identification technology exists as a method for minimizing the exposure of face features, verification performance decreases when existing techniques are applied. In this paper, a de-identified face image is created through StyleGAN after combining the face features of another person. In addition, we propose a method of optimizing the combining ratio of features for a given face verification model using HopSkipJumpAttack. We visualize the images generated by the proposed method to check de-identification performance, and evaluate through experiments how well the face verification model's performance is maintained. That is, face verification can be performed using the de-identified image generated by the proposed method, and leakage of personal face information can be prevented.
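The core operation described above, blending a user's identity features with another person's at an optimized ratio, can be sketched minimally with a convex combination of feature vectors (an illustrative NumPy sketch; the paper operates in StyleGAN's latent/feature space and tunes the ratio with HopSkipJumpAttack, neither of which is shown here):

```python
import numpy as np

def combine_features(f_user, f_other, alpha):
    # Convex combination of two identity feature vectors; alpha weights the
    # genuine user's features against the cover identity's features.
    mixed = alpha * f_user + (1.0 - alpha) * f_other
    return mixed / np.linalg.norm(mixed)  # re-normalize to unit length

def cosine(a, b):
    # Cosine similarity, the usual score for comparing identity embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

With a large alpha the mixed vector stays verifiable as the user while no single input image is exposed; the paper's contribution is choosing alpha per verification model.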

Physiological Neuro-Fuzzy Learning Algorithm for Face Recognition

  • Kim, Kwang-Baek;Woo, Young-Woon;Park, Hyun-Jung
    • Journal of information and communication convergence engineering, v.5 no.1, pp.50-53, 2007
  • This paper presents face feature detection and a new physiological neuro-fuzzy learning method that uses two-dimensional variances based on gray-level variation and learns a statistical distribution of the detected face features. The method learns from the global face image rather than from partial face images. Face detection is performed by describing differences in variance between edge regions and stationary regions, derived from the gray-scale variation of the global face with its featured regions, including the nose, mouth, and both eyes. In the learning stage, the input layer obtained from the statistical distribution of the featured regions drives the new physiological neuro-fuzzy algorithm.
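The edge-versus-stationary distinction above rests on local gray-level variance being high near edges and near zero in flat regions. A minimal sketch of that measurement (window size and function name are illustrative, not from the paper):

```python
import numpy as np

def local_variance(img, k=3):
    # Gray-level variance in each k x k neighborhood: high near edges,
    # low in stationary (flat) regions.
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = img[y:y + k, x:x + k].var()
    return out
```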

Rotated face detection based on sharing features (특징들의 공유에 의한 기울어진 얼굴 검출)

  • Song, Young-Mo;Ko, Yun-Ho
    • Proceedings of the IEEK Conference, 2009.05a, pp.31-33, 2009
  • Face detection using the AdaBoost algorithm processes images rapidly while achieving high detection rates; it was regarded as the fastest and most robust approach and remains so today. Many improvements and extensions of this method have been proposed. However, previous approaches deal only with upright faces. They have limited discriminative capability for rotated faces because they apply the same features to both upright and rotated faces. To overcome this, one must either rotate the input images or train independent detectors, but this can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. This paper proposes a robust algorithm for finding rotated faces within an image. It reduces computational and sample complexity by finding common features that can be shared across the classes, and it can be extended to multi-class object detection.
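The per-feature cost that the abstract mentions is why AdaBoost-style detectors rely on the integral image: any rectangle (Haar-like) feature sum is evaluated in constant time with four lookups. A sketch of that building block (not the paper's shared-feature training itself):

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of all pixels above and to the left of (y, x), inclusive.
    return img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y, x, h, w):
    # Sum of the h x w rectangle with top-left corner (y, x), using at most
    # four lookups into the integral image regardless of rectangle size.
    total = ii[y + h - 1, x + w - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0 and x > 0:
        total += ii[y - 1, x - 1]
    return int(total)
```

A Haar-like feature is then just a signed combination of such rectangle sums, which is why sharing features across rotated-face classes saves so much computation.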

Human Face Recognition System Based on Skin Color Informations and Geometrical Feature Analysis of Face (피부색 정보와 얼굴의 구조적 특징 분석을 통한 얼굴 영상 인식 시스템)

  • Lee, Eung-Joo
    • Journal of the Institute of Convergence Signal Processing, v.1 no.1, pp.42-48, 2000
  • In this paper, we propose a face image recognition algorithm using skin color information; face region features such as the eyes, nose, and mouth; and geometrical features of the chin line. The proposed algorithm uses intensity as well as skin color information in the HSI color space, which is similar to the human visual system. Experimental results show improved face extraction quality, and the method adapts its extraction to different races. We also use chin-line information together with the geometrical face features (eyes, nose, mouth) to improve recognition quality. Experimental results show better recognition and extraction quality than conventional methods.
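The HSI space the abstract refers to separates intensity from chromatic information, which is what makes skin-color tests relatively robust to lighting. A minimal sketch of the standard RGB-to-HSI conversion (the thresholds an actual skin detector would apply on top of this are not shown):

```python
import math

def rgb_to_hsi(r, g, b):
    # Convert normalized RGB (0..1) to HSI. Intensity is the channel mean;
    # saturation and hue carry the chromatic information used for skin tests.
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:            # hue lives in [0, 360); reflect the lower half-plane
        h = 360.0 - h
    return h, s, i
```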

Face Detection using Zernike Moments (Zernike 모멘트를 이용한 얼굴 검출)

  • Lee, Daeho
    • Journal of Korea Multimedia Society, v.10 no.2, pp.179-186, 2007
  • This paper proposes a novel face detection method using Zernike moments. To detect faces in an image, local regions within multiscale sliding windows are classified as face or non-face by a neural network whose input features are Zernike moments. The feature dimension is reduced thanks to the reconstruction capability of orthogonal moments. In addition, because the magnitude of a Zernike moment is invariant to rotation, tilted faces can be detected. Although the detection rate of the proposed method on upright faces is lower than in experiments using intensity features, its results on rotated faces are more robust. With additional compensation and features, the proposed scheme may be well suited to the later stages of classification.
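The rotation invariance claimed above comes from the fact that rotating the image only multiplies each Zernike moment by a unit-modulus phase, leaving its magnitude unchanged. A direct numerical sketch over a sampled unit disk (grid size and function names are illustrative, not the paper's implementation):

```python
import math, cmath

def zernike_moment(f, n, m, grid=64):
    # Z_nm = (n+1)/pi * integral over the unit disk of
    #        f(x, y) * R_nm(r) * exp(-i * m * theta),
    # approximated by midpoint sampling on a grid x grid lattice.
    total = 0j
    step = 2.0 / grid
    for iy in range(grid):
        for ix in range(grid):
            x = -1 + (ix + 0.5) * step
            y = -1 + (iy + 0.5) * step
            r = math.hypot(x, y)
            if r > 1:
                continue  # Zernike polynomials are defined on the unit disk
            theta = math.atan2(y, x)
            # Radial polynomial R_nm(r) from its factorial series.
            rad = sum((-1) ** s * math.factorial(n - s)
                      / (math.factorial(s)
                         * math.factorial((n + abs(m)) // 2 - s)
                         * math.factorial((n - abs(m)) // 2 - s))
                      * r ** (n - 2 * s)
                      for s in range((n - abs(m)) // 2 + 1))
            total += f(x, y) * rad * cmath.exp(-1j * m * theta) * step * step
    return (n + 1) / math.pi * total
```

Rotating the input changes the phase of the result but not its magnitude, which is the property the detector exploits for tilted faces.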

Compressed Ensemble of Deep Convolutional Neural Networks with Global and Local Facial Features for Improved Face Recognition (얼굴인식 성능 향상을 위한 얼굴 전역 및 지역 특징 기반 앙상블 압축 심층합성곱신경망 모델 제안)

  • Yoon, Kyung Shin;Choi, Jae Young
    • Journal of Korea Multimedia Society, v.23 no.8, pp.1019-1029, 2020
  • In this paper, we propose a novel knowledge distillation algorithm that creates a compressed deep ensemble network using both local and global features of face images. To transfer the high recognition performance of the ensemble of deep networks to a single deep network, the class-prediction probability (the softmax output of the ensemble) is used as a soft target for training the single network. By applying knowledge distillation, the local feature information obtained by training the deep ensemble on facial subregions of the face image is transferred to a single deep network, creating a so-called compressed ensemble DCNN. Experimental results demonstrate that the proposed compressed ensemble network maintains the recognition performance of the complex ensemble of deep networks and surpasses that of a single deep network. In addition, the proposed method significantly reduces storage (memory) space and execution time compared with conventional ensembles of deep networks developed for face recognition.
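The soft-target mechanism described above is the standard distillation loss: a cross-entropy between the teacher's temperature-softened softmax output and the student's. A minimal NumPy sketch (temperature value and function names are illustrative; the gradient scaling by T^2 used in practice is omitted):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the class probabilities,
    # exposing the teacher's "dark knowledge" about wrong classes.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Cross-entropy between the teacher's softened output (the soft target)
    # and the student's softened output; minimized when they coincide.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return float(-(p_teacher * np.log(p_student + 1e-12)).sum())
```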

Identification System Based on Partial Face Feature Extraction (부분 얼굴 특징 추출에 기반한 신원 확인 시스템)

  • Choi, Sun-Hyung;Cho, Seong-Won;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems, v.22 no.2, pp.168-173, 2012
  • This paper presents a new human identification algorithm that uses partial features from the uncovered portion of the face when a person wears a mask. After the face area is detected, features are extracted from the eye region above the mask. Identification is performed by comparing the acquired features with the registered ones. The SIFT (scale-invariant feature transform) algorithm is used for feature extraction; the extracted features are independent of brightness and invariant to image scale and rotation. Experimental results show the effectiveness of the proposed algorithm.
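SIFT descriptors are typically compared with nearest-neighbour matching plus Lowe's ratio test (an assumption for illustration; the abstract does not specify the matching rule). A minimal sketch over precomputed descriptor arrays:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    # For each descriptor in desc_a, find its two nearest neighbours in
    # desc_b; accept the match only if the closest is clearly closer than
    # the runner-up (Lowe's ratio test rejects ambiguous matches).
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The number of surviving matches against each enrolled identity can then serve as the identification score for the eye-region comparison.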

Multi-view Human Recognition based on Face and Gait Features Detection

  • Nguyen, Anh Viet;Yu, He Xiao;Shin, Jae-Ho;Park, Sang-Yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society, v.11 no.12, pp.1676-1687, 2008
  • In this paper, we propose a new multi-view human recognition method based on face and gait feature detection. The position of the moving object is obtained from the difference of two consecutive frames. Then, from the extracted object, the first important characteristic, the walking direction, is determined from the contour of the head-and-shoulder region. If the person appears frontally in the camera, face features are used for recognition: face detection combines skin color with Haar-like features, while eigen-images and PCA are used in the recognition stage. In the other case, when the walking direction is not frontal, gait features are used instead. To evaluate the proposed method and compare it with other approaches, we present simulation results from indoor and outdoor environments. Experimental results show that the proposed algorithm has better recognition efficiency than conventional single-view recognition methods.
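The moving-object step described above, taking the difference of two consecutive frames, can be sketched minimally (the threshold value is illustrative):

```python
import numpy as np

def moving_mask(prev_frame, cur_frame, thresh=25):
    # Absolute difference of two consecutive grayscale frames; pixels that
    # changed by more than the threshold are flagged as the moving object.
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    return diff > thresh
```

The head-and-shoulder contour used to judge walking direction would then be extracted from the connected region of this mask.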

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB, v.9B no.5, pp.563-570, 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. Facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
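The SVD factorization step follows the classical structure-from-motion pattern: stack the tracked 2D feature coordinates into a measurement matrix, register it by subtracting image centroids, and split it into motion and shape with a truncated SVD. A simplified orthographic sketch (an assumption for illustration; the paper uses the paraperspective camera model, whose normalization differs):

```python
import numpy as np

def factor_measurements(W, rank=3):
    # Factor the registered 2F x P measurement matrix (F frames, P features)
    # into motion (2F x rank) and shape (rank x P) via a truncated SVD,
    # as in factorization-based structure from motion.
    W = W - W.mean(axis=1, keepdims=True)   # register: subtract image centroids
    U, sv, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :rank] * np.sqrt(sv[:rank])    # motion: camera rows per frame
    S = np.sqrt(sv[:rank])[:, None] * Vt[:rank]  # shape: 3D point coordinates
    return M, S
```

For noise-free affine projections the centered measurement matrix has rank at most 3, so the truncated product M @ S reproduces it exactly; a subsequent metric-upgrade step (not shown) resolves the remaining linear ambiguity.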