• Title/Summary/Keyword: face feature


Face Recognition Using A New Methodology For Independent Component Analysis (새로운 독립 요소 해석 방법론에 의한 얼굴 인식)

  • 류재흥;고재흥
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2000.11a
    • /
    • pp.305-309
    • /
    • 2000
  • In this paper, we present a new methodology for face recognition after analyzing the conventional ICA (Independent Component Analysis) based approach. In the literature, ICA-based methods have followed the same procedure without exception: first, PCA (Principal Component Analysis) is used for feature extraction, and then ICA learning is applied for feature enhancement in the reduced dimension. However, it is contradictory that features meant to be extracted with higher-order moments in fact depend on variance, a second-order statistic, and the possibility that a necessary component lies in the discarded feature space is not considered. In the new methodology, features are extracted using the magnitude of kurtosis (the 4th-order central moment or cumulant); this corresponds to PCA-based feature extraction using eigenvalues (the 2nd-order central moment, i.e., variance). A synergy effect of PCA and ICA can be achieved if PCA is used as a noise-reduction filter. The ICA methodology is analyzed using SVD (Singular Value Decomposition): PCA performs whitening and noise reduction, while ICA performs feature extraction. Simulation results show the effectiveness of the methodology compared to the conventional ICA approach. (See the sketch below.)

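A minimal sketch of the PCA-whitening plus ICA pipeline with kurtosis-based component selection described above, assuming scikit-learn's PCA and FastICA and SciPy's kurtosis as stand-ins for the paper's own learning rules; the dimensions are illustrative, not the paper's settings.

```python
# Hypothetical sketch: PCA as whitening / noise-reduction filter, ICA for
# feature extraction, components ranked by |kurtosis| (4th-order statistic).
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import PCA, FastICA

def ica_face_features(X_train, X_query, n_pca=100, n_keep=50):
    """X_train, X_query: (n_samples, n_pixels) flattened face images."""
    # PCA handles whitening and noise reduction (2nd-order statistics).
    pca = PCA(n_components=n_pca, whiten=True).fit(X_train)
    Z_train = pca.transform(X_train)
    Z_query = pca.transform(X_query)

    # ICA learns statistically independent components in the whitened space.
    ica = FastICA(n_components=n_pca, random_state=0).fit(Z_train)
    S_train = ica.transform(Z_train)
    S_query = ica.transform(Z_query)

    # Rank components by kurtosis magnitude instead of variance; keep the top ones.
    k = np.abs(kurtosis(S_train, axis=0))
    keep = np.argsort(k)[::-1][:n_keep]
    return S_train[:, keep], S_query[:, keep]
```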

Improved Face Detection Algorithm Using Face Verification (얼굴 검증을 이용한 개선된 얼굴 검출)

  • Oh, Jeong-su
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.10
    • /
    • pp.1334-1339
    • /
    • 2018
  • Viola & Jones's face detection algorithm is a typical face detection algorithm and shows excellent face detection performance. However, the Viola & Jones's algorithm in images including many faces generates undetected faces and wrong detected faces, such as false faces and duplicate detected faces, due to face diversity. This paper proposes an improved face detection algorithm using a face verification algorithm that eliminates the false detected faces generated from the Viola & Jones's algorithm. The proposed face verification algorithm verifies whether the detected face is valid by evaluating its size, its skin color in the designated area, its edges generated from eyes and mouth, and its duplicate detection. In the face verification experiment of 658 face images detected by the Viola & Jones's algorithm, the proposed face verification algorithm shows that all the face images created in the real person are verified.

Face Recognition Using Automatic Face Enrollment and Update for Access Control in Apartment Building Entrance (아파트 공동현관 출입 통제를 위한 자동 얼굴 등록 및 갱신 기반 얼굴인식)

  • Lee, Seung Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.9
    • /
    • pp.1152-1157
    • /
    • 2021
  • This paper proposes a face recognition method for access control at an apartment building entrance. Unlike most existing face recognition methods, the proposed one does not require any manual face enrollment. When a person exits through the main entrance door, his or her face data (i.e., face image and face feature) are automatically extracted from the captured video and registered in the database. When the person needs to enter the building again, the face data are extracted and the corresponding face feature is compared with the features registered in the database. If a matching person exists, the entrance door opens and access is allowed. The face data of the matching person are immediately deleted, so the database always holds the latest face data of outgoing residents; thus, higher recognition accuracy can be expected. To verify the feasibility of the proposed method, a Python-based face recognition system was implemented together with a cloud service provided by a web portal. (See the sketch below.)
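A minimal sketch of the enroll-on-exit / match-on-entry logic described above, assuming the open-source face_recognition package as the feature extractor and an in-memory dictionary in place of the paper's database; the 0.6 distance threshold is that library's common default, not a value from the paper.

```python
# Sketch of automatic enrollment on exit and match-then-delete on entry.
import numpy as np
import face_recognition

db = {}  # person_id -> 128-d face feature, enrolled when the person exits

def on_exit(person_id, rgb_frame):
    """Automatically enroll the face feature of a person leaving the building."""
    encodings = face_recognition.face_encodings(rgb_frame)
    if encodings:
        db[person_id] = encodings[0]

def on_entry(rgb_frame, threshold=0.6):
    """Match the incoming face against enrolled features; delete on success."""
    encodings = face_recognition.face_encodings(rgb_frame)
    if not encodings or not db:
        return None
    ids = list(db.keys())
    feats = np.stack([db[i] for i in ids])
    dists = face_recognition.face_distance(feats, encodings[0])
    best = int(np.argmin(dists))
    if dists[best] < threshold:
        matched = ids[best]
        del db[matched]        # keep only the latest face data of outgoing people
        return matched         # open the entrance door for this person
    return None
```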

Robust Extraction of Facial Features under Illumination Variations (조명 변화에 견고한 얼굴 특징 추출)

  • Jung Sung-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.6 s.38
    • /
    • pp.1-8
    • /
    • 2005
  • Facial analysis is used in many applications such as face recognition systems, human-computer interfaces driven by head movements or facial expressions, model-based coding, and virtual reality. All of these applications require very precise extraction of facial feature points. In this paper we present a method for automatic extraction of facial feature points such as mouth corners, eye corners, and eyebrow corners. First, the face region is detected by an AdaBoost-based object detection algorithm. Then a combination of three kinds of feature energy is computed for the facial features: valley energy, intensity energy, and edge energy. Feature areas are detected by searching for horizontal rectangles with high feature energy. Finally, a corner detection algorithm is applied to the end region of each feature area. Because the three feature energies are integrated and the proposed estimation of valley energy and intensity energy adapts to illumination changes, the proposed feature extraction method is robust under various conditions. (See the sketch below.)

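A hedged sketch of combining the three feature energies inside a detected face region and searching for high-energy horizontal bands; the morphological valley estimate, weights, and band size are assumptions for illustration.

```python
# Illustrative combination of valley, intensity, and edge energy maps.
import cv2
import numpy as np

def feature_energy(gray_face, w=(1.0, 1.0, 1.0)):
    g = gray_face.astype(np.float32) / 255.0
    # Valley energy: dark pits (eyes, mouth) highlighted by morphological closing.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    valley = cv2.morphologyEx(g, cv2.MORPH_CLOSE, kernel) - g
    # Intensity energy: darker-than-average pixels score higher.
    intensity = np.clip(g.mean() - g, 0, None)
    # Edge energy: gradient magnitude (Sobel).
    gx = cv2.Sobel(g, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(g, cv2.CV_32F, 0, 1)
    edge = cv2.magnitude(gx, gy)
    return w[0] * valley + w[1] * intensity + w[2] * edge

def find_feature_bands(energy, band_height=8, n_bands=4):
    # Search horizontal bands with high accumulated energy (candidate feature areas).
    row_sum = energy.sum(axis=1)
    scores = np.convolve(row_sum, np.ones(band_height), mode="valid")
    return np.argsort(scores)[::-1][:n_bands]  # top band starting rows
```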

Automatic Camera Pose Determination from a Single Face Image

  • Wei, Li;Lee, Eung-Joo;Ok, Soo-Yol;Bae, Sung-Ho;Lee, Suk-Hwan;Choo, Young-Yeol;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.12
    • /
    • pp.1566-1576
    • /
    • 2007
  • Camera pose information recovered from a 2D face image is very important for synchronizing a virtual 3D face model with the real face, and also for other uses such as human-computer interfaces, 3D object estimation, and automatic camera control. In this paper, we present a camera pose determination algorithm that works from a single 2D face image using the relationship between the mouth position and the face region boundary. The algorithm first corrects color bias with a lighting compensation algorithm, then nonlinearly transforms the image into the YCbCr color space and uses the visible chrominance features of the face in this space to detect the face region. For each face candidate, the nearly inverse relationship between the Cb and Cr clusters of the face is used to detect the mouth position. The geometric relationship between the mouth position and the face region boundary then determines the camera rotation angles about the x-axis and y-axis, and the relationship between face region size and camera-face distance determines that distance. Experimental results demonstrate the validity of the algorithm, and the correct determination rate is high enough for practical application. (See the sketch below.)

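A toy geometric sketch of the final step only: rotation angles from the mouth offset relative to the face region, and camera-face distance from apparent face size. The small-angle model, focal length, and reference face width are assumptions, not the paper's formulation.

```python
# Hypothetical geometry: mouth offset -> yaw/pitch, face size -> distance.
import math

def head_pose_from_mouth(face_box, mouth_xy,
                         focal_px=800.0, real_face_width_cm=15.0):
    x, y, w, h = face_box              # detected face region (pixels)
    mx, my = mouth_xy                  # detected mouth center (pixels)
    cx = x + w / 2.0                   # horizontal center of the face box

    # Horizontal mouth offset -> rotation about the y-axis (yaw);
    # vertical offset from its expected position -> rotation about the x-axis (pitch).
    yaw = math.degrees(math.atan2(mx - cx, w))
    pitch = math.degrees(math.atan2(my - (y + 0.75 * h), h))

    # Apparent face width vs. assumed real width -> camera-face distance.
    distance_cm = focal_px * real_face_width_cm / max(w, 1)
    return yaw, pitch, distance_cm
```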

Real Time Discrimination of 3 Dimensional Face Pose (실시간 3차원 얼굴 방향 식별)

  • Kim, Tae-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.3 no.1
    • /
    • pp.47-52
    • /
    • 2010
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright. We develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data, capturing the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is then classified in this eigen eye feature space. In the experiments, the discrimination rates for subjects close to the camera ranged from 94.67% (minimum) to 100% (maximum). (See the sketch below.)

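A hedged sketch of an eigen eye feature space: PCA over simple pupil-geometry descriptors followed by nearest-neighbor pose classification. The descriptor choice and the use of k-NN are assumptions standing in for the paper's training procedure.

```python
# Illustrative eigen eye feature space over bright-pupil geometry.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def pupil_geometry(left, right):
    """left/right: (x, y, major_axis, minor_axis) of each bright pupil blob."""
    lx, ly, lM, lm = left
    rx, ry, rM, rm = right
    dist = np.hypot(rx - lx, ry - ly)
    # Inter-pupil distance, per-pupil elongation, and vertical tilt of the eye line.
    return np.array([dist, lM / lm, rM / rm, (ry - ly) / max(dist, 1e-6)])

def train_pose_classifier(pupil_pairs, pose_labels, n_components=3):
    X = np.stack([pupil_geometry(l, r) for l, r in pupil_pairs])
    pca = PCA(n_components=n_components).fit(X)      # eigen eye feature space
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), pose_labels)
    return pca, clf

def classify_pose(pca, clf, left, right):
    z = pca.transform(pupil_geometry(left, right).reshape(1, -1))
    return clf.predict(z)[0]
```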

Face Recognition System for Unattended reception interface (무인 접수 인터페이스를 위한 얼굴인식 시스템)

  • Park, Se-Hyun;Ryu, Jeong-Tak
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.17 no.3
    • /
    • pp.1-7
    • /
    • 2012
  • As personal information is increasingly used for user authentication, a trustworthy means of verification is required. Recently, much research has been carried out on biometric systems that use a part of the human body like a password. Among the various biometric technologies, face recognition, which uses characteristics of the individual face, makes feature extraction easy. In this paper, we implement a face recognition system for an unattended reception interface. Our method is performed in two steps: first, the face is extracted using the Haar-like feature method; second, a method combining PCA and LDA is used for face recognition. To assess the effectiveness of the proposed system, it was tested, and the experimental results show that the proposed method is applicable to an unattended reception interface. (See the sketch below.)
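A minimal sketch of Haar-like face extraction followed by PCA+LDA recognition, using OpenCV and scikit-learn; the crop size, PCA dimension, and classifier details are assumptions.

```python
# Two-step sketch: Haar cascade face crop, then PCA for dimensionality
# reduction and LDA for class-separating recognition.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(gray, size=(64, 64)):
    faces = cascade.detectMultiScale(gray, 1.1, 4)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], size).flatten()

def train(face_vectors, labels, n_pca=80):
    # PCA reduces dimensionality; LDA then maximizes between-class separability.
    pca = PCA(n_components=n_pca).fit(face_vectors)
    lda = LDA().fit(pca.transform(face_vectors), labels)
    return pca, lda

def recognize(pca, lda, face_vector):
    return lda.predict(pca.transform(face_vector.reshape(1, -1)))[0]
```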

A New Confidence Measure for Eye Detection Using Pixel Selection (눈 검출에서의 픽셀 선택을 이용한 신뢰 척도)

  • Lee, Yonggeol;Choi, Sang-Il
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.7
    • /
    • pp.291-296
    • /
    • 2015
  • In this paper, we propose a new confidence measure using pixel selection for eye detection and design a hybrid eye detector. For this, we produce sub-images by applying a pixel selection method to the eye patches and construct a BDA (Biased Discriminant Analysis) feature space for measuring the confidence of the eye detection results. For the hybrid eye detector, we select HFED (Haar-like Feature based Eye Detector) and MFED (MCT Feature based Eye Detector), which are complementary to each other, as basic detectors. For a given image, each basic detector performs eye detection, and the confidence of each result is estimated in the BDA feature space by calculating the distance between the produced eye patch and the mean of the positive samples in the training set. The result with the higher confidence is adopted as the final eye detection result and is used in the face alignment process for face recognition. Experimental results on various face databases show that the proposed method performs more accurate eye detection and consequently yields better face recognition performance than other methods. (See the sketch below.)
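A hedged sketch of the confidence idea: each detector's eye patch is projected into a discriminant feature space and scored by its distance to the mean of positive training samples, and the closer result wins. scikit-learn's LDA is used here as a stand-in for the paper's BDA, and the pixel selection step is omitted.

```python
# Distance-to-positive-mean confidence in a discriminant feature space.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

def fit_confidence_space(patches, labels):
    """patches: (n, d) flattened eye patches; labels: 1 = true eye, 0 = non-eye (NumPy arrays)."""
    lda = LDA(n_components=1).fit(patches, labels)
    pos_mean = lda.transform(patches[labels == 1]).mean(axis=0)
    return lda, pos_mean

def confidence(lda, pos_mean, patch):
    z = lda.transform(patch.reshape(1, -1))
    return -np.linalg.norm(z - pos_mean)   # smaller distance -> higher confidence

def hybrid_detect(lda, pos_mean, patch_hfed, patch_mfed, result_hfed, result_mfed):
    """Adopt the eye coordinates whose patch scores the higher confidence."""
    c_h = confidence(lda, pos_mean, patch_hfed)
    c_m = confidence(lda, pos_mean, patch_mfed)
    return result_hfed if c_h >= c_m else result_mfed
```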

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.2 no.2
    • /
    • pp.120-133
    • /
    • 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and this paper deals with both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image; by updating the template dynamically, the head pose can be recovered regardless of lighting variations and self-occlusion. In the facial expression synthesis phase, the variations of the major facial feature points in the face images are tracked using optical flow and retargeted to the 3D face model. At the same time, an RBF (Radial Basis Function) is exploited to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time. (See the sketch below.)
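An illustrative sketch of the expression-cloning phase only: major feature points are tracked with pyramidal Lucas-Kanade optical flow, and their displacements are propagated to nearby regional vertices with an RBF. SciPy's RBFInterpolator and the thin-plate-spline kernel are stand-ins for the paper's RBF setup.

```python
# Track major feature points with LK optical flow, then deform regional
# vertices by interpolating the tracked displacements with an RBF.
import cv2
import numpy as np
from scipy.interpolate import RBFInterpolator

def track_feature_points(prev_gray, cur_gray, prev_pts):
    """prev_pts: (n, 1, 2) float32 major feature points from the previous frame."""
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  prev_pts, None)
    ok = status.ravel() == 1
    return prev_pts[ok].reshape(-1, 2), cur_pts[ok].reshape(-1, 2)

def deform_region(prev_pts, cur_pts, region_vertices):
    """Retarget major-point displacements to regional (indirect) vertices via RBF."""
    displacement = cur_pts - prev_pts
    rbf = RBFInterpolator(prev_pts, displacement, kernel="thin_plate_spline")
    return region_vertices + rbf(region_vertices)
```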

A Study on Deep Learning Structure of Multi-Block Method for Improving Face Recognition (얼굴 인식률 향상을 위한 멀티 블록 방식의 딥러닝 구조에 관한 연구)

  • Ra, Seung-Tak;Kim, Hong-Jik;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.22 no.4
    • /
    • pp.933-940
    • /
    • 2018
  • In this paper, we propose a multi-block deep learning structure for improving the face recognition rate. The proposed recognition structure consists of three steps: multi-blocking of the input image, multi-block selection by numerical analysis of facial features, and deep learning on the selected multi-blocks. First, the input image is divided into four blocks. Second, in the multi-block selection step, the feature values of the four blocks are examined and only the blocks with many features are selected. Third, deep learning is performed on the selected multi-blocks, and recognition is carried out with the model trained on these high-feature blocks. To evaluate the performance of the proposed deep learning structure, we used the CAS-PEAL face database. Experimental results show that the proposed multi-block structure achieves a 2.3% higher face recognition rate than the existing deep learning structure. (See the sketch below.)
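A hedged sketch of the multi-block front end only: the face image is split into four blocks, each block is scored by a simple feature-richness measure (ORB keypoint count here, purely as an assumption), and the highest-scoring blocks are kept as inputs to the deep learning model.

```python
# Split into four blocks, score by feature richness, keep the best blocks.
import cv2
import numpy as np

def split_into_blocks(gray):
    h, w = gray.shape
    return [gray[:h // 2, :w // 2], gray[:h // 2, w // 2:],
            gray[h // 2:, :w // 2], gray[h // 2:, w // 2:]]

def select_blocks(gray, n_keep=2):
    orb = cv2.ORB_create()
    blocks = split_into_blocks(gray)
    scores = [len(orb.detect(b, None)) for b in blocks]   # feature-value proxy
    order = np.argsort(scores)[::-1][:n_keep]
    return [blocks[i] for i in order]                      # inputs to the CNN
```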