• Title/Summary/Keyword: Face Feature detection


Invariant Range Image Multi-Pose Face Recognition Using Fuzzy c-Means

  • Phokharatkul, Pisit;Pansang, Seri
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.1244-1248 / 2005
  • In this paper, we propose fuzzy c-means (FCM) clustering to reduce recognition errors in invariant range-image, multi-pose face recognition. Scale, center, and pose errors are corrected with geometric transformations. Face data are digitized into range images by a laser range finder, which does not depend on the ambient light source. The digitized range-image face data are then used as a model to generate multi-pose data, and each pose is reduced in size by linear reduction before being stored in the database. The reduced range-image data are transformed into a gradient face model for facial feature extraction and for matching with fuzzy memberships adjusted by fuzzy c-means. The proposed method was tested on facial range images from 40 people with normal facial expressions and achieved a detection and recognition accuracy of about 93 percent. The system is also robust to typical image-acquisition problems such as noise, vertically rotated faces, and limited range resolution.
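
The matching step relies on fuzzy c-means membership values. Below is a minimal sketch of standard FCM in NumPy; the data layout (flattened gradient-face vectors), cluster count, and fuzzifier value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuzzy_c_means(X, c=5, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """Standard fuzzy c-means on row vectors X (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2 / (m - 1)))      # inverse-distance memberships
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U

# Illustrative use: cluster flattened gradient-face vectors (hypothetical file).
# X = np.load("gradient_faces.npy")
# centers, memberships = fuzzy_c_means(X, c=5)
```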


The I-MCTBoost Classifier for Real-time Face Detection in Depth Image (깊이영상에서 실시간 얼굴 검출을 위한 I-MCTBoost)

  • Joo, Sung-Il;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.19 no.3 / pp.25-35 / 2014
  • This paper proposes a boosting-based classification method for real-time face detection. The proposed method uses depth images to ensure robust face detection under changes in lighting and face size, and uses depth difference features for learning and recognition through the I-MCTBoost classifier. I-MCTBoost performs recognition by connecting strong classifiers that are built from weak classifiers. The learning process for the weak classifiers is as follows: first, depth difference features are generated; eight of these features are combined to form a weak classifier, with each feature expressed as a binary bit. Strong classifiers are learned by repeatedly selecting a specified number of weak classifiers, and become capable of strong classification through a learning process in which the weights of the learning samples are updated and learning data is added. This paper explains the depth difference features and proposes a learning method for the weak and strong classifiers of I-MCTBoost. Lastly, the paper compares the proposed classifiers with classifiers using conventional MCT through qualitative and quantitative analyses to establish the feasibility and efficiency of the proposed classifiers.
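
The weak classifiers are described as eight depth-difference features combined into a binary code, in the spirit of a modified census transform. The sketch below encodes eight pairwise depth differences of a 3x3 patch into an 8-bit index; the patch layout and threshold are assumptions for illustration, not the paper's exact feature definition.

```python
import numpy as np

def depth_difference_code(patch, thresh=0.0):
    """Encode a 3x3 depth patch as an 8-bit code: each bit is one
    depth-difference feature (neighbor minus center) thresholded."""
    center = patch[1, 1]
    neighbors = np.delete(patch.ravel(), 4)        # the 8 neighbors of the center pixel
    bits = (neighbors - center > thresh).astype(np.uint8)
    code = 0
    for b in bits:                                 # pack the bits into an index in [0, 255]
        code = (code << 1) | int(b)
    return code

# Illustrative use on a synthetic depth patch (values in meters):
patch = np.array([[1.20, 1.18, 1.22],
                  [1.19, 1.15, 1.21],
                  [1.23, 1.17, 1.20]])
print(depth_difference_code(patch))                # weak-classifier lookup index
```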

Albedo Based Fake Face Detection (빛의 반사량 측정을 통한 가면 착용 위변조 얼굴 검출)

  • Kim, Young-Shin;Na, Jae-Keun;Yoon, Sung-Beak;Yi, June-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.6 / pp.139-146 / 2008
  • Masked fake face detection using ordinary visible images is a formidable task when the mask is accurately made with special makeup. Considering recent advances in special makeup technology, a reliable solution for detecting masked fake faces is essential to the development of a complete face recognition system. This research proposes a method for masked fake face detection that exploits the reflectance disparity due to object material and surface color. First, we show that measurement of albedo can be simplified to radiance measurement when a practical face recognition system is deployed in a user-cooperative environment. This enables us to obtain albedo simply from grey values in the captured image. Second, we find that 850nm infrared light is effective in discriminating between facial skin and mask material using reflectance disparity, while 650nm visible light is known to be suitable for distinguishing facial skin colors between ethnic groups. We use a 2D vector consisting of radiance measurements under 850nm and 650nm illumination as a feature vector. Facial skin and mask material show linearly separable distributions in the feature space. By employing FIB, we achieved 97.8% accuracy in fake face detection. Our method is applicable to faces of different skin colors and can be easily implemented into commercial face recognition systems.
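
Since the two radiance measurements form a 2D feature vector and the classes are described as linearly separable, a simple linear discriminant is enough to illustrate the decision step. The sketch below fits Fisher's linear discriminant on synthetic 2D points; the discriminant choice, data values, and threshold rule are assumptions for illustration, not the classifier used in the paper.

```python
import numpy as np

def fisher_ld(X_skin, X_mask):
    """Fisher's linear discriminant for two classes of 2D radiance features
    (850 nm, 650 nm). Returns the projection vector w and a decision threshold."""
    m1, m2 = X_skin.mean(axis=0), X_mask.mean(axis=0)
    Sw = np.cov(X_skin, rowvar=False) + np.cov(X_mask, rowvar=False)   # within-class scatter
    w = np.linalg.solve(Sw, m1 - m2)
    thr = 0.5 * (X_skin @ w).mean() + 0.5 * (X_mask @ w).mean()        # midpoint of projected means
    return w, thr

# Illustrative synthetic samples: columns are (850 nm, 650 nm) grey values.
rng = np.random.default_rng(0)
skin = rng.normal([180, 120], 8, size=(50, 2))
mask = rng.normal([120, 130], 8, size=(50, 2))
w, thr = fisher_ld(skin, mask)
is_skin = (np.array([[175, 118]]) @ w) > thr       # classify a new measurement
print(bool(is_skin[0]))
```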

A Study on Face Recognition Using Directional Face Shape and SOFM (방향성 얼굴형상과 SOFM을 이용한 얼굴 인식에 관한 연구)

  • Kim, Seung-Jae;Lee, Jung-Jae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.6 / pp.109-116 / 2019
  • This study proposes a robust detection algorithm that identifies a face shape more stably under changes in lighting and rotation while balancing computational efficiency against detection performance. The proposed algorithm takes a face shape from a single camera as input, segments the face area through pre-processing, and then identifies the shape using a Self-Organizing Feature Map (SOFM). Because the face area is sensitive to lighting, has many degrees of freedom, and carries a large error bound, it is difficult to recognize exactly; to improve the identification rate, rotation information on the face shape was stored in a database and principal component analysis was applied. Because the reduced dimensionality requires fewer calculations, the time needed for real-time identification can also be decreased.
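
The identification step uses a self-organizing feature map. Below is a minimal NumPy sketch of SOFM training on PCA-reduced face-shape vectors; the map size, learning-rate schedule, and input layout are assumptions, not the study's configuration.

```python
import numpy as np

def train_sofm(X, grid=(6, 6), n_iter=1000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing feature map on row vectors X."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.random((h * w, X.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
        lr = lr0 * np.exp(-t / n_iter)                   # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)             # shrinking neighborhood
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        nb = np.exp(-d2 / (2 * sigma ** 2))              # neighborhood weights on the grid
        W += lr * nb[:, None] * (x - W)
    return W

# Illustrative use: X would hold PCA-reduced face-shape vectors (hypothetical file).
# W = train_sofm(np.load("face_shapes_pca.npy"))
```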

Crosswalk Detection using Feature Vectors in Road Images (특징 벡터를 이용한 도로영상의 횡단보도 검출)

  • Lee, Geun-mo;Park, Soon-Yong
    • The Journal of Korea Robotics Society / v.12 no.2 / pp.217-227 / 2017
  • Crosswalk detection is an important part of the Pedestrian Protection System in autonomous vehicles. Different methods of crosswalk detection have been introduced so far, using crosswalk edge features, the distance between crosswalk blocks, laser scanning, Hough transformation, and Fourier transformation. However, most of these methods fail to detect crosswalks accurately when the markings are damaged, faded, or partly occluded. Furthermore, these methods face difficulties when applied to real road environments with many vehicles. In this paper, we address this problem by first using a region-based binarization technique and an x-axis histogram to detect candidate crosswalk areas. Then, we apply a Support Vector Machine (SVM) based classification method to decide whether each candidate area contains a crosswalk. Experimental results show that our method can detect crosswalks under different environmental conditions with a higher recognition rate, even when they are faded or partly occluded.
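
The described pipeline is: region-based binarization, an x-axis histogram to find candidate areas, then SVM classification. Below is a minimal sketch of the candidate-finding and classification steps; the ratio threshold, feature dimensionality, use of scikit-learn, and placeholder data are all assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVC

def candidate_columns(binary, min_ratio=0.3):
    """Return (start, end) column ranges whose white-pixel ratio along the
    x-axis histogram exceeds min_ratio; these are crosswalk candidates."""
    hist = binary.mean(axis=0)                              # x-axis histogram of a 0/1 image
    mask = hist > min_ratio
    edges = np.flatnonzero(np.diff(np.r_[0, mask.astype(int), 0]))
    return list(zip(edges[::2], edges[1::2]))               # group consecutive columns

# Illustrative training of the candidate classifier on precomputed feature vectors.
rng = np.random.default_rng(0)
X_train = rng.random((40, 16))                              # placeholder candidate features
y_train = rng.integers(0, 2, 40)                            # 1 = crosswalk, 0 = not
clf = SVC(kernel="rbf").fit(X_train, y_train)

binary = (rng.random((60, 200)) > 0.5).astype(np.uint8)     # placeholder binarized road image
print(candidate_columns(binary))
```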

3D Face Alignment and Normalization Based on Feature Detection Using Active Shape Models : Quantitative Analysis on Aligning Process (ASMs을 이용한 특징점 추출에 기반한 3D 얼굴데이터의 정렬 및 정규화 : 정렬 과정에 대한 정량적 분석)

  • Shin, Dong-Won;Park, Sang-Jun;Ko, Jae-Pil
    • Korean Journal of Computational Design and Engineering / v.13 no.6 / pp.403-411 / 2008
  • The alignment of facial images is crucial for 2D face recognition, and the same holds for facial meshes in 3D face recognition. Most 3D face recognition methods refer to 3D alignment but do not describe their approaches in detail. In this paper, we focus on describing an automatic 3D alignment from the viewpoint of quantitative analysis. We present a framework for 3D face alignment and normalization based on feature points obtained by Active Shape Models (ASMs). The positions of the eyes and mouth make it possible to align the 3D face exactly in three-dimensional space. The rotational transform about each axis is defined with respect to the reference position. In the aligning process, the rotational transform converts input 3D faces with large pose variations to the reference frontal view. The face region is then cropped from the aligned face using a sphere centered at the nose tip of the 3D face. The cropped face is shifted and brought into a frame of specified size for normalization. Subsequently, interpolation is carried out to sample the face at equal intervals and to fill holes, and color interpolation is carried out at the same interval. The outputs are normalized 2D and 3D faces that can be used for face recognition. Finally, we carry out two sets of experiments to measure alignment errors and evaluate the performance of the suggested process.
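
The alignment step rotates the input mesh so that the feature points match a reference frontal pose, then crops a sphere around the nose tip. Below is a minimal NumPy sketch of rotation about each axis followed by the spherical crop; the crop radius and the way the angles are supplied are simplifying assumptions, not the paper's estimation procedure.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def align_and_crop(points, nose_tip, angles, radius=90.0):
    """Rotate a 3D face (N x 3) about x, y, z by the given angles (radians),
    then keep only vertices inside a sphere of `radius` around the nose tip."""
    R = rot_z(angles[2]) @ rot_y(angles[1]) @ rot_x(angles[0])
    aligned = (points - nose_tip) @ R.T
    keep = np.linalg.norm(aligned, axis=1) <= radius
    return aligned[keep]

# Illustrative use with a random point cloud standing in for a scanned mesh.
pts = np.random.default_rng(0).normal(scale=50.0, size=(1000, 3))
cropped = align_and_crop(pts, nose_tip=np.zeros(3), angles=(0.1, -0.2, 0.0))
print(cropped.shape)
```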

Face recognition using PCA and face direction information (PCA와 얼굴방향 정보를 이용한 얼굴인식)

  • Kim, Seung-Jae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.6 / pp.609-616 / 2017
  • In this paper, we propose an algorithm that achieves a more stable and higher recognition rate by using the left and right rotation information of the input image. The proposed algorithm takes a facial image from a web camera as input, reduces the image size, and normalizes brightness and color to improve the recognition rate. We apply Principal Component Analysis (PCA) to the detected candidate regions to obtain feature vectors and classify faces. In addition, to narrow the error range of the recognition rate, a data set with left and right 45° rotation information is constructed, taking the directionality of the input face image into account, and a feature vector is obtained with PCA for each direction. To obtain a stable recognition rate, each feature vector is projected into the eigenspace, and the final face is recognized by comparing Euclidean distances to each feature. The PCA-based feature vector is low-dimensional yet sufficient to represent the face, and the small amount of computation allows fast recognition. The proposed method can improve the stability, accuracy, and speed of recognition compared with other algorithms and can be used in a real-time recognition system.
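
The recognition step projects faces into a PCA eigenspace and compares Euclidean distances. Below is a minimal sketch using NumPy SVD; the image size, number of components, and gallery layout are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def fit_pca(X, k=20):
    """Fit PCA on row-vector faces X (n_samples x n_pixels); return mean and top-k basis."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def recognize(probe, gallery, labels, mean, basis):
    """Project probe and gallery into the eigenspace; return the label of the nearest gallery face."""
    p = (probe - mean) @ basis.T
    G = (gallery - mean) @ basis.T
    return labels[np.argmin(np.linalg.norm(G - p, axis=1))]

# Illustrative use with random data standing in for normalized face images.
rng = np.random.default_rng(0)
gallery = rng.random((30, 64 * 64))
labels = np.arange(30)
mean, basis = fit_pca(gallery, k=10)
print(recognize(gallery[3] + 0.01 * rng.random(64 * 64), gallery, labels, mean, basis))
```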

Detection of Face Expression Based on Deep Learning (딥러닝 기반의 얼굴영상에서 표정 검출에 관한 연구)

  • Won, Chulho;Lee, Bub-ki
    • Journal of Korea Multimedia Society / v.21 no.8 / pp.917-924 / 2018
  • Recently, studies using LBP and SVM have been carried out as image-based methods for facial emotion recognition. LBP, introduced by Ojala et al., is widely used in image recognition because of its high discrimination of objects, robustness to illumination change, and simple operation. In addition, CS-LBP (Center-Symmetric LBP), a modified form of LBP, is widely used for face recognition. In this paper, we propose a method to detect four facial expressions (neutral, happiness, surprise, and anger) using a deep neural network. The validity of the proposed method is verified in terms of accuracy. Using the existing LBP feature parameters as a baseline, it was confirmed that the method based on the deep neural network is superior to the methods using the AdaBoost and SVM classifiers.
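
The baseline features referenced are LBP and CS-LBP. Below is a minimal NumPy sketch of the center-symmetric LBP code over 3x3 neighborhoods, producing a 16-bin histogram feature; the threshold value and histogramming are assumptions, not the paper's configuration.

```python
import numpy as np

def cs_lbp(image, t=0.01):
    """Center-symmetric LBP: compare the 4 center-symmetric pixel pairs of each
    3x3 neighborhood, yielding a 4-bit code (0..15) per interior pixel."""
    img = image.astype(np.float64)
    p0, p4 = img[:-2, :-2], img[2:, 2:]       # diagonal pair
    p1, p5 = img[:-2, 1:-1], img[2:, 1:-1]    # vertical pair
    p2, p6 = img[:-2, 2:], img[2:, :-2]       # anti-diagonal pair
    p3, p7 = img[1:-1, 2:], img[1:-1, :-2]    # horizontal pair
    code = ((p0 - p4 > t).astype(np.uint8)
            | ((p1 - p5 > t).astype(np.uint8) << 1)
            | ((p2 - p6 > t).astype(np.uint8) << 2)
            | ((p3 - p7 > t).astype(np.uint8) << 3))
    return code

# Illustrative use: histogram of CS-LBP codes as a 16-bin feature vector.
img = np.random.default_rng(0).random((32, 32))
hist = np.bincount(cs_lbp(img).ravel(), minlength=16)
print(hist)
```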

The Implementation of Face Authentication System Using Real-Time Image Processing (실시간 영상처리를 이용한 얼굴 인증 시스템 구현)

  • Baek, Young-Hyun;Shin, Seong;Moon, Sung-Ryong
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.2 / pp.193-199 / 2008
  • In this paper, we propose the implementation of a face authentication system based on real-time image processing. We describe the two-step process used to implement the real-time face authentication system. In the first step, face detection, we use wavelet transform features, the LoG operator, and Hausdorff distance matching. In the second step, we describe a new dual-line principal component analysis (PCA) for real-time face recognition, which combines horizontal and vertical lines so as to capture local changes within PCA. The proposed system is affected little by video size and resolution. Simulation results confirm the effectiveness of our system and demonstrate its superiority to other conventional algorithms. Finally, the feasibility of performance evaluation and real-time processing was confirmed through the implementation of the face authentication system.
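
The dual-line PCA described combines horizontal-line and vertical-line information so that local variation is retained. Below is a rough, hedged interpretation for illustration only: PCA is applied separately to the row-major and column-major vectorizations of each face and the projections are concatenated. This is not the authors' exact formulation.

```python
import numpy as np

def fit_basis(X, k):
    """PCA basis (top-k right singular vectors) for row-vector samples X."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def dual_line_features(faces, k=10):
    """Rough interpretation of dual-line PCA: project horizontal-line (row-major)
    and vertical-line (column-major) scans separately, then concatenate."""
    H = faces.reshape(len(faces), -1)                        # row-major scan of each face
    V = faces.transpose(0, 2, 1).reshape(len(faces), -1)     # column-major scan of each face
    mh, bh = fit_basis(H, k)
    mv, bv = fit_basis(V, k)
    return np.hstack([(H - mh) @ bh.T, (V - mv) @ bv.T])

# Illustrative use with random 32x32 images standing in for detected faces.
faces = np.random.default_rng(0).random((20, 32, 32))
print(dual_line_features(faces).shape)                       # (20, 2k)
```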

An Efficient Face Region Detection for Content-based Video Summarization (내용기반 비디오 요약을 위한 효율적인 얼굴 객체 검출)

  • Kim Jong-Sung;Lee Sun-Ta;Baek Joong-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.7C / pp.675-686 / 2005
  • In this paper, we propose an efficient face region detection technique for content-based video summarization. To segment the video, shot changes are detected from the video sequence and key frames are selected from the shots; in each shot we select the one frame that has the least difference from its neighboring frames. The proposed face detection algorithm then detects face regions in the selected key frames, and the user is provided with summarized frames containing face regions, which carry important meaning in dramas or movies. Using the Bayes classification rule and the statistical characteristics of skin pixels, face regions are detected in the frames. After skin detection, we adopt a projection method to segment an image (frame) into face and non-face regions. The segmented regions are candidate face objects and include many false detections, so we design a classifier using CART to minimize false detections. From SGLD matrices, we extract textural feature values such as Inertia, Inverse Difference, and Correlation. Our experiments show that the proposed face detection algorithm performs well for key frames with complex and varying backgrounds, and our system provides the user with key frames containing face regions as video summary information.
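
Skin detection with a Bayes classification rule over pixel colors can be sketched as a likelihood-ratio test between skin and non-skin color histograms. Below is a minimal NumPy sketch; the color space, bin count, prior, and synthetic training pixels are assumptions, not the paper's statistical model.

```python
import numpy as np

def fit_skin_model(skin_pixels, nonskin_pixels, bins=32):
    """Build normalized 3D color histograms for skin and non-skin pixels (N x 3, values 0..255)."""
    ranges = [(0, 256)] * 3
    h_s, _ = np.histogramdd(skin_pixels, bins=bins, range=ranges)
    h_n, _ = np.histogramdd(nonskin_pixels, bins=bins, range=ranges)
    return h_s / h_s.sum(), h_n / h_n.sum()

def classify_pixels(pixels, h_s, h_n, prior=0.4, bins=32):
    """Bayes rule: skin if P(color|skin) * P(skin) > P(color|non-skin) * P(non-skin)."""
    idx = np.clip((pixels // (256 // bins)).astype(int), 0, bins - 1)
    p_s = h_s[idx[:, 0], idx[:, 1], idx[:, 2]]
    p_n = h_n[idx[:, 0], idx[:, 1], idx[:, 2]]
    return p_s * prior > p_n * (1 - prior)

# Illustrative use with synthetic colors standing in for labeled training pixels.
rng = np.random.default_rng(0)
skin = rng.normal([200, 140, 120], 15, (500, 3)).clip(0, 255)
other = rng.uniform(0, 255, (500, 3))
h_s, h_n = fit_skin_model(skin, other)
print(classify_pixels(np.array([[205, 138, 118]]), h_s, h_n))
```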