• Title/Summary/Keyword: facial component features


A Lip Detection Algorithm Using Color Clustering (색상 군집화를 이용한 입술탐지 알고리즘)

  • Jeong, Jongmyeon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.3
    • /
    • pp.37-43
    • /
    • 2014
  • In this paper, we propose a robust lip detection algorithm using color clustering. First, we adopt the AdaBoost algorithm to extract the facial region and convert it into the Lab color space. Because the a and b components of the Lab color space are known to express lip color and its complementary color well, we use the a and b components as the features for color clustering. Nearest-neighbour clustering is applied to separate the skin region from the facial region, and K-means color clustering is then applied to extract the lip-candidate region. Finally, geometric characteristics are used to extract the final lip region. Experimental results show that the proposed algorithm detects the lip region robustly.
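As a rough illustration of the clustering step described above (not the authors' implementation), the following Python/OpenCV sketch converts an already-cropped face region to the Lab color space, clusters its a/b chroma values with K-means, and keeps the cluster with the highest a value as a lip candidate; the cluster count and the lip-selection rule are assumptions.

```python
import cv2
import numpy as np

def lip_candidate_mask(face_bgr, k=3):
    """Cluster the a/b chroma channels of a cropped face and return a binary
    mask of the reddest cluster as a lip candidate (illustrative only)."""
    lab = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2LAB)
    a = lab[:, :, 1].astype(np.float32)
    b = lab[:, :, 2].astype(np.float32)
    samples = np.stack([a.ravel(), b.ravel()], axis=1)

    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)

    # Lips tend to have the highest a (red-green) value; pick that cluster.
    lip_cluster = int(np.argmax(centers[:, 0]))
    mask = (labels.ravel() == lip_cluster).astype(np.uint8).reshape(a.shape) * 255
    return mask
```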

Curvature and Histogram of oriented Gradients based 3D Face Recognition using Linear Discriminant Analysis

  • Lee, Yeunghak
    • Journal of Multimedia Information System
    • /
    • v.2 no.1
    • /
    • pp.171-178
    • /
    • 2015
  • This article describes a 3-dimensional (3D) face recognition system using histograms of oriented gradients (HOG) based on face curvature. The surface curvatures of the face contain the most important personal feature information. In this paper, 3D face images are recognized by facial component: cheek, eyes, mouth, and nose. In the first step of the proposed approach, the face curvatures, which represent the facial features of the 3D face images, are computed after normalization using the singular value decomposition (SVD). The Fisherface method is then applied to each component curvature face; it is adopted because it preserves the surface attributes of the face curvature even while reducing the image dimension. The histogram of oriented gradients (HOG) descriptor is a state-of-the-art feature that has been shown to significantly outperform existing feature sets for several object detection and recognition tasks. In the last step, linear discriminant analysis is applied to each component. The experimental results showed that the proposed approach leads to a higher accuracy rate than other methods.
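The per-component HOG-plus-discriminant step can be approximated with standard libraries. The sketch below is a rough analogue, with assumed HOG parameters rather than the paper's settings: it extracts HOG descriptors from a set of component images (cheek, eyes, mouth, or nose crops) and projects them with LDA.

```python
import numpy as np
from skimage.feature import hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def hog_lda_features(component_images, labels):
    """Extract HOG descriptors from per-component face images (2D grayscale
    arrays of equal size) and project them with LDA; illustrative settings."""
    descriptors = np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in component_images
    ])
    lda = LinearDiscriminantAnalysis()
    projected = lda.fit_transform(descriptors, labels)
    return lda, projected
```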

Evaluation of Histograms Local Features and Dimensionality Reduction for 3D Face Verification

  • Ammar, Chouchane;Mebarka, Belahcene;Abdelmalik, Ouamane;Salah, Bourennane
    • Journal of Information Processing Systems
    • /
    • v.12 no.3
    • /
    • pp.468-488
    • /
    • 2016
  • The paper proposes a novel framework for 3D face verification using dimensionality reduction based on highly distinctive local features in the presence of illumination and expression variations. Histograms of efficient local descriptors are used to represent the facial images distinctively. For this purpose, several local descriptors are evaluated: Local Binary Patterns (LBP), Three-Patch Local Binary Patterns (TPLBP), Four-Patch Local Binary Patterns (FPLBP), Binarized Statistical Image Features (BSIF) and Local Phase Quantization (LPQ). Furthermore, experiments combining the local descriptors at the feature level by simple histogram concatenation are provided. The performance of the proposed approach is evaluated with different dimensionality reduction algorithms: Principal Component Analysis (PCA), Orthogonal Locality Preserving Projection (OLPP), and combined PCA+EFM (Enhanced Fisher linear discriminant Model). Finally, a multi-class Support Vector Machine (SVM) is used as the classifier to carry out verification between impostors and clients. The proposed method has been tested on the CASIA-3D face database, and the experimental results show that it achieves high verification performance.
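A minimal sketch of the histogram-based representation and the verification stage, assuming plain uniform LBP on a grayscale image and illustrative PCA/SVM parameters; the other descriptors (TPLBP, FPLBP, BSIF, LPQ) and the EFM/OLPP projections are not reproduced.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def lbp_block_histogram(img, P=8, R=1, grid=(8, 8)):
    """Concatenated uniform-LBP histograms over a grid of blocks."""
    lbp = local_binary_pattern(img, P, R, method="uniform")
    n_bins = P + 2
    bh, bw = lbp.shape[0] // grid[0], lbp.shape[1] // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            feats.append(hist / (hist.sum() + 1e-8))
    return np.concatenate(feats)

# Stand-in for the dimensionality-reduction + SVM verification stage
# (hypothetical parameters; X_train would be stacked histograms, y_train labels).
verifier = make_pipeline(PCA(n_components=100), SVC(kernel="linear"))
# verifier.fit(X_train, y_train)
```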

Welfare Interface using Multiple Facial Features Tracking (다중 얼굴 특징 추적을 이용한 복지형 인터페이스)

  • Ju, Jin-Sun;Shin, Yun-Hee;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.1
    • /
    • pp.75-83
    • /
    • 2008
  • We propose a welfare interface using multiple facial feature tracking, which can efficiently implement various mouse operations. The proposed system consists of five modules: face detection, eye detection, mouth detection, facial feature tracking, and mouse control. The facial region is first obtained using a skin-color model and connected-component analysis (CCA). Thereafter, the eye regions are localized using a neural-network (NN)-based texture classifier that discriminates the facial region into eye and non-eye classes, and the mouth region is localized using an edge detector. Once the eye and mouth regions are localized, they are continuously tracked by the mean-shift algorithm and template matching, respectively. Based on the tracking results, mouse operations such as movement and click are implemented. To assess the validity of the proposed system, it was applied to a web-browser interface and tested on a group of 25 users. The results show that the system has an accuracy of 99% and processes more than 21 frames/sec on a PC for 320×240 input images, so it can provide user-friendly and convenient access to a computer in real time.
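The tracking step can be illustrated with OpenCV's built-in mean-shift. The sketch below tracks a feature window between two grayscale frames using a back-projected intensity histogram; it is only a stand-in for the paper's tracker and omits the NN eye classifier, template matching, and mouse control.

```python
import cv2
import numpy as np

def track_region_meanshift(prev_gray, next_gray, window):
    """Mean-shift tracking of a facial-feature window (x, y, w, h) between
    two uint8 grayscale frames; illustrative, not the paper's tracker."""
    x, y, w, h = window
    roi = prev_gray[y:y + h, x:x + w]
    hist = cv2.calcHist([roi], [0], None, [64], [0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    back_proj = cv2.calcBackProject([next_gray], [0], hist, [0, 256], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, new_window = cv2.meanShift(back_proj, window, criteria)
    return new_window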

Optimized patch feature extraction using CNN for emotion recognition (감정 인식을 위해 CNN을 사용한 최적화된 패치 특징 추출)

  • Irfan Haider;Aera kim;Guee-Sang Lee;Soo-Hyung Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.510-512
    • /
    • 2023
  • To enhance a model's capability for detecting facial expressions, this research proposes a pipeline that makes use of a GradCAM component. The pipeline consists of a patching module and a pseudo-labeling module. The patching module takes the original face image and divides it into four equal parts, each of which is input to a 2D convolutional layer to produce a feature vector. In the pseudo-labeling module, each image segment is assigned a weight token using GradCAM, and this token is merged with the feature vector using principal component analysis. A convolutional neural network based on transfer learning is then utilized to extract the deep features. The technique was applied to the public MMI dataset and achieved a validation accuracy of 96.06%, demonstrating the effectiveness of our method.
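A minimal PyTorch sketch of the patching module as described: the face image is split into four equal quadrants and each is mapped to a feature vector by a shared 2D convolution. GradCAM weighting, pseudo-labeling, and the transfer-learning backbone are omitted, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class PatchFeatureExtractor(nn.Module):
    """Split a face image into four quadrants and produce one feature
    vector per quadrant with a shared convolution (illustrative sizes)."""

    def __init__(self, in_channels=3, feat_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, feat_dim, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):                      # x: (B, C, H, W), H and W even
        _, _, h, w = x.shape
        patches = [x[:, :, :h // 2, :w // 2], x[:, :, :h // 2, w // 2:],
                   x[:, :, h // 2:, :w // 2], x[:, :, h // 2:, w // 2:]]
        feats = [self.pool(torch.relu(self.conv(p))).flatten(1) for p in patches]
        return torch.stack(feats, dim=1)       # (B, 4, feat_dim)
```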

The effect of CR-CO discrepancy on cephalometric measurements in Class III malocclusion patients (골격성 III급 부정교합자에서 중심위 변위가 두부 방사선 계측치에 미치는 영향)

  • Park, Yang-Soo;Kim, Jong-Chul;Hwang, Hyeon-Shik
    • The korean journal of orthodontics
    • /
    • v.26 no.3
    • /
    • pp.255-265
    • /
    • 1996
  • The purpose of this study was to investigate whether there is a significant difference between cephalometric measurements of mandibular position derived from a centric occlusion (CO) tracing and those from a converted centric relation (CR) tracing in Class III malocclusion. The sample consisted of 25 Class III malocclusion and 25 normal occlusion subjects who had had no orthodontic treatment. The records included a lateral cephalogram in centric occlusion, centric relation and centric occlusion bite registrations, and diagnostic casts mounted on a SAM II articulator in CR. The amount of CR-CO discrepancy of the condyle was recorded using a Mandibular Position Indicator (MPI 200®, Great Lakes Orthodontics, USA). The conversion of the CO cephalogram to CR using the MPI readings was performed on the conversion worksheet. Measures of mandibular position were chosen for the purpose of this study, and the differences between CO and CR cephalometric measurements in the normal occlusion and Class III malocclusion groups were compared. The results were as follows: 1. With respect to the CR-CO discrepancy of the condyle, the condyle was displaced posteriorly and inferiorly when the teeth were in centric occlusion. The horizontal component (ΔX) in the Class III malocclusion group was greater than the vertical component (ΔZ) and also greater than the horizontal component (ΔX) in the normal occlusion group. There was no statistically significant correlation between the MPI measurements and the normal occlusion and Class III malocclusion groups. 2. In the comparison of cephalometric measurements within each group, the normal occlusion group showed significant differences in measurements such as ANB, facial angle, facial convexity and ODI. The Class III malocclusion group showed significant differences in measurements such as ANB, facial angle, facial convexity, ODI, SNB, APDI, and L1-FP, with greater significance than in the normal occlusion group. 3. The cephalometric measurements differed significantly between CO and CR, but there were no differences between the normal occlusion and Class III malocclusion groups. The results of this study suggest that if the discrepancies are greater than the amount of normal displacement from a clinically captured centric relation, centric relation should be considered as the starting point for proper diagnosis and treatment planning.
Face Recognition using Modified Local Directional Pattern Image (Modified Local Directional Pattern 영상을 이용한 얼굴인식)

  • Kim, Dong-Ju;Lee, Sang-Heon;Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.3
    • /
    • pp.205-208
    • /
    • 2013
  • Binary pattern transforms have generally been used in the fields of face recognition and facial expression analysis because they are robust to illumination. This paper proposes an illumination-robust face recognition system that combines an MLDP, which improves the texture component of the LDP, with the 2D-PCA algorithm. Unlike conventional approaches in which binary pattern transforms such as LBP and LDP are used to extract histogram features, the proposed method uses the MLDP image directly for feature extraction by 2D-PCA. The performance of the proposed method was evaluated against algorithms such as PCA, 2D-PCA and Gabor wavelet-based LBP on the Yale B and CMU-PIE databases, which were constructed under varying lighting conditions. The experimental results confirm that the proposed method achieves the best recognition accuracy.
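A compact sketch of the 2D-PCA projection applied to a stack of same-size (e.g. MLDP-transformed) images; the MLDP transform itself is not shown and the number of components is an assumption.

```python
import numpy as np

def two_d_pca(images, n_components=10):
    """Minimal 2D-PCA: build the image covariance matrix from a stack of
    same-size images and return the projection basis and projected features.
    Illustrative only, not the paper's implementation."""
    X = np.asarray(images, dtype=np.float64)          # (N, H, W)
    mean = X.mean(axis=0)
    centered = X - mean
    # Image covariance matrix: average of A_c^T A_c over the training set.
    G = np.einsum('nij,nik->jk', centered, centered) / len(X)
    eigvals, eigvecs = np.linalg.eigh(G)
    basis = eigvecs[:, ::-1][:, :n_components]        # top eigenvectors (W, d)
    features = X @ basis                              # (N, H, d)
    return basis, features
```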

Face recognition using Wavelets and Fuzzy C-Means clustering (웨이블렛과 퍼지 C-Means 클러스터링을 이용한 얼굴 인식)

  • 윤창용;박정호;박민용
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.583-586
    • /
    • 1999
  • In this paper, the wavelet transform is applied to a 256×256 input color image, decomposing it into low-pass and high-pass components. Since the high-pass band contains components in three directions, edges are detected by combining the three parts. After locating the face using the histogram of the edge component, the face region in the low-pass band is cropped. Because RGB color images are sensitive to luminance, the low-pass component image is normalized and the facial region is detected using face-color information. As the wavelet transform decomposes the detected face region into three layers, the dimension of the input image is reduced. In this paper, we use 3,000 images of 10 persons, and the KL transform is applied in order to classify face vectors effectively. The FCM (Fuzzy C-Means) algorithm groups face vectors with similar features into the same cluster; the number of clusters equals the number of persons, and the mean vector of each cluster is used as a codebook. We verify the performance of the proposed algorithm by experiments, computing the recognition rates of training and test images using the correlation coefficient and the Euclidean distance.
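The wavelet edge step can be sketched with PyWavelets: a one-level 2D decomposition whose three high-pass sub-bands are combined into an edge map. The threshold is an assumption, and the later KL-transform and FCM stages are not shown.

```python
import numpy as np
import pywt

def edge_map_from_wavelet(gray_img, wavelet='haar'):
    """One-level 2D wavelet decomposition; the edge map combines the
    horizontal, vertical, and diagonal high-pass sub-bands (illustrative
    threshold, not the paper's exact procedure)."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_img.astype(np.float64), wavelet)
    edges = np.sqrt(cH ** 2 + cV ** 2 + cD ** 2)
    return cA, edges > edges.mean() + edges.std()
```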
Face Recognition using 2D-PCA and Image Partition (2D - PCA와 영상분할을 이용한 얼굴인식)

  • Lee, Hyeon Gu;Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.2
    • /
    • pp.31-40
    • /
    • 2012
  • Face recognition refers to the process of identifying individuals based on their facial features. It has recently become one of the most popular research areas in computer vision, machine learning, and pattern recognition because it spans numerous consumer applications, such as access control, surveillance, security, credit-card verification, and criminal identification. However, illumination variation on the face generally causes performance degradation of face recognition systems in practical environments. Thus, this paper proposes a novel face recognition system using a fusion approach based on the local binary pattern and two-dimensional principal component analysis. To minimize illumination effects, the face image undergoes the local binary pattern operation, and the resulting image is divided into two sub-images. The two-dimensional principal component analysis algorithm is then applied separately to each sub-image. The individual scores obtained from the two sub-images are integrated using a weighted-summation rule, and the fused score is used to classify the unknown user. The performance of the proposed system was evaluated on the Yale B and CMU-PIE databases, and the proposed method shows better recognition results than existing face recognition techniques.
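A sketch of the score-fusion step, under the assumption that the two per-half 2D-PCA projectors and the corresponding gallery features already exist (they are passed in here as hypothetical callables and arrays); only the LBP transform, splitting, and weighted-sum fusion are shown.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def fused_match_scores(probe_gray, gallery_top, gallery_bottom,
                       project_top, project_bottom, w=0.5):
    """LBP-transform the probe, split it into upper/lower halves, project
    each half with its own (pre-trained) projector, and fuse the distances
    to the gallery with a weighted sum; smaller score = better match."""
    lbp = local_binary_pattern(probe_gray, 8, 1, method='uniform')
    h = lbp.shape[0] // 2
    f_top = project_top(lbp[:h])          # hypothetical projector callables
    f_bottom = project_bottom(lbp[h:])
    d_top = np.linalg.norm(gallery_top - f_top, axis=(1, 2))
    d_bottom = np.linalg.norm(gallery_bottom - f_bottom, axis=(1, 2))
    return w * d_top + (1.0 - w) * d_bottom
```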

Real Time Face Detection and Recognition using Rectangular Feature based Classifier and Class Matching Algorithm (사각형 특징 기반 분류기와 클래스 매칭을 이용한 실시간 얼굴 검출 및 인식)

  • Kim, Jong-Min;Kang, Myung-A
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.1
    • /
    • pp.19-26
    • /
    • 2010
  • This paper proposes a classifier based on rectangular features to detect faces in real time. The goal is a strong detection algorithm that satisfies both computational efficiency and detection performance. The proposed algorithm consists of three stages: feature creation, classifier learning, and real-time facial region detection. Feature creation builds a feature set from the proposed five rectangular features and computes the feature values efficiently using summed-area tables (SAT). Classifier learning creates classifiers hierarchically using the AdaBoost algorithm and obtains excellent detection performance by applying important face patterns repeatedly at the next level. Real-time facial region detection finds facial regions rapidly and efficiently with the classifier based on the created rectangular features. In addition, the recognition rate was improved by using the detected face region as the input image and by applying PCA and KNN algorithms with a class-to-class rather than the existing point-to-point matching technique.
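The summed-area-table evaluation of rectangular features can be sketched as below; this is the standard integral-image construction, shown as an illustration of the feature-evaluation step rather than the paper's specific five features or the AdaBoost cascade.

```python
import numpy as np

def summed_area_table(gray):
    """Summed-area table (integral image): any rectangle sum can then be
    evaluated with at most four lookups."""
    return gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(sat, x, y, w, h):
    """Sum of pixel values in the w x h rectangle with top-left corner (x, y)."""
    total = sat[y + h - 1, x + w - 1]
    if x > 0:
        total -= sat[y + h - 1, x - 1]
    if y > 0:
        total -= sat[y - 1, x + w - 1]
    if x > 0 and y > 0:
        total += sat[y - 1, x - 1]
    return int(total)
```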