• Title/Summary/Keyword: Facial images


Face Recognition Using a Facial Recognition System

  • Almurayziq, Tariq S;Alazani, Abdullah
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.9
    • /
    • pp.280-286
    • /
    • 2022
  • A facial recognition system is a biometric technique. It is simpler to apply, and its working range is broader, than fingerprints, iris scans, signatures, etc. The system relies on two technologies: face detection and face recognition. This study aims to develop a facial recognition system that recognizes people's faces. Such a system maps facial characteristics from photos or videos and compares that information against a given facial database to find a match, which helps identify a face. The developed system records several images, processes them, checks for a match in the database, and returns the result. It can recognize multiple faces in live recordings.
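
The pipeline described in this abstract (detect faces, compare against a stored database, return a match) can be sketched with off-the-shelf components. The snippet below is a minimal illustration, not the authors' implementation; it assumes OpenCV with the contrib modules (for the LBPH recognizer), a hypothetical `db/<person_name>/*.png` directory of enrolled faces, and an arbitrary match threshold.

```python
import os
import cv2
import numpy as np

# Haar cascade face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray):
    """Return bounding boxes (x, y, w, h) of faces in a grayscale image."""
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Enroll: read known faces from a hypothetical 'db/<person_name>/' layout.
faces, labels, names = [], [], []
for label, name in enumerate(sorted(os.listdir("db"))):
    names.append(name)
    for fname in os.listdir(os.path.join("db", name)):
        gray = cv2.imread(os.path.join("db", name, fname), cv2.IMREAD_GRAYSCALE)
        for (x, y, w, h) in detect_faces(gray):
            faces.append(cv2.resize(gray[y:y+h, x:x+w], (100, 100)))
            labels.append(label)

# LBPH face recognizer from opencv-contrib-python.
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, np.array(labels))

# Query: detect every face in a new image and look for a database match.
frame = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
for (x, y, w, h) in detect_faces(frame):
    label, distance = recognizer.predict(cv2.resize(frame[y:y+h, x:x+w], (100, 100)))
    print(names[label] if distance < 80 else "unknown", distance)  # 80 is illustrative
```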

Glasses Removal from Facial Images with Recursive PCA Reconstruction (반복적인 PCA 재구성을 이용한 얼굴 영상에서의 안경 제거)

  • 오유화;안상철;김형곤;김익재;이성환
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.3
    • /
    • pp.35-49
    • /
    • 2004
  • This paper proposes a new method for removing glasses from a color frontal facial image to generate a gray, glassless facial image. The proposed method is based on recursive PCA reconstruction. To generate glassless images, the region occluded by the glasses must be found, and a good reconstructed image to compensate it with must be obtained. Recursive PCA reconstruction provides both simultaneously and finally produces glassless facial images. The paper demonstrates the effectiveness of the proposed method with experimental results. We believe this method can, with some modification, be applied to removing other types of occlusion besides glasses and to enhancing the performance of a face recognition system.
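
The core loop of recursive PCA reconstruction can be illustrated in a few lines: reconstruct the input from a PCA basis trained on glassless faces, mark pixels with large reconstruction error as the occluded region, replace them with the reconstruction, and repeat. The sketch below is a schematic of that idea under stated assumptions (a pre-trained PCA model, flattened normalized images, a hand-picked error threshold), not the paper's exact algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA

def remove_glasses(face, pca: PCA, n_iter=10, thresh=0.1):
    """face: flattened, normalized face image (1-D float array).
    pca: PCA model fitted on flattened glassless training faces.
    Returns the final glassless reconstruction."""
    x = face.copy()
    for _ in range(n_iter):
        recon = pca.inverse_transform(pca.transform(x[None, :]))[0]
        occluded = np.abs(x - recon) > thresh   # crude estimate of the glasses region
        x[occluded] = recon[occluded]           # compensate the occluded pixels
    return pca.inverse_transform(pca.transform(x[None, :]))[0]

# Usage sketch (training matrix rows = flattened glassless faces):
# pca = PCA(n_components=50).fit(glassless_training_matrix)
# glassless = remove_glasses(input_face_vector, pca)
```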

Robust Facial Expression Recognition using PCA Representation (PCA 표상을 이용한 강인한 얼굴 표정 인식)

  • Shin Young-Suk
    • Korean Journal of Cognitive Science
    • /
    • v.16 no.4
    • /
    • pp.323-331
    • /
    • 2005
  • This paper proposes an improved system for recognizing facial expressions over various internal states that is illumination-invariant and does not require a detectable cue such as a neutral expression. As a preprocessing step to extract facial expression information, whitening was applied. Whitening sets the mean of the images to zero and equalizes the variances to unit variance, which removes much of the variability due to lighting. After the whitening step, we used facial expression information based on a principal component analysis (PCA) representation that excludes the first principal component. It is therefore possible to extract features from the facial expression images without a detectable cue of a neutral expression. The experimental results show that varied and natural facial expression recognition can also be implemented, because recognition is based on a dimensional model of internal states and uses images selected randomly from facial expression images corresponding to 83 internal emotional states.
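
A minimal sketch of the preprocessing described here: zero-mean, unit-variance whitening of the image set, followed by a PCA representation that drops the first principal component (which mainly captures lighting). The per-pixel normalization and the component count are assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def expression_features(images, n_components=30):
    """images: (n_samples, n_pixels) flattened facial expression images.
    Returns PCA coefficients with the first principal component removed."""
    # Whitening step: zero mean and unit variance (per pixel, one reading of the paper).
    mean = images.mean(axis=0)
    std = images.std(axis=0) + 1e-8
    whitened = (images - mean) / std

    # PCA representation; discard the first component, which mainly
    # encodes global lighting variation rather than expression.
    pca = PCA(n_components=n_components).fit(whitened)
    coeffs = pca.transform(whitened)
    return coeffs[:, 1:]
```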

Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems
    • /
    • v.9 no.1
    • /
    • pp.173-188
    • /
    • 2013
  • In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to these areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature based cascade classifiers, and these region-based features are stored into separate image files as a preprocessing step. Then, LBP is applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs.-rest SVM, a popular multi-class classification method, is employed with a Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
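
The stages of this pipeline (Haar-cascade region detection, per-region LBP histograms, concatenation, and a one-vs.-rest RBF SVM) map directly onto OpenCV, scikit-image, and scikit-learn. The sketch below is a schematic reimplementation under that assumption; the cascade files, LBP parameters, and region sizes are illustrative, not the authors' configuration.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Haar cascades shipped with OpenCV for informative facial regions.
cascades = {
    "eyes": cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml"),
    "mouth": cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml"),
}

def lbp_histogram(region, P=8, R=1):
    """Uniform LBP histogram of one detected facial region."""
    lbp = local_binary_pattern(region, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def face_descriptor(gray):
    """Concatenate region-wise LBP histograms into one feature vector
    (assumes each cascade fires at least once)."""
    parts = []
    for name, cascade in cascades.items():
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        x, y, w, h = boxes[0]                     # take the first detection
        parts.append(lbp_histogram(cv2.resize(gray[y:y+h, x:x+w], (64, 64))))
    return np.concatenate(parts)

# One-vs.-rest SVM with an RBF kernel for expression classification.
# X = np.array([face_descriptor(img) for img in training_images]); y = labels
# clf = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)
```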

Extraction and Implementation of MPEG-4 Facial Animation Parameter for Web Application (웹 응용을 위한 MPEG-4 얼굴 애니메이션 파라미터 추출 및 구현)

  • 박경숙;허영남;김응곤
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.8
    • /
    • pp.1310-1318
    • /
    • 2002
  • In this study, we developed a 3D facial modeler and animator that does not rely on existing methods based on a 3D scanner or camera. Without expensive image-input equipment, 3D models can be created easily using only front and side images. The system animates 3D facial models through an animation server on the WWW and is independent of specific platforms and software. It was implemented using the Java 3D API. The facial modeler detects MPEG-4 FDP (Facial Definition Parameter) feature points from the 2D input images and creates a 3D facial model by modifying a generic facial model with those points. The animator animates and renders the 3D facial model according to MPEG-4 FAP (Facial Animation Parameters). This system can be used to generate an avatar on the WWW.
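
The original system was built with the Java 3D API; purely as an illustration of the modeling step (deforming a generic face mesh so that it matches the detected FDP feature points), the following Python sketch uses a Gaussian radial-basis weighting. That weighting scheme is an assumption for illustration, not the paper's formulation.

```python
import numpy as np

def deform_generic_model(vertices, generic_fdp, detected_fdp, sigma=0.05):
    """Move every vertex of a generic face mesh by a distance-weighted blend of
    the displacements between its FDP feature points and the detected ones.

    vertices:     (V, 3) generic mesh vertices
    generic_fdp:  (K, 3) FDP feature-point positions on the generic mesh
    detected_fdp: (K, 3) corresponding positions estimated from the 2D images
    """
    offsets = detected_fdp - generic_fdp                     # (K, 3)
    # Gaussian radial weights between each vertex and each feature point.
    d2 = ((vertices[:, None, :] - generic_fdp[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))                       # (V, K)
    w /= w.sum(axis=1, keepdims=True) + 1e-8
    return vertices + w @ offsets
```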

Face Hallucination based on Example-Learning (예제학습 방법에 기반한 저해상도 얼굴 영상 복원)

  • Lee, Jun-Tae;Kim, Jae-Hyup;Moon, Young-Shik
    • Proceedings of the KIEE Conference
    • /
    • 2008.10b
    • /
    • pp.292-293
    • /
    • 2008
  • In this paper, we propose a face hallucination method based on example learning. The traditional example-learning approach requires alignment of the face images. In the proposed method, facial images are segmented into patches, and weights are computed to represent the input low-resolution facial image as a weighted sum of low-resolution example images. High-resolution facial images are hallucinated by combining the weight vectors with the corresponding high-resolution patches in the training set. Experimental results show that the proposed method produces more reliable face hallucination results than the traditional example-learning approach.
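
A compact way to read the method: for each low-resolution patch, solve a least-squares problem for the weights that express it as a combination of the low-resolution example patches, then apply the same weights to the corresponding high-resolution patches. The sketch below omits the patch segmentation and blending details and is an illustration under those simplifications.

```python
import numpy as np

def hallucinate_patch(lr_patch, lr_examples, hr_examples):
    """lr_patch:    (d,)   flattened low-resolution input patch
    lr_examples: (n, d) flattened low-resolution training patches
    hr_examples: (n, D) corresponding high-resolution training patches
    Returns the hallucinated high-resolution patch of shape (D,)."""
    # Weights that best represent the input as a weighted sum of LR examples.
    w, *_ = np.linalg.lstsq(lr_examples.T, lr_patch, rcond=None)
    # Reuse the same weights on the corresponding HR examples.
    return hr_examples.T @ w
```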

Comparison of 64 Channel 3 Dimensional Volume CT with Conventional 3D CT in the Diagnosis and Treatment of Facial Bone Fractures (얼굴뼈 골절의 진단과 치료에 64채널 3D VCT와 Conventional 3D CT의 비교)

  • Jung, Jong Myung;Kim, Jong Whan;Hong, In Pyo;Choi, Chi Hoon
    • Archives of Plastic Surgery
    • /
    • v.34 no.5
    • /
    • pp.605-610
    • /
    • 2007
  • Purpose: Facial trauma is increasing along with the growing popularity of sports and increasing exposure to crime and traffic accidents. Compared to the 3D CT of the 1990s, the latest CT has improved significantly, resulting in higher diagnostic accuracy. The objective of this study is to compare 64-channel 3-dimensional volume CT (3D VCT) with conventional 3D CT in the diagnosis and treatment of facial bone fractures. Methods: 45 patients with facial trauma were examined by 3D VCT from Jan. 2006 to Feb. 2007. The 64-channel 3D VCT, which consists of 64 detectors, produces axial images in 0.625 mm slices and scans 175 mm per second. These images are transformed into 3-dimensional images using the Rapidia 2.8 software. The axial images are reconstructed into 3-dimensional images by the volume rendering method, and into coronal or sagittal images by the multiplanar reformatting method. Results: In contrast to previous 3D CT, which formulates 3D images from axial images of 1-2 mm, 64-channel 3D VCT takes 0.625 mm thin axial slices and obtains full images without a definite step-ladder appearance. 64-channel 3D VCT is effective in diagnosing thin linear bone fractures and the depth and degree of fracture deviation. Conclusion: In terms of cost and speed, 3D VCT is superior to conventional 3D CT. Owing to its ability to reconstruct full images regardless of direction, with 2 times the resolving power and 4 times the speed of previous 3D CT, 3D VCT allows accurate evaluation of the exact site and deviation of fine fractures.

Sasang Constitution Classification using Convolutional Neural Network on Facial Images (콘볼루션 신경망 기반의 안면영상을 이용한 사상체질 분류)

  • Ahn, Ilkoo;Kim, Sang-Hyuk;Jeong, Kyoungsik;Kim, Hoseok;Lee, Siwoo
    • Journal of Sasang Constitutional Medicine
    • /
    • v.34 no.3
    • /
    • pp.31-40
    • /
    • 2022
  • Objectives Sasang constitutional medicine is a traditional Korean medicine that classifies humans into four constitutions, taking into account individual differences in physical, psychological, and physiological characteristics. In this paper, we propose a method to classify Taeeum person (TE) and Non-Taeeum person (NTE), Soeum person (SE) and Non-Soeum person (NSE), and Soyang person (SY) and Non-Soyang person (NSY) using a convolutional neural network with facial images only. Methods Based on the VGG16 convolutional neural network architecture, transfer learning is carried out on the facial images of 3738 subjects to classify TE and NTE, SE and NSE, and SY and NSY. Data augmentation techniques are used to increase classification performance. Results The classification performance for TE and NTE, SE and NSE, and SY and NSY was 77.24%, 85.17%, and 80.18% by F1 score and 80.02%, 85.96%, and 72.76% by precision-recall AUC (area under the precision-recall curve), respectively. Conclusions Soeum persons were found to have the most distinct facial features, as this constitution had the best classification performance, followed by Taeeum and Soyang. The experimental results show that it is possible to classify constitutions with facial images alone. Performance is expected to increase with additional data such as BMI or a personality questionnaire.
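
A minimal sketch of the training setup described above (VGG16 backbone, transfer learning, data augmentation, binary constitution-vs-rest output), written with the Keras API. The image size, augmentation settings, and classifier head are assumptions for illustration, not the paper's exact configuration.

```python
import tensorflow as tf

def build_constitution_classifier(img_size=224):
    """Binary classifier (e.g. Taeeum vs. non-Taeeum) on facial images,
    built by transfer learning from an ImageNet-pretrained VGG16."""
    base = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=(img_size, img_size, 3))
    base.trainable = False  # freeze the convolutional backbone

    augment = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.05),
        tf.keras.layers.RandomZoom(0.1),
    ])

    inputs = tf.keras.Input(shape=(img_size, img_size, 3))
    x = augment(inputs)
    x = tf.keras.applications.vgg16.preprocess_input(x)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(curve="PR")])
    return model

# model = build_constitution_classifier()
# model.fit(train_dataset, validation_data=val_dataset, epochs=10)
```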

Sasang Constitution Detection Based on Facial Feature Analysis Using Explainable Artificial Intelligence (설명가능한 인공지능을 활용한 안면 특징 분석 기반 사상체질 검출)

  • Jeongkyun Kim;Ilkoo Ahn;Siwoo Lee
    • Journal of Sasang Constitutional Medicine
    • /
    • v.36 no.2
    • /
    • pp.39-48
    • /
    • 2024
  • Objectives The aim was to develop a method for detecting Sasang constitution based on the ratios of facial landmarks and to provide an objective and reliable tool for Sasang constitution classification. Methods Facial images, KS-15 scores, and certainty scores were collected from subjects identified by the Korean Medicine Data Center. Facial landmarks were detected, yielding 2279 facial ratio features. Tree-based models were trained to classify Sasang constitution, and Shapley Additive Explanations (SHAP) analysis was employed to identify important facial features. Additionally, Body Mass Index (BMI) and a personality questionnaire were incorporated as supplementary information to enhance model performance. Results Using the tree-based models, the accuracy for classifying the Taeeum, Soeum, and Soyang constitutions was 81.90%, 90.49%, and 81.90% respectively. SHAP analysis revealed important facial features, while the inclusion of BMI and the personality questionnaire improved model performance. This demonstrates that facial-ratio-based Sasang constitution analysis yields effective and accurate classification results. Conclusions Facial-ratio-based Sasang constitution analysis provides rapid and objective results compared to traditional methods. This approach holds promise for enhancing personalized medicine in Korean traditional medicine.
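
The workflow (a tree-based classifier on facial-ratio features, then SHAP to surface the most important ratios) can be sketched as below, assuming a feature matrix of facial ratios with optional BMI/questionnaire columns appended. The specific model and settings are placeholders, not the study's configuration.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def train_and_explain(X, y):
    """X: (n_samples, n_features) facial-ratio features, optionally with BMI
    and questionnaire scores appended; y: binary constitution labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("accuracy:", model.score(X_te, y_te))

    # SHAP values for a tree-based model: which ratios drive the prediction.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_te)
    importance = np.abs(shap_values).mean(axis=0)   # mean |SHAP| per feature
    return model, importance
```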

Facial Regions Detection Using the Color and Shape Information in Color Still Images (컬러 정지 영상에서 색상과 모양 정보를 이용한 얼굴 영역 검출)

  • 김영길;한재혁;안재형
    • Journal of Korea Multimedia Society
    • /
    • v.4 no.1
    • /
    • pp.67-74
    • /
    • 2001
  • In this paper, we propose a face detection algorithm using color and shape information in color still images. The proposed algorithm operates only on the chrominance components (Cb and Cr) of the YCbCr color space in order to reduce the effects of varying lighting conditions. The input image is segmented into pixels with skin-tone color, and the segmented image then undergoes morphological filtering and geometric correction to eliminate noise and simplify the segmented regions into facial candidate regions. Multiple facial regions in the input image can be isolated by connected-component labeling. Moreover, tilted facial regions can be detected by extracting second-moment-based ellipse features.
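
The detection steps (skin-tone thresholding in the Cb/Cr plane, morphological cleanup, connected-component labeling, and an ellipse orientation from second moments) translate almost line for line into OpenCV. The sketch below uses commonly cited skin-tone ranges and an arbitrary area threshold, not the paper's values.

```python
import cv2
import numpy as np

def detect_face_regions(bgr):
    """Return (cx, cy, angle) for each candidate face region in a BGR image."""
    # Work only on the chrominance channels to reduce lighting sensitivity.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Widely used Cr/Cb skin-tone ranges (illustrative, not the paper's values).
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Morphological filtering removes noise and simplifies candidate regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Connected-component labeling isolates multiple candidate regions.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    regions = []
    for i in range(1, n):                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < 500:     # drop tiny blobs (threshold assumed)
            continue
        # Second moments of the region give an ellipse orientation,
        # which also captures tilted faces.
        m = cv2.moments((labels == i).astype(np.uint8), binaryImage=True)
        angle = 0.5 * np.degrees(np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"]))
        regions.append((centroids[i][0], centroids[i][1], angle))
    return regions
```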
