• Title/Summary/Keyword: facial features


A case of Noonan syndrome diagnosed using the facial recognition software (FACE2GENE)

  • Kim, Soo Kyoung;Jung, So Yoon;Bae, Seong Phil;Kim, Jieun;Lee, Jeongho;Lee, Dong Hwan
    • Journal of Genetic Medicine
    • /
    • v.16 no.2
    • /
    • pp.81-84
    • /
    • 2019
  • Clinicians often have difficulty diagnosing patients with subtle phenotypes of Noonan syndrome. Facial recognition technology can help identify several genetic syndromes with dysmorphic facial features, especially those with mild or atypical phenotypes. A patient visited our clinic at 5 years of age with short stature. She received growth hormone treatment for 6 years, but her growth curve remained below the 3rd percentile. She and her mother had wide-spaced eyes and short stature, but no other remarkable features of a genetic syndrome. We analyzed their photographs using a smartphone facial recognition application. The results suggested Noonan syndrome; therefore, we performed targeted next-generation sequencing of genes associated with short stature. The results showed that both carried a mutation in the PTPN11 gene, a known pathogenic mutation for Noonan syndrome. Facial recognition technology can aid the diagnosis of Noonan syndrome and other genetic syndromes, especially in patients with mild phenotypes.

A flexible Feature Matching for Automatic Face and Facial Feature Points Detection (얼굴과 얼굴 특징점 자동 검출을 위한 탄력적 특징 정합)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.4
    • /
    • pp.705-711
    • /
    • 2003
  • An automatic face and facial feature point (FFP) detection system is proposed. A face is represented as a graph whose nodes are placed at facial feature points labeled by their Gabor features and whose edges describe their spatial relations. An innovative flexible feature matching is proposed to establish feature correspondence between models and the input image. This matching model works like a random diffusion process in the image space, employing a locally competitive and globally cooperative mechanism. The system performs well on face images with complicated backgrounds, pose variations, and distortion by facial accessories. We demonstrate the benefits of our approach through its implementation in a face identification system.
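
The elastic matching described above builds on Gabor "jets", vectors of Gabor filter responses sampled at a feature point. As a rough, self-contained sketch (not the authors' implementation; the image path, filter scales, and landmark coordinates are placeholders), jet extraction and similarity could look like this in Python with OpenCV:

```python
import cv2
import numpy as np

def gabor_jet(image, point, ksize=21, orientations=8, scales=(4.0, 8.0, 16.0)):
    """Stack of Gabor filter responses (a "jet") sampled at one pixel.
    Filtering the whole image per kernel is wasteful but keeps the sketch simple."""
    x, y = point
    responses = []
    for lam in scales:                       # wavelengths of the carrier
        for k in range(orientations):
            theta = k * np.pi / orientations
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=lam / 2,
                                        theta=theta, lambd=lam, gamma=0.5)
            filtered = cv2.filter2D(image, cv2.CV_32F, kernel)
            responses.append(filtered[y, x])
    return np.array(responses)

def jet_similarity(j1, j2):
    """Normalized dot product: 1.0 means identical local texture."""
    return float(np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-8))

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
model_jet = gabor_jet(img, (120, 140))              # jet at a model landmark
probe_jet = gabor_jet(img, (122, 143))              # jet at a nearby candidate
print(jet_similarity(model_jet, probe_jet))
```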

Monosyllable Speech Recognition through Facial Movement Analysis (안면 움직임 분석을 통한 단음절 음성인식)

  • Kang, Dong-Won;Seo, Jeong-Woo;Choi, Jin-Seung;Choi, Jae-Bong;Tack, Gye-Rae
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.6
    • /
    • pp.813-819
    • /
    • 2014
  • The purpose of this study was to extract accurate parameters of facial movement features using a 3-D motion capture system for lip-reading-based speech recognition. Instead of features obtained from traditional camera images, the 3-D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The obtained facial movement data were then converted into 11 parameters and represented as patterns for each monosyllable vocalization. The parameter patterns were learned and recognized for each monosyllable using speech recognition algorithms based on the Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy for the 11 monosyllables was 97.2%, which suggests the possibility of recognizing spoken Korean through quantitative facial movement analysis.
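
The recognition stage this abstract describes rests on Viterbi decoding of an HMM. A minimal discrete-observation Viterbi decoder, with toy model parameters that are purely illustrative and not from the paper, can be written as follows:

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden-state path for a discrete-observation HMM.
    obs: observation indices; log_pi: initial, log_A: transition,
    log_B: emission log-probabilities."""
    n_states = log_pi.shape[0]
    T = len(obs)
    delta = np.zeros((T, n_states))          # best path score ending in each state
    psi = np.zeros((T, n_states), dtype=int) # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A        # all state transitions
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(n_states)] + log_B[:, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):                     # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy 2-state example with placeholder probabilities (not from the paper).
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], log_pi, log_A, log_B))
```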

Automatic Face Extraction with Unification of Brightness Distribution in Candidate Region and Triangle Structure among Facial Features (후보영역의 밝기 분산과 얼굴특징의 삼각형 배치구조를 결합한 얼굴의 자동 검출)

  • 이칠우;최정주
    • Journal of Korea Multimedia Society
    • /
    • v.3 no.1
    • /
    • pp.23-33
    • /
    • 2000
  • In this paper, we describe an algorithm that can extract human faces in natural poses from complex backgrounds. The method adopts the idea that a facial region has nearly the same gray level across all pixels within appropriately scaled blocks. Based on this idea, we develop a hierarchical process: first, a block image with a pyramid structure is generated from the input image; next, candidate facial regions in the block image are quickly determined; finally, the detailed facial features (organs) are located. To find the features easily, we introduce a local gray-level transform that emphasizes dark, small regions, and we evaluate the geometric triangle constraints among the facial features. The merit of our method is freedom from the parameter-assignment problem: since the algorithm relies on a simple brightness computation, robust systems that do not depend on specific parameter values can easily be constructed.
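
The first stage, finding blocks of near-uniform gray level in a coarsened image, can be sketched with plain NumPy; the block size and variance threshold below are assumed placeholders rather than the paper's values:

```python
import numpy as np

def block_stats(gray, block=16):
    """Mean and variance of gray levels in non-overlapping blocks."""
    h, w = gray.shape
    gray = gray[: h - h % block, : w - w % block]       # crop to block multiple
    blocks = gray.reshape(h // block, block, -1, block).swapaxes(1, 2)
    return blocks.mean(axis=(2, 3)), blocks.var(axis=(2, 3))

gray = np.random.randint(0, 256, (128, 128)).astype(np.float32)  # stand-in image
means, variances = block_stats(gray)
candidates = variances < 200.0   # low-variance blocks: roughly uniform regions
print(candidates.sum(), "candidate blocks")
```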


Face Detection Using Multi-level Features for Privacy Protection in Large-scale Surveillance Video (대규모 비디오 감시 환경에서 프라이버시 보호를 위한 다중 레벨 특징 기반 얼굴검출 방법에 관한 연구)

  • Lee, Seung Ho;Moon, Jung Ik;Kim, Hyung-Il;Ro, Yong Man
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.11
    • /
    • pp.1268-1280
    • /
    • 2015
  • In video surveillance systems, the exposure of a person's face is a serious threat to personal privacy. To protect personal privacy in large amounts of video, an automatic face detection method is required to locate and mask the person's face. However, in real-world surveillance videos, the effectiveness of existing face detection methods can deteriorate due to large variations in facial appearance (e.g., facial pose, illumination) or degraded faces (e.g., occluded or low-resolution faces). This paper proposes a new face detection method based on multi-level facial features. In a video frame, different kinds of spatial features are independently extracted and analyzed, complementing each other under the aforementioned challenges. Temporal-domain analysis is also exploited to consolidate the proposed method. Experimental results show that, compared to competing methods, the proposed method achieves very high recall rates while maintaining acceptable precision.
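
The paper's multi-level detector itself is not reproduced here, but the privacy-masking pipeline it serves can be illustrated with a stock OpenCV face detector plus Gaussian blurring; the cascade file ships with OpenCV, while the frame path is a hypothetical stand-in:

```python
import cv2

# Haar cascade bundled with OpenCV (a stand-in for the paper's detector).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("surveillance_frame.jpg")      # hypothetical video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    roi = frame[y:y + h, x:x + w]
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)  # mask the face

cv2.imwrite("masked_frame.jpg", frame)
```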

Synthesis of Expressive Talking Heads from Speech with Recurrent Neural Network (RNN을 이용한 Expressive Talking Head from Speech의 합성)

  • Sakurai, Ryuhei;Shimba, Taiki;Yamazoe, Hirotake;Lee, Joo-Ho
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.1
    • /
    • pp.16-25
    • /
    • 2018
  • A talking head (TH) is an utterance face animation generated from text and voice input. In this paper, we propose a method for generating a TH with facial expression and intonation from speech input alone. The problem of generating a TH from speech can be regarded as a regression from the acoustic feature sequence to the facial code sequence, a low-dimensional vector representation that can efficiently encode and decode a face image. This regression was modeled by a bidirectional RNN and trained on the SAVEE database of frontal utterance face animations. The proposed method generates a TH with facial expression and intonation from acoustic features such as MFCCs, their dynamic elements, energy, and F0. According to the experiments, a configuration with BLSTM in the first and second layers of the bidirectional RNN predicted the face code best. For evaluation, a questionnaire survey was conducted with 62 people who watched TH animations generated by the proposed and previous methods. As a result, 77% of the respondents answered that the TH generated by the proposed method matched the speech well.
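
A skeletal PyTorch version of the regression described above, a bidirectional LSTM mapping acoustic frames to face-code vectors, might look as follows; the feature and face-code dimensions and layer sizes are illustrative assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn

class Speech2FaceCode(nn.Module):
    """Bidirectional LSTM regressor: acoustic frames -> face-code sequence."""
    def __init__(self, n_acoustic=42, n_face_code=32, hidden=256):
        super().__init__()
        self.blstm = nn.LSTM(n_acoustic, hidden, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_face_code)  # 2x for both directions

    def forward(self, x):                 # x: (batch, time, n_acoustic)
        h, _ = self.blstm(x)
        return self.out(h)                # (batch, time, n_face_code)

model = Speech2FaceCode()
mfcc_seq = torch.randn(1, 100, 42)        # 100 frames of dummy MFCC-like features
face_codes = model(mfcc_seq)
print(face_codes.shape)                   # torch.Size([1, 100, 32])
```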

Comparison of Computer and Human Face Recognition According to Facial Components

  • Nam, Hyun-Ha;Kang, Byung-Jun;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.1
    • /
    • pp.40-50
    • /
    • 2012
  • Face recognition is a biometric technology used to identify individuals based on facial feature information. Previous studies of face recognition used features including the eyes, mouth, and nose; however, there have been few studies on how other facial components, such as the eyebrows and chin, affect recognition performance. We measured the recognition accuracy attributable to these facial components and compared computer-based and human-based facial recognition methods. This research is novel in four ways compared to previous works. First, we measured the effect of components such as the eyebrows and chin, and compared the accuracy of computer-based face recognition with that of human-based face recognition according to facial components. Second, for computer-based recognition, facial components were automatically detected using the AdaBoost algorithm and an active appearance model (AAM), and user authentication was achieved with a face recognition algorithm based on principal component analysis (PCA). Third, we experimentally showed that the number of facial features (including eyebrows, eyes, nose, mouth, and chin) had a greater impact on the accuracy of human-based face recognition, whereas consistent inclusion of certain features, such as the chin area, had more influence on the accuracy of computer-based face recognition, because a computer classifies faces using the pixel values of facial images. Fourth, we experimentally showed that the eyebrow feature enhanced the accuracy of computer-based face recognition, although the problem of occlusion by hair must be solved before the eyebrow feature can be used in practice.
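
As context for the PCA-based authentication step, here is a compact scikit-learn sketch of eigenface projection with nearest-neighbor matching; the random data, image size, and subject counts are placeholders, not the study's dataset:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
train = rng.random((40, 64 * 64))        # 40 placeholder face images, flattened
labels = np.repeat(np.arange(10), 4)     # 10 subjects x 4 images each

pca = PCA(n_components=20).fit(train)    # eigenfaces from the training set
train_proj = pca.transform(train)

probe = rng.random((1, 64 * 64))         # unknown face to identify
probe_proj = pca.transform(probe)

dists = np.linalg.norm(train_proj - probe_proj, axis=1)
print("predicted subject:", labels[int(np.argmin(dists))])
```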

Photogrammetric Analysis of Attractiveness in Indian Faces

  • Duggal, Shveta;Kapoor, DN;Verma, Santosh;Sagar, Mahesh;Lee, Yung-Seop;Moon, Hyoungjin;Rhee, Seung Chul
    • Archives of Plastic Surgery
    • /
    • v.43 no.2
    • /
    • pp.160-171
    • /
    • 2016
  • Background: The objective of this study was to assess the attractive facial features of the Indian population. We evaluated subjective ratings of facial attractiveness and identified which facial aesthetic subunits were important for facial attractiveness. Methods: A cross-sectional study was conducted on 150 samples (referred to as candidates). Frontal photographs were analyzed. An orthodontist, a prosthodontist, an oral surgeon, a dentist, an artist, a photographer, and two laymen (estimators) subjectively evaluated the candidates' faces using visual analog scale (VAS) scores. As an objective method of facial analysis, we used balanced angular proportional analysis (BAPA). Using SAS 10.1 (SAS Institute Inc.), Tukey's studentized range test and Pearson correlation analysis were performed to detect between-group differences in VAS scores (Experiment 1), to identify correlations between VAS and BAPA scores (Experiment 2), and to analyze the characteristic features of facial attractiveness and gender differences (Experiment 3); the significance level was set at P=0.05. Results: Experiment 1 revealed some differences in VAS scores according to professional background. In Experiment 2, BAPA scores behaved similarly to subjective ratings of facial beauty but showed a relatively weak correlation with the VAS scores. Experiment 3 found that the decisive factors for facial attractiveness differed between men and women. Composite images of attractive Indian male and female faces were constructed. Conclusions: Our photogrammetric study, statistical analysis, and average composite faces of an Indian population provide valuable information about subjective perceptions of facial beauty and attractive facial structures in the Indian population.
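
Experiment 2's comparison amounts to an ordinary Pearson correlation between the subjective VAS scores and the objective BAPA scores; with SciPy, such a test reads as below (the score arrays are invented for illustration):

```python
from scipy.stats import pearsonr

vas = [6.2, 7.1, 5.4, 8.0, 6.8, 5.9]     # placeholder VAS ratings
bapa = [71, 78, 66, 82, 70, 69]          # placeholder BAPA scores

r, p = pearsonr(vas, bapa)
print(f"r = {r:.2f}, p = {p:.3f}")       # significant if p < 0.05
```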

Face and Iris Detection Algorithm based on SURF and circular Hough Transform (서프 및 하프변환 기반 운전자 동공 검출기법)

  • Artem, Lenskiy;Lee, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.5
    • /
    • pp.175-182
    • /
    • 2010
  • This paper presents a novel algorithm for face and iris detection, with application to driver iris monitoring. The proposed algorithm consists of the following major steps: skin-color segmentation, facial feature segmentation, and iris positioning. For skin-color segmentation, we applied a multi-layer perceptron to approximate the statistical probability of certain skin colors and filtered out pixels with low probabilities. The next step segments the face region into the following categories: eye, mouth, eyebrow, and remaining facial regions. For this purpose we propose a novel segmentation technique based on estimating facial-class probability density functions (PDFs). Each facial-class PDF is estimated from salient features extracted from the corresponding facial image region. Pixels are then classified according to the highest probability among the four estimated PDFs. The final step applies the circular Hough transform to the detected eye regions to extract the position and radius of the iris. We tested our system on two data sets. The first, obtained from the Web, contains faces under different illuminations. The second was collected by us and contains images from video sequences recorded by a CCD camera while a driver was driving a car. The experimental results are presented, showing high detection rates.
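
The final iris-positioning step maps naturally onto OpenCV's circular Hough transform. A minimal sketch on an already-cropped eye image follows; all parameter values are rough assumptions rather than the authors' settings:

```python
import cv2
import numpy as np

eye = cv2.imread("eye_region.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical crop
eye = cv2.medianBlur(eye, 5)             # suppress noise before edge detection

circles = cv2.HoughCircles(eye, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30,
                           minRadius=10, maxRadius=40)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)  # strongest circle found
    print(f"iris center=({x}, {y}), radius={r}")
```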

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems
    • /
    • v.16 no.1
    • /
    • pp.6-29
    • /
    • 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers for multimodal recognition. Moreover, the proposed technique proved robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically find images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
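
To make the feature-learning component concrete, here is a bare-bones PyTorch denoising autoencoder of the kind the system builds on; sizes, noise level, and the toy training loop are assumptions, and the paper's supervised variant additionally uses label information not shown here:

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Learns robust features by reconstructing clean inputs from noisy ones."""
    def __init__(self, n_in=1024, n_hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(64, 1024)                   # placeholder modality images
noisy = clean + 0.2 * torch.randn_like(clean)  # corrupt the input

for _ in range(5):                             # tiny illustrative training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    opt.step()

features = model.encoder(clean)                # learned robust representation
print(features.shape)                          # torch.Size([64, 256])
```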