• Title/Summary/Keyword: Facial image

Facial Expression Recognition with Instance-based Learning Based on Regional-Variation Characteristics Using Models-based Feature Extraction (모델기반 특징추출을 이용한 지역변화 특성에 따른 개체기반 표정인식)

  • Park, Mi-Ae; Ko, Jae-Pil
    • Journal of Korea Multimedia Society / v.9 no.11 / pp.1465-1473 / 2006
  • In this paper, we present an approach to facial expression recognition in image sequences using Active Shape Models (ASM) and a state-based model. Given an image frame, ASM locates the facial feature points and yields the shape parameter vector of the model; collecting these vectors over all frames of a sequence gives a shape parameter vector set. The state-based model then converts this set into a state vector, each element of which takes one of three states. In the classification step, we use k-NN with a proposed similarity measure motivated by the observation that the regions that vary in one expression sequence differ from those that vary in another. In experiments on the public KCFD database, the proposed measure slightly outperforms the existing binary measure: with k = 1, k-NN achieves 89.1% recognition with the proposed measure versus 86.2% with the binary measure.
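
As an editorial illustration of the classification step, here is a minimal 1-NN sketch over per-region state vectors, assuming each expression sequence has already been reduced (via ASM and the state-based model) to one discrete state per facial region. The `variation_similarity` function is a hypothetical stand-in for the paper's measure; only the idea of weighting the regions that actually vary is taken from the abstract.

```python
import numpy as np

STATIC = 0  # hypothetical state labels: 0 = no variation, 1 and 2 = variation states

def variation_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of varying regions whose states agree (illustrative measure)."""
    varying = (a != STATIC) | (b != STATIC)
    if not varying.any():
        return 0.0
    return float(np.sum((a == b) & varying)) / float(np.sum(varying))

def knn_predict(train_X, train_y, query, k=1):
    """Classify a query state vector by majority vote among its k most similar neighbors."""
    sims = np.array([variation_similarity(x, query) for x in train_X])
    top = np.argsort(sims)[::-1][:k]
    labels, counts = np.unique([train_y[i] for i in top], return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage: three regions (brows, eyes, mouth), states in {0, 1, 2}.
train_X = [np.array([0, 1, 2]), np.array([1, 1, 0]), np.array([0, 0, 2])]
train_y = ["happy", "angry", "surprise"]
print(knn_predict(train_X, train_y, np.array([0, 1, 2]), k=1))  # -> happy
```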

Multi-attribute Face Editing using Facial Masks (얼굴 마스크 정보를 활용한 다중 속성 얼굴 편집)

  • Ambardi, Laudwika; Park, In Kyu; Hong, Sungeun
    • Journal of Broadcast Engineering / v.27 no.5 / pp.619-628 / 2022
  • While face recognition and face generation have been growing in popularity, the privacy implications of using facial images in the wild have become a concurrent concern. In this paper, we propose a face editing network that mitigates these privacy issues by generating face images with various attributes from a small number of real face images and facial mask information. Unlike existing methods that learn face attributes from large sets of real face images, the proposed method generates new facial images using a facial segmentation mask and texture images from five facial parts as styles. Our network is then trained on these images to learn the style and location of each reference image. Once the framework is trained, we can generate diverse face images using only a small number of real face images and segmentation information. Extensive experiments show that the proposed method can not only generate new faces but also localize facial attribute editing, despite using very few real face images.
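
The abstract does not give the network details, so no training code is attempted here; the toy sketch below only illustrates the mask-guided input idea, assuming a label map where each pixel holds a part id and one reference texture per part. The part names and ids are illustrative, not the paper's.

```python
import numpy as np

def composite_by_mask(mask: np.ndarray, part_textures: dict) -> np.ndarray:
    """Paste each reference texture into the pixels covered by its part id."""
    out = np.zeros(mask.shape + (3,), dtype=np.uint8)
    for part_id, tex in part_textures.items():
        out[mask == part_id] = tex[mask == part_id]
    return out

# Toy usage: 5 hypothetical parts (0=skin, 1=brows, 2=eyes, 3=nose, 4=lips).
mask = np.random.randint(0, 5, size=(64, 64))
textures = {i: np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for i in range(5)}
styled_input = composite_by_mask(mask, textures)  # one reference style per region
```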

A study on age estimation of facial images using various CNNs (Convolutional Neural Networks) (다양한 CNN 모델을 이용한 얼굴 영상의 나이 인식 연구)

  • Sung Eun Choi
    • Journal of Platform Technology / v.11 no.5 / pp.16-22 / 2023
  • There is growing interest in facial age estimation because many applications require estimating age from facial images. Estimating the exact age of a face requires a technique for extracting aging features from a face image and classifying age according to the extracted features. Recently, CNN-based deep learning models have greatly improved performance in image recognition, and they are likewise being used to improve performance in facial age estimation. In this paper, age estimation performance was compared across models that learn facial features with various CNN architectures: AlexNet, VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152. The experiments confirmed that the model using ResNet-34 achieved the best facial age estimation performance.
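
A minimal fine-tuning sketch in the spirit of this comparison, assuming age estimation is posed as classification over age bins; the number of bins, the dataset, and the training loop are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn
from torchvision import models

AGE_CLASSES = 100  # assumption: one class per year of age

def build_age_model(arch: str = "resnet34") -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its classification head."""
    model = getattr(models, arch)(weights="DEFAULT")
    if hasattr(model, "fc"):                 # ResNet family
        model.fc = nn.Linear(model.fc.in_features, AGE_CLASSES)
    else:                                    # AlexNet / VGG family
        in_f = model.classifier[-1].in_features
        model.classifier[-1] = nn.Linear(in_f, AGE_CLASSES)
    return model

model = build_age_model("resnet34")
logits = model(torch.randn(1, 3, 224, 224))  # one dummy face crop
print(logits.shape)                          # torch.Size([1, 100])
```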

A study on age distortion reduction in facial expression image generation using StyleGAN Encoder (StyleGAN Encoder를 활용한 표정 이미지 생성에서의 연령 왜곡 감소에 대한 연구)

  • Hee-Yeol Lee; Seung-Ho Lee
    • Journal of IKEEE / v.27 no.4 / pp.464-471 / 2023
  • In this paper, we propose a method to reduce age distortion in facial expression image generation with StyleGAN Encoder. The generation process first reconstructs a face image with StyleGAN Encoder and then changes the expression by moving the latent vector across a boundary learned with an SVM. However, when the boundary for a smiling expression is learned, age distortion arises: the wrinkles produced by the expression change enter the SVM training as features of the smile boundary, so age characteristics are learned along with the expression. To address this, the proposed method computes the correlation coefficient between the smile boundary and the age boundary, and adjusts the smile boundary by the age boundary in proportion to that coefficient. To confirm the effectiveness of the method, we measured FID scores on FFHQ, a publicly available standard face dataset. For smile images, the FID between the ground truth and the images generated by the proposed method improved by about 0.46 over the existing method, and the FID between the StyleGAN Encoder reconstructions and the generated smile images improved by about 1.031. For non-smile images, the corresponding improvements were about 2.25 and about 1.908. In addition, estimating the age of each generated expression image and measuring the MSE against the age estimated from the StyleGAN Encoder reconstruction, the proposed method improved on the existing method by about 1.5 on average for smile images and about 1.63 for non-smile images, demonstrating its effectiveness.
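
A numpy sketch of the boundary-adjustment idea as described, assuming both boundaries are unit normal vectors of SVM hyperplanes in the StyleGAN latent space: measure the correlation between the smile and age directions, then remove the age component in proportion to it. The 512-dimensional latent size is the usual StyleGAN choice, an assumption here.

```python
import numpy as np

def adjust_smile_boundary(smile: np.ndarray, age: np.ndarray) -> np.ndarray:
    """Remove the age direction from the smile direction, scaled by their correlation."""
    smile = smile / np.linalg.norm(smile)
    age = age / np.linalg.norm(age)
    rho = float(np.dot(smile, age))   # correlation (cosine) of the two unit directions
    adjusted = smile - rho * age      # subtract the entangled age component
    return adjusted / np.linalg.norm(adjusted)

rng = np.random.default_rng(0)
smile_b, age_b = rng.normal(size=512), rng.normal(size=512)
clean = adjust_smile_boundary(smile_b, age_b)
print(np.dot(clean, age_b / np.linalg.norm(age_b)))  # ~0: age component removed
```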

A Study on the Face Image to Shape Differences and Make up (얼굴의 형태적 특성과 메이크업에 의한 얼굴 이미지 연구)

  • Song, Mi-Young; Park, Oak-Reon; Lee, Young-Ju
    • Korean Journal of Human Ecology / v.14 no.1 / pp.143-153 / 2005
  • The purpose of this research is to study face images according to differences in facial shape and make-up. A variety of face images can be produced by computer graphic simulation, combining different facial shapes and make-up styles. To examine the images produced by different make-up styles, we applied five eyebrow forms, two eye-shadow types, and three lip shapes to a model with a round-shaped face. The questionnaire used as the experimental stimulus contained 28 items, each a bipolar adjective pair rated on a 7-point scale. Data were analyzed with Varimax orthogonal rotation, Duncan's multiple range test, and three-way ANOVA. Comparing the results of applying the make-up styles to the various face types, we found that facial shape, eyebrows, eye shadow, and lip shape interactively influence the overall facial image. Factor analysis of make-up image perception yielded four factors: mildness, modernness, elegance, and sociableness. In terms of these factors, a round make-up style showed the highest mildness; upward and straight styles showed the highest modernness; elegance was highest with round eye shadow and straight lips; and an incurved lip style showed the highest sociableness.
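
A hedged sketch of the three-way ANOVA step, assuming a long-format table with one perception rating per trial and the three make-up factors as columns; the level names and toy ratings are illustrative, not the study's coding (which used five eyebrow forms).

```python
from itertools import product

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Full factorial over abbreviated factor levels, three replicate ratings per cell.
cells = list(product(["round", "straight", "upward"],        # eyebrow (study used 5 forms)
                     ["round", "angular"],                   # eye shadow
                     ["incurve", "straight", "outcurve"]))   # lip
df = pd.DataFrame(cells * 3, columns=["eyebrow", "eyeshadow", "lip"])
df["rating"] = np.random.default_rng(1).normal(4.0, 1.0, len(df))  # 7-point-like scores

model = ols("rating ~ C(eyebrow) * C(eyeshadow) * C(lip)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and all interactions
```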

Face Detection and Recognition with Multiple Appearance Models for Mobile Robot Application

  • Lee, Taigun; Park, Sung-Kee; Kim, Munsang
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2002.10a / pp.100.4-100 / 2002
  • For visual navigation, a mobile robot can use a stereo camera with a large field of view. In this paper, we propose an algorithm to detect and recognize human faces on the basis of such a camera system, using a new coarse-to-fine detection scheme. In the coarse stage, roughly face-like areas are found across the entire image using dual ellipse templates. Detailed alignment of the facial outline and features is then performed on the basis of a view-based multiple appearance model. Because fine alignment of facial features is hard in this setting, the most closely resembling face image area is selected from the multiple face appearances using the most distinguishing facial features- two eye...
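
A rough sketch of the coarse stage only, scoring face-like regions by correlating an ellipse outline template against the image edge map; the paper's dual ellipse templates and view-based appearance models are not reproduced, so this is a generic stand-in.

```python
import cv2
import numpy as np

def coarse_face_candidates(gray: np.ndarray, axes=(24, 32), top_n=3):
    """Return top_n (y, x) corners where the edge map best matches an ellipse outline."""
    edges = cv2.Canny(gray, 80, 160).astype(np.float32)
    tmpl = np.zeros((2 * axes[1] + 3, 2 * axes[0] + 3), np.float32)
    cv2.ellipse(tmpl, (axes[0] + 1, axes[1] + 1), axes, 0, 0, 360, 255, 2)
    score = cv2.matchTemplate(edges, tmpl, cv2.TM_CCORR)
    flat = np.argsort(score.ravel())[::-1][:top_n]
    return [np.unravel_index(i, score.shape) for i in flat]

# Toy usage: a synthetic image containing one face-sized ellipse.
img = np.full((240, 320), 255, np.uint8)
cv2.ellipse(img, (160, 120), (24, 32), 0, 0, 360, 0, 2)
print(coarse_face_candidates(img)[0])  # top candidate lies near the drawn ellipse
```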

Homogeneous and Non-homogeneous Polynomial Based Eigenspaces to Extract the Features on Facial Images

  • Muntasa, Arif
    • Journal of Information Processing Systems / v.12 no.4 / pp.591-611 / 2016
  • High-dimensional spaces are the biggest problem in classification, because computation takes longer and the associated costs are therefore high. In this research, facial spaces generated from homogeneous and non-homogeneous polynomials are proposed for extracting facial image features. The homogeneous and non-homogeneous polynomial-based eigenspaces offer an alternative appearance-based feature extraction that can handle non-linear features, and the kernel trick is used to carry out the matrix computation for both polynomials. The weights and projections of the new feature space were evaluated on three face image databases, i.e., YALE, ORL, and UoB. The experiments produced the highest recognition rates of 94.44%, 97.5%, and 94% on YALE, ORL, and UoB, respectively. These results show that the proposed method yields higher recognition rates than other methods such as Eigenfaces, Fisherfaces, Laplacianfaces, and O-Laplacianfaces.
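
A compact sketch of the two polynomial eigenspaces using scikit-learn's KernelPCA: degree-2 kernels, homogeneous (coef0=0, i.e. (x.y)^2 up to a gamma factor) versus non-homogeneous (coef0=1, i.e. (x.y + 1)^2). Random data stands in for the flattened face images, and the component count is arbitrary. A nearest-neighbor classifier on the extracted features would complete the recognition pipeline.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

X = np.random.default_rng(0).normal(size=(40, 1024))  # 40 "faces", 32x32 flattened

homogeneous = KernelPCA(n_components=20, kernel="poly", degree=2, coef0=0)
non_homogeneous = KernelPCA(n_components=20, kernel="poly", degree=2, coef0=1)

F_h = homogeneous.fit_transform(X)       # features from the homogeneous kernel
F_nh = non_homogeneous.fit_transform(X)  # features from the non-homogeneous kernel
print(F_h.shape, F_nh.shape)             # (40, 20) (40, 20)
```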

Skin Condition Analysis of Facial Image using Smart Device: Based on Acne, Pigmentation, Flush and Blemish

  • Park, Ki-Hong; Kim, Yoon-Ho
    • Journal of Advanced Information Technology and Convergence / v.8 no.2 / pp.47-58 / 2018
  • In this paper, we propose a method for analyzing skin condition with the camera module embedded in a smartphone, without a separate skin diagnosis device. The skin conditions detected in facial images taken by a smartphone are acne, pigmentation, blemishes, and flush. Facial features and regions were detected using Haar features, and skin regions were detected using the YCbCr and HSV color models. Acne and flush were extracted by thresholding a range of the hue component image; pigmentation was detected by computing a factor between the minimum and maximum values of the corresponding skin pixels in the R component image; and blemishes were detected using adaptive thresholds on the gray-scale image. The experimental results show that the proposed analysis effectively detects acne, pigmentation, blemishes, and flush.
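
A hedged sketch of the pipeline stages named in the abstract: Haar face detection, YCbCr-based skin masking, a hue-range mask for reddish regions (acne/flush), and an adaptive threshold for blemishes. All numeric ranges below are illustrative guesses, not the paper's calibrated values, and the pigmentation factor is omitted.

```python
import cv2
import numpy as np

def analyze_skin(bgr: np.ndarray):
    """Return detected faces plus rough acne/flush and blemish masks."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # common Cr/Cb skin band

    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    reddish = cv2.inRange(hsv, (0, 60, 60), (10, 255, 255))    # candidate acne/flush hues

    blemish = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                    cv2.THRESH_BINARY_INV, 31, 7)
    return faces, skin & reddish, skin & blemish

faces, acne_flush, blemish = analyze_skin(np.zeros((240, 320, 3), np.uint8))
```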

VALIDITY OF SUPERIMPOSITION RANGE AT 3-DIMENSIONAL FACIAL IMAGES (안면 입체영상 중첩시 중첩 기준 범위 설정에 따른 적합도 차이)

  • Choi, Hak-Hee; Cho, Jin-Hyoung; Park, Hong-Ju; Oh, Hee-Kyun; Choi, Jin-Hugh; Hwang, Hyeon-Shik; Lee, Ki-Heon
    • Maxillofacial Plastic and Reconstructive Surgery / v.31 no.2 / pp.149-157 / 2009
  • Purpose: This study evaluated the validity of the superimposition range for facial images constructed with a 3-dimensional (3D) surface laser scanning system. Materials and methods: Thirty adults with no severe skeletal discrepancy were selected and scanned twice by a 3D laser scanner (VIVID 910, Minolta, Tokyo, Japan) with 12 markers placed on the face. Two 3D facial images (T1-baseline, T2-30 minutes later) were then reconstructed and superimposed in several ways with the RapidForm™ 2006 (Inus, Seoul, Korea) software program. The distances between markers at the same facial locations were measured in the superimposed 3D images for all 12 markers. Results: The average linear distance between corresponding markers was 0.92 ± 0.23 mm when the superimposition used the upper 2/3 of the face, 0.98 ± 0.26 mm for the upper 1/2 of the face, 0.99 ± 0.24 mm for the upper 1/3 of the face plus the nose area, 1.41 ± 0.48 mm for the upper 1/3 of the face alone, and 0.83 ± 0.13 mm for the whole face. There were no statistically significant differences in the linear distances of the markers placed within the superimposition range used for the partial registration methods, but there were significant differences in the distances of markers outside the superimposition range between the whole-face registration and the partial registration methods. Conclusion: The results suggest that the validity of superimposition decreases as the superimposition range is reduced when superimposing 3D images of the same subject constructed with a 3D laser scanner.
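
One way to quantify the fit of two superimposed scans, sketched here with the Kabsch algorithm: rigidly register the T2 markers to T1 using only the markers inside a chosen superimposition range, then report the residual distance at every marker. The marker coordinates are toy data, and the paper used surface-based registration in RapidForm rather than this marker-based stand-in.

```python
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray):
    """Rigid rotation R and translation t mapping the point rows of P onto Q."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, Q.mean(0) - R @ P.mean(0)

rng = np.random.default_rng(2)
t1 = rng.normal(size=(12, 3))                     # 12 markers at baseline (T1)
t2 = t1 + rng.normal(scale=0.02, size=(12, 3))    # rescan 30 minutes later (T2)

region = slice(0, 8)                              # e.g. markers in the upper face
R, t = kabsch(t2[region], t1[region])             # register using that range only
residuals = np.linalg.norm(t2 @ R.T + t - t1, axis=1)
print(residuals)                                  # per-marker distances after fitting
```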

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup; Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion from facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli inducing anger, fear, boredom, and a neutral state were presented to participants, and facial temperatures were measured by an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by emotion. Linear discriminant analysis for emotion recognition correctly classified 62.7% of the four emotions when both facial expression features and emotional state features were used. Accuracy decreased slightly but significantly to 56.7% with facial expression features alone, and to 40.2% with emotional state features alone. Conclusion: Facial expression features are essential for emotion recognition, but emotional state features are also important for classifying emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
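
A minimal sketch of the discriminant-analysis step, assuming each row holds one participant's baseline-to-emotion temperature differences over the six facial regions named above, with the induced emotion as the label; the data here are random placeholders, so the cross-validated score sits at chance level.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

REGIONS = ["eyes", "mouth", "glabella", "forehead", "nose", "cheeks"]
rng = np.random.default_rng(3)
X = rng.normal(size=(231, len(REGIONS)))                       # temperature deltas
y = rng.choice(["anger", "fear", "boredom", "neutral"], 231)   # induced emotions

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, y, cv=5).mean())  # ~0.25 (chance) on random data
```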