• Title/Summary/Keyword: facial images


Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun; Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.241-246 / 2016
  • Interest in facial expressions and human behavior has been growing, and human-robot interaction (HRI) researchers draw on digital image processing, pattern recognition, and machine learning in their studies. Facial feature point detection is essential for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting feature points from some images, because images differ in conditions such as size, color, and brightness. We therefore propose an algorithm that modifies the cascade facial feature point detector with a convolutional neural network whose structure is based on Yann LeCun's LeNet-5. As input to the network, color and gray images output by the cascade detector were used; the images were resized to 32×32, and the gray images were converted to the YUV format. The gray and color images form the basis of the network's training. We then classified about 1,200 test images of subjects. The proposed method proved more accurate than the cascade facial feature point detector alone, because the network corrects the detector's results.
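
The abstract gives no code; as a rough illustration of the LeNet-5-style network it describes, a minimal PyTorch sketch might look like the following. The class name and the four-way labeling are assumptions, not taken from the paper.

```python
# A minimal LeNet-5-style sketch in PyTorch, assuming 32x32 three-channel
# (YUV) patches; the class count and names are illustrative assumptions.
import torch
import torch.nn as nn

class FeaturePointClassifier(nn.Module):
    def __init__(self, num_classes=4):  # e.g. eye / nose / mouth / background
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),  # the classic LeNet-5 120-84 head
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):  # x: (N, 3, 32, 32) patches from the cascade detector
        return self.classifier(self.features(x))

logits = FeaturePointClassifier()(torch.rand(8, 3, 32, 32))  # (8, 4)
```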

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup; Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion using facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli, inducing anger, fear, boredom, and a neutral state, were presented to participants, and facial temperatures were measured with an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks were selected as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by the kind of emotion. Linear discriminant analysis for emotion recognition showed a correct classification rate of 62.7% over the four emotions when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly to 56.7% when using only facial expression features, and was 40.2% when using only emotional state features. Conclusion: Facial expression features are essential for emotion recognition, but emotional state features are also important for classifying the emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
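
A minimal sketch of the reported analysis: linear discriminant analysis over per-region temperature differences (emotion period minus baseline). The synthetic arrays below are stand-ins for the measured data, not the study's actual values.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 6 features per trial: eyes, mouth, glabella (expression) +
# forehead, nose, cheeks (emotional state), as listed in the abstract.
X = rng.normal(size=(231, 6))        # 231 participants in the study
y = rng.integers(0, 4, size=231)     # anger / fear / boredom / neutral

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"4-class accuracy: {acc:.1%}")  # the paper reports 62.7% with all 6 features
```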

Validity of Three-dimensional Facial Scan Taken with Facial Scanner and Digital Photo Wrapping on the Cone-beam Computed Tomography: Comparison of Soft Tissue Parameters

  • Aljawad, Hussein; Lee, Kyungmin Clara
    • Journal of Korean Dental Science / v.15 no.1 / pp.19-30 / 2022
  • Purpose: The purpose of the study was to assess the validity of three-dimensional (3D) facial scans taken with a facial scanner and of digital photo wrapping on cone-beam computed tomography (CBCT). Materials and Methods: Twenty-five patients had a CBCT scan, two-dimensional (2D) standardized frontal photographs, and a 3D facial scan obtained on the same day. The facial scans were taken with a facial scanner in an upright position. The 2D standardized frontal photographs were taken at a fixed distance from patients using a camera fixed to a cephalometric apparatus. The 2D integrated facial models were created by digital photo wrapping of the frontal photographs onto the corresponding CBCT images, and the 3D integrated facial models by integrating the 3D facial scans with the CBCT images. On the integrated facial models, sixteen soft tissue landmarks were identified, and the vertical, horizontal, oblique, and angular measurements between soft tissue landmarks were compared among the 2D facial models, the 3D facial models, and the CBCT images. Results: There were no significant differences in linear and angular measurements among the CBCT images and the 2D and 3D facial models, except for the Se-Sn vertical linear measurement, which differed significantly for the 3D facial models. The Bland-Altman plots showed that all measurements were within the limits of agreement. For the 3D facial models, all Bland-Altman plots showed a systematic bias of less than 2.0 mm and 2.0°, except for the Se-Sn vertical linear measurement. For the 2D facial models, the Bland-Altman plots of 6 of the 11 angular measurements showed a systematic bias of more than 2.0°. Conclusion: Facial scans taken with a facial scanner showed clinically acceptable performance. Digital 2D photo wrapping has limitations in clinical use compared to 3D facial scans.
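
A minimal sketch of the agreement analysis described: systematic bias and 95% limits of agreement between paired CBCT and facial-scan measurements. The toy millimeter values are illustrative, not the study's data.

```python
import numpy as np

def bland_altman(reference, test):
    """Return (bias, (lower LoA, upper LoA)) for paired measurements."""
    diff = np.asarray(test, float) - np.asarray(reference, float)
    bias = diff.mean()                 # systematic bias
    half = 1.96 * diff.std(ddof=1)     # half-width of the 95% limits of agreement
    return bias, (bias - half, bias + half)

bias, (lo, hi) = bland_altman([62.1, 58.4, 60.0, 61.2], [61.8, 58.9, 60.5, 61.0])
print(f"bias {bias:.2f} mm, LoA [{lo:.2f}, {hi:.2f}] mm")  # bias < 2.0 mm = acceptable here
```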

Multi-attribute Face Editing using Facial Masks (얼굴 마스크 정보를 활용한 다중 속성 얼굴 편집)

  • Ambardi, Laudwika; Park, In Kyu; Hong, Sungeun
    • Journal of Broadcast Engineering / v.27 no.5 / pp.619-628 / 2022
  • While face recognition and face generation have been growing in popularity, the privacy implications of using facial images in the wild have become a parallel concern. In this paper, we propose a face editing network that can reduce privacy issues by generating face images with various attributes from a small number of real face images and facial mask information. Unlike existing methods that learn face attributes from large numbers of real face images, the proposed method generates new facial images using a facial segmentation mask and texture images of five facial parts as styles. Our network is then trained on these images to learn the style and location of each reference image. Once the proposed framework is trained, we can generate various face images using only a small number of real face images and segmentation information. In extensive experiments, we show that the proposed method can not only generate new faces but also perform localized facial attribute editing, despite using very few real face images.
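
A speculative sketch of one building block the abstract implies: pooling a style vector per facial part from encoder features under the segmentation mask. The five region ids and the encoder itself are assumptions, not the paper's architecture.

```python
import torch

def region_styles(feat, mask, region_ids=(1, 2, 3, 4, 5)):
    """feat: (N, C, H, W) encoder features; mask: (N, H, W) integer labels.
    Returns (N, len(region_ids), C) masked-average style vectors."""
    styles = []
    for r in region_ids:                      # e.g. skin, eyes, nose, mouth, hair
        m = (mask == r).unsqueeze(1).float()  # (N, 1, H, W) region indicator
        pooled = (feat * m).sum(dim=(2, 3)) / m.sum(dim=(2, 3)).clamp(min=1.0)
        styles.append(pooled)                 # (N, C) average feature inside region
    return torch.stack(styles, dim=1)

s = region_styles(torch.rand(2, 64, 128, 128), torch.randint(0, 6, (2, 128, 128)))
```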

A Facial Animation System Using 3D Scanned Data (3D 스캔 데이터를 이용한 얼굴 애니메이션 시스템)

  • Gu, Bon-Gwan; Jung, Chul-Hee; Lee, Jae-Yun; Cho, Sun-Young; Lee, Myeong-Won
    • The KIPS Transactions: Part A / v.17A no.6 / pp.281-288 / 2010
  • In this paper, we describe the development of a system for generating a 3-dimensional human face from 3D scanned facial data and photo images, with morphing animation. The system comprises a facial feature input tool, a 3-dimensional texture mapping interface, and a 3-dimensional facial morphing interface. The facial feature input tool supports texture mapping and morphing animation: facial morphing areas between two facial models are defined by interactively inputting facial feature points. Texture mapping is done first, using three photo images of a face model: a front image and two side images. The morphing interface then generates a morphing animation between corresponding areas of the two texture-mapped facial models. The system allows users to interactively generate morphing animations between two facial models, without programming, using 3D scanned facial data and photo images.
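
A minimal sketch of the morphing step, assuming the feature-point matching has already put the two facial models into vertex correspondence; the mesh sizes and frame count are illustrative.

```python
import numpy as np

def morph(src_verts, dst_verts, t):
    """Linear blend of two (V, 3) vertex arrays at t in [0, 1]."""
    return (1.0 - t) * src_verts + t * dst_verts

a = np.random.rand(1000, 3)  # stand-ins for the two scanned facial meshes
b = np.random.rand(1000, 3)
frames = [morph(a, b, t) for t in np.linspace(0.0, 1.0, 60)]  # a 60-frame animation
```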

Detection of Facial Direction for Automatic Image Arrangement (이미지 자동배치를 위한 얼굴 방향성 검출)

  • 동지연; 박지숙; 이환용
    • Journal of Information Technology Applications and Management / v.10 no.4 / pp.135-147 / 2003
  • With the development of multimedia and optical technologies, application systems based on facial features have recently attracted increasing interest from researchers. Previous research in face processing has mainly used frontal images to recognize human faces visually and to extract facial expressions. However, applications such as image database systems that support queries based on facial direction, and image arrangement systems that automatically place facial images in digital albums, must deal with the directional characteristics of a face. In this paper, we propose a method to detect facial direction using facial features. In the proposed method, a facial trapezoid is defined by detecting points for the eyes and the lower lip. A facial direction formula, which calculates the rightward or leftward facial direction, is then defined from statistical data on the ratio of the right and left areas of the facial trapezoid. The proposed method estimates the horizontal rotation of a face within an error tolerance of ±1.31 degrees, with an average execution time of 3.16 sec.
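
A rough sketch of the geometric core, assuming the facial region spanned by the two eye points and the lower-lip point is split by the vertical line through the lip; the paper's exact trapezoid construction and its statistically fitted ratio-to-angle formula are not reproduced here.

```python
import numpy as np

def poly_area(pts):
    """Shoelace area of a polygon given as an (N, 2) array."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def left_right_ratio(left_eye, right_eye, lip):
    lx, ly = left_eye
    rx, ry = right_eye
    t = (lip[0] - lx) / (rx - lx)       # eye-line point directly above the lip
    top = (lip[0], ly + t * (ry - ly))
    left = poly_area(np.array([left_eye, top, lip], float))
    right = poly_area(np.array([right_eye, top, lip], float))
    return left / right                 # ~1.0 for a frontal face

print(left_right_ratio((90, 120), (170, 118), (125, 210)))
```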


Shift-invariant Face Recognition Based on the Karhunen-Loeve Approximation of Amplitude Spectra of Fourier-transformed Faces (Fourier 변환된 얼굴의 진폭스펙트럼의 Karhunen-Loeve 근사 방법에 기초한 변위불변적 얼굴인식)

  • 심영미; 장주석; 김종규
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.3 / pp.97-107 / 1998
  • In face recognition based on the Karhunen-Loeve approximation, the amplitude spectra of Fourier-transformed facial images were used. We found that the use of amplitude spectra gives not only shift invariance but also some improvement in the recognition rate. This is because the distance between the varying faces of one person remains small compared with the distance between different persons. We performed computer experiments on face recognition with varying facial images obtained from a total of 55 males and 25 females. We confirmed that using the amplitude spectra of Fourier-transformed facial images gives a better recognition rate for a variety of varying facial images, including shifted ones, than using the facial images directly.
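
A minimal sketch of the described pipeline: the amplitude spectrum of the 2-D Fourier transform is unchanged by image shifts, and PCA (the discrete Karhunen-Loeve transform) compresses it before nearest-neighbor matching. The random arrays and image size are stand-ins for the 80 subjects' data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def amplitude_features(images):
    """images: (N, H, W) grayscale faces -> flattened |FFT| spectra."""
    return np.abs(np.fft.fft2(images)).reshape(len(images), -1)

rng = np.random.default_rng(0)
train_faces = rng.random((80, 64, 64))  # one image per subject, hypothetical size
train_ids = np.arange(80)

pca = PCA(n_components=50).fit(amplitude_features(train_faces))
clf = KNeighborsClassifier(n_neighbors=1).fit(
    pca.transform(amplitude_features(train_faces)), train_ids)
```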


The Accuracy of Recognizing Emotion From Korean Standard Facial Expression (한국인 표준 얼굴 표정 이미지의 감성 인식 정확률)

  • Lee, Woo-Ri; Whang, Min-Cheol
    • The Journal of the Korea Contents Association / v.14 no.9 / pp.476-483 / 2014
  • The purpose of this study was to produce suitable images of Korean emotional expressions. KSFI (Korean Standard Facial Image) AUs were produced from the Korean standard appearance and FACS (Facial Action Coding System) AUs. To establish the objectivity of the KSFI, a survey examined the emotion recognition rate and the contribution of individual facial elements to emotion recognition for the six basic emotional expression images (sadness, happiness, disgust, fear, anger, and surprise). The images of happiness, surprise, sadness, and anger showed the higher recognition accuracy, and the emotion recognition rate was determined mainly by the eyes and mouth. Based on these results, KSFI content that can be combined with AU images is proposed. In the future, the KSFI should be useful content for improving emotion recognition rates.

Model Verification Algorithm for ATM Security System (ATM 보안 시스템을 위한 모델 인증 알고리즘)

  • Jeong, Heon; Lim, Chun-Hwan; Pyeon, Suk-Bum
    • Journal of the Institute of Electronics Engineers of Korea TE / v.37 no.3 / pp.72-78 / 2000
  • In this study, we propose a model verification algorithm based on the DCT and a neural network for an ATM security system. We constructed a database of facial images after capturing thirty persons' facial images under the same illumination and at the same distance. To simulate model verification, we captured four learning images and test images per person. After detecting edges in the facial images, we detect a square characteristic area using the edge distribution; this area contains the eyebrows, eyes, nose, mouth, and cheeks. We extract characteristic vectors by summing the DCT coefficients along diagonals after computing the DCT of the characteristic area. The characteristic vectors are normalized between +1 and -1 and then used as input vectors for the neural network. Without passwords, simulations showed a 100% verification rate for learned facial images and 92% for unlearned ones. With passwords, the proposed algorithm showed a 100% verification rate in both simulations.
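
A minimal sketch of the feature extraction described: a 2-D DCT of the detected characteristic area, summed along anti-diagonals and scaled into [-1, 1] for the neural network. The patch size and vector length are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def dct_diagonal_features(patch, n_diagonals=32):
    """patch: 2-D grayscale characteristic area (eyebrows through cheeks)."""
    c = dctn(patch.astype(float), norm="ortho")
    f = np.fliplr(c)             # anti-diagonals (i + j = k) become diagonals
    w = c.shape[1]
    sums = np.array([np.trace(f, offset=w - 1 - k) for k in range(n_diagonals)])
    return sums / np.abs(sums).max()   # normalize between +1 and -1

vec = dct_diagonal_features(np.random.rand(64, 64))  # input vector for the network
```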


A Three-Dimensional Facial Modeling and Prediction System (3차원 얼굴 모델링과 예측 시스템)

  • Gu, Bon-Gwan; Jeong, Cheol-Hui; Cho, Sun-Young; Lee, Myeong-Won
    • Journal of the Korea Computer Graphics Society / v.17 no.1 / pp.9-16 / 2011
  • In this paper, we describe the development of a system for generating a 3-dimensional human face and predicting its appearance as it ages over subsequent years, using 3D scanned facial data and photo images. It is composed of 3-dimensional texture mapping functions, a facial definition parameter input tool, and 3-dimensional facial prediction algorithms. With the texture mapping functions, we can generate a new model of a given face at a specified age from a scanned facial model and photo images. Texture mapping uses three photo images of a face: a front image and two side images. The facial definition parameter input tool is a user interface required for texture mapping; it matches facial feature points between the photo images and the 3D scanned facial model to obtain high-resolution material values. Using a statistical analysis of 100 scanned facial models, we calculated material values for future facial models and predicted future facial models in high resolution.
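
A speculative sketch of the prediction step: fit a per-texel linear model of material (texture) values against age across the scanned models, then evaluate it at a target age. The linear form is an assumption; the paper states only that a statistical analysis over 100 models was used.

```python
import numpy as np

def fit_age_model(ages, textures):
    """ages: (N,); textures: (N, T) flattened material values per model."""
    A = np.stack([ages, np.ones_like(ages)], axis=1)     # slope + intercept design
    coef, *_ = np.linalg.lstsq(A, textures, rcond=None)  # (2, T) fitted coefficients
    return coef

rng = np.random.default_rng(0)
ages = rng.uniform(20.0, 60.0, size=100)  # the paper used 100 scanned models
textures = rng.random((100, 256))         # stand-in flattened material values
coef = fit_age_model(ages, textures)
predicted = coef[0] * 70.0 + coef[1]      # material values at a target age of 70
```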