• Title/Summary/Keyword: facial areas

193 search results

Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems / v.9 no.1 / pp.173-188 / 2013
  • In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to these areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs.-rest SVM, a popular multi-classification method, is employed with a Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
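The region-wise LBP encoding described in this abstract can be sketched in a few lines. This is a minimal pure-Python illustration assuming the common 8-neighbor LBP with 256-bin histograms; the Haar-cascade region extraction and the SVM stage are omitted, and none of this is the authors' implementation:

```python
def lbp_code(img, y, x):
    """8-neighbor local binary pattern code for the pixel at (y, x)."""
    c = img[y][x]
    neighbors = [img[y-1][x-1], img[y-1][x], img[y-1][x+1],
                 img[y][x+1],   img[y+1][x+1], img[y+1][x],
                 img[y+1][x-1], img[y][x-1]]
    code = 0
    for i, n in enumerate(neighbors):
        if n >= c:            # neighbor at least as bright -> bit set
            code |= 1 << i
    return code

def lbp_histogram(region):
    """256-bin histogram of LBP codes over a region's interior pixels."""
    hist = [0] * 256
    for y in range(1, len(region) - 1):
        for x in range(1, len(region[0]) - 1):
            hist[lbp_code(region, y, x)] += 1
    return hist

def region_feature_vector(regions):
    """Concatenate the per-region histograms (e.g. eyes, nose, mouth)."""
    vec = []
    for region in regions:
        vec.extend(lbp_histogram(region))
    return vec
```

Concatenating per-region histograms, rather than one histogram over the whole face, is what preserves the spatial layout of the discriminant areas.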

The Measurement of Korean Face Skin Rigidity for a Robotic Headform of Respiratory Protective Device Testing (호흡보호구 평가용 얼굴 로봇을 위한 한국인 얼굴 피부의 경도 측정)

  • Eun-Jin Jeon;Young-jae Jung;Ah-lam Lee;Hee-Eun Kim;Hee-Cheon You
    • Fashion & Textile Research Journal / v.25 no.2 / pp.248-254 / 2023
  • This study aims to measure the skin rigidity of different facial areas among Koreans and propose guidelines for each area's skin rigidity that can be applied with a facial robot for testing respiratory protective devices. The facial skin rigidity of 40 participants, which included 20 men and 20 women, aged 20 to 50, was analyzed. The rigidity measurement was conducted in 13 facial areas, including six areas in contact with the mask and seven non-contact areas, by referring to the facial measurement guidelines of Size Korea. The facial rigidity was measured using the Durometer RX-1600-OO while in a supine position. The measurement procedure involved contacting the durometer vertically with the reference point, repeating the measurement of the same area five times, and using the average of three values whose variability was between 0.4 and 4.2 Shore OO. The rigidity data analysis used precision analysis, descriptive statistics analysis, and mixed-effect ANOVA. The analysis confirmed the rigidity of the 13 measurement areas, with the highest rigidity of the face being at the nose and forehead points, with values of 51.2 and 50.8, respectively, and the lowest rigidity being at the chin and center of the cheek points, with values of 19.2 and 20.7, respectively. Significant differences between gender groups were observed in four areas: the tip of the nose, the point below the chin, the area below the lower jaw, and the inner concha.
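The reading-selection rule described above (average of three of five repeated readings whose variability lies between 0.4 and 4.2 Shore OO) could be sketched as follows. The paper does not state how multiple qualifying subsets are resolved, so the first-match choice here is an assumption:

```python
from itertools import combinations

def shore_oo_average(readings, lo=0.4, hi=4.2):
    """From five repeated durometer readings of one facial area,
    average three values whose spread (max - min, in Shore OO)
    falls between lo and hi.  The first qualifying subset is used
    (an assumption); returns None if no subset qualifies."""
    for trio in combinations(readings, 3):
        spread = max(trio) - min(trio)
        if lo <= spread <= hi:
            return sum(trio) / 3
    return None
```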

A Facial Feature Area Extraction Method for Improving Face Recognition Rate in Camera Image (일반 카메라 영상에서의 얼굴 인식률 향상을 위한 얼굴 특징 영역 추출 방법)

  • Kim, Seong-Hoon;Han, Gi-Tae
    • KIPS Transactions on Software and Data Engineering / v.5 no.5 / pp.251-260 / 2016
  • Face recognition is a technology that extracts features from a facial image, learns those features through various algorithms, and recognizes a person by comparing the learned data with the features of a new facial image. Various processing methods are required to improve the face recognition rate. In the training stage, features must be extracted from the facial image, and linear discriminant analysis (LDA) is the method most commonly used for this. LDA represents a facial image as a point in a high-dimensional space and extracts features that distinguish a person by analyzing the class information and the distribution of those points. Because the position of a point in that space is determined by the pixel values of the facial image, LDA can extract incorrect facial features if unnecessary areas or frequently changing areas are included in the image. In particular, when a camera image is used for face recognition, the size of the face varies with the distance between the face and the camera, which degrades the recognition rate. To solve this problem, this paper detects the facial area in a camera image, removes unnecessary areas using the facial feature area calculated via a Gabor filter, and normalizes the size of the facial area. Facial features are then extracted from the normalized facial image through LDA and learned by an artificial neural network for face recognition. As a result, the face recognition rate was improved by approximately 13% compared to the existing method, which includes unnecessary areas.
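The size-normalization step the paper emphasizes can be illustrated with a simple nearest-neighbor resize of the cropped face region. This is a sketch only; the Gabor-filter feature-area detection and the LDA and neural-network stages are not reproduced:

```python
def normalize_face(img, out_h, out_w):
    """Nearest-neighbor resize of a cropped face region (list of
    pixel rows) to a fixed size, so that pixel positions are
    comparable across images before LDA is applied."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```

Without this step, the same facial landmark would land on different pixel coordinates (and hence different LDA dimensions) depending on camera distance.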

Effects of the facial expression presenting types and facial areas on the emotional recognition (얼굴 표정의 제시 유형과 제시 영역에 따른 정서 인식 효과)

  • Lee, Jung-Hun;Park, Soo-Jin;Han, Kwang-Hee;Ghim, Hei-Rhee;Cho, Kyung-Ja
    • Science of Emotion and Sensibility / v.10 no.1 / pp.113-125 / 2007
  • The experiments described in this paper investigate the effects of the face, eye, and mouth areas, presented as dynamic or static facial expressions, on emotional recognition. Using seven-second displays, Experiment 1 examined basic emotions and Experiment 2 examined complex emotions. The results of the two experiments showed that dynamic facial expressions produce higher emotional recognition than static ones, and that in dynamic images the eye area yields higher recognition than the mouth area. These results suggest that dynamic properties should be considered in emotion studies using facial expressions, for complex as well as basic emotions. However, the properties of each emotion must also be considered, because not every emotion showed the dynamic-image effect equally. Furthermore, this study indicates that the facial area that conveys an emotional state most accurately depends on the particular emotion.


A Facial Animation System Using 3D Scanned Data (3D 스캔 데이터를 이용한 얼굴 애니메이션 시스템)

  • Gu, Bon-Gwan;Jung, Chul-Hee;Lee, Jae-Yun;Cho, Sun-Young;Lee, Myeong-Won
    • The KIPS Transactions:PartA / v.17A no.6 / pp.281-288 / 2010
  • In this paper, we describe the development of a system for generating a 3-dimensional human face using 3D scanned facial data and photo images, and morphing animation. The system comprises a facial feature input tool, a 3-dimensional texture mapping interface, and a 3-dimensional facial morphing interface. The facial feature input tool supports texture mapping and morphing animation - facial morphing areas between two facial models are defined by inputting facial feature points interactively. The texture mapping is done first by means of three photo images - a front and two side images - of a face model. The morphing interface allows for the generation of a morphing animation between corresponding areas of two facial models after texture mapping. This system allows users to interactively generate morphing animations between two facial models, without programming, using 3D scanned facial data and photo images.
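At its simplest, morphing between corresponding areas of two facial models reduces to linear interpolation of corresponding vertex positions. The sketch below is a generic illustration, not the authors' system; `verts_a` and `verts_b` are assumed to be lists of corresponding (x, y, z) vertex tuples:

```python
def morph(verts_a, verts_b, t):
    """Linear morph between two facial models given as lists of
    corresponding (x, y, z) vertices; t=0 yields model A, t=1 model B."""
    return [tuple(a + t * (b - a) for a, b in zip(va, vb))
            for va, vb in zip(verts_a, verts_b)]
```

An animation is then a sequence of frames with t swept from 0 to 1; the interactively entered feature points serve to establish which vertices correspond.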

Developmental Changes in Emotional-States and Facial Expression (정서 상태와 얼굴표정간의 연결 능력의 발달)

  • Park, Soo-Jin;Song, In-Hae;Ghim, Hei-Rhee;Cho, Kyung-Ja
    • Science of Emotion and Sensibility / v.10 no.1 / pp.127-133 / 2007
  • The present study investigated whether the ability to read emotional states from facial expressions changes with age (3-year-olds, 5-year-olds, and university students), sex (male, female), the presented facial area (whole face, eyes only), and the type of emotion (basic, complex). Thirty-two facial expressions that are relatively strongly linked with emotional vocabulary were used as stimuli, collected by photographing professional actors performing the expressions. Participants were presented with stories designed to evoke particular emotions and then asked to choose the facial expression the protagonist would have made in each situation. The results showed that facial expression reading improves with age. Participants also performed better with the whole face than with the eyes alone, and with basic emotions than with complex emotions. Whereas females showed no performance difference between the presented areas, males performed better in the face condition than in the eye condition. These results demonstrate that age, the presented facial area, and the type of emotion all affect the estimation of other people's emotions from facial expressions.


A Facial Expression Recognition Method Using Two-Stream Convolutional Networks in Natural Scenes

  • Zhao, Lixin
    • Journal of Information Processing Systems / v.17 no.2 / pp.399-410 / 2021
  • To address the problem that complex external variables in natural scenes strongly affect facial expression recognition results, a facial expression recognition method based on a two-stream convolutional neural network is proposed. The model introduces exponentially enhanced shared input weights before each level of convolution input, and applies soft attention modules to the spatio-temporal features of the combined static and dynamic streams. This enables the network to autonomously find the areas most relevant to the expression category and to attend to them, suppressing information from irrelevant interfering areas. To address the poor local robustness caused by lighting and expression changes, lighting preprocessing is also performed with a preprocessing chain algorithm that eliminates most lighting effects. Experimental results on the AFEW6.0 and Multi-PIE datasets show recognition rates of 95.05% and 61.40%, respectively, which are better than the compared methods.
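A generic form of the soft attention described above, softmax-weighted pooling of feature vectors by per-location relevance scores, can be sketched as follows. This is an assumption about the module's general shape; the exponentially enhanced input weights and the convolutional streams themselves are not reproduced:

```python
import math

def soft_attention(features, scores):
    """Softmax the per-location relevance scores and return the
    attention-weighted sum of the corresponding feature vectors,
    so locations with higher scores dominate the pooled feature."""
    m = max(scores)                                  # for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(features[0])
    return [sum(w * f[d] for w, f in zip(weights, features))
            for d in range(dim)]
```

Because the weights are a softmax rather than a hard mask, irrelevant areas are down-weighted smoothly and the whole module stays differentiable for end-to-end training.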

Development of a Deep Learning-Based Automated Analysis System for Facial Vitiligo Treatment Evaluation (안면 백반증 치료 평가를 위한 딥러닝 기반 자동화 분석 시스템 개발)

  • Sena Lee;Yeon-Woo Heo;Solam Lee;Sung Bin Park
    • Journal of Biomedical Engineering Research / v.45 no.2 / pp.95-100 / 2024
  • Vitiligo is a condition characterized by the destruction or dysfunction of melanin-producing cells in the skin, resulting in a loss of skin pigmentation. Facial vitiligo, specifically affecting the face, significantly impacts patients' appearance, thereby diminishing their quality of life. Evaluating the efficacy of facial vitiligo treatment typically relies on subjective assessments, such as the Facial Vitiligo Area Scoring Index (F-VASI), which can be time-consuming and subjective due to its reliance on clinical observations like lesion shape and distribution. Various machine learning and deep learning methods have been proposed for segmenting vitiligo areas in facial images, showing promising results. However, these methods often struggle to accurately segment vitiligo lesions irregularly distributed across the face. Therefore, our study introduces a framework aimed at improving the segmentation of vitiligo lesions on the face and providing an evaluation of vitiligo lesions. Our framework for facial vitiligo segmentation and lesion evaluation consists of three main steps. Firstly, we perform face detection to minimize background areas and identify the face area of interest using high-quality ultraviolet photographs. Secondly, we extract facial area masks and vitiligo lesion masks using a semantic segmentation network-based approach with the generated dataset. Thirdly, we automatically calculate the vitiligo area relative to the facial area. We evaluated the performance of facial and vitiligo lesion segmentation using an independent test dataset that was not included in the training and validation, showing excellent results. The framework proposed in this study can serve as a useful tool for evaluating the diagnosis and treatment efficacy of vitiligo.
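The third step above, computing the vitiligo area relative to the facial area, reduces to a ratio of pixel counts between the two segmentation masks. A minimal sketch, assuming equal-sized binary (0/1) face and lesion masks:

```python
def vitiligo_area_ratio(face_mask, lesion_mask):
    """Fraction of the detected facial area covered by vitiligo
    lesions, given two equal-sized binary (0/1) masks."""
    face = lesion = 0
    for face_row, lesion_row in zip(face_mask, lesion_mask):
        for f, l in zip(face_row, lesion_row):
            face += f
            lesion += f and l     # count lesion pixels inside the face only
    return lesion / face if face else 0.0
```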

Characteristics of Acupuncture at ST36 on Facial Thermography of Health Subject (족삼리(足三里) (ST36) 자침(刺鍼)이 안면부(顔面部) 한열변화(寒熱變化)에 미치는 영향(影響))

  • Kim Yong-Tae;Kim Jae-Hyo;Hwang Jae-Ho;Kim Kyung-Sik;Sohn In-Cheul
    • Korean Journal of Acupuncture / v.19 no.2 / pp.13-33 / 2002
  • This study examined the effects of acupuncture at Zusanli (ST36) on facial thermography in healthy subjects. Volunteers rested for 20-30 minutes at room temperature (23-25°C) before the examination and were instructed to abstain from smoking, drinking, and medication for the preceding day. Facial thermography was taken with an Infra-Red Imaging System (IR 2000, MEDI-CORE Co., Korea) at 15-minute intervals: 15 min before, immediately before, and 15, 30, and 45 min after acupuncture stimulation. Acupuncture was applied to the left ST36 for 30 minutes. The results showed that acupuncture at ST36 significantly decreased the temperature of all areas of the facial surface compared with the control group. The magnitude of the thermal changes following acupuncture at ST36 also increased significantly at the A1, A4, A6, A7, and A9 ROIs (regions of interest) compared with the control group. When the thermography was examined by ROI, it was clear that acupuncture at ST36 modulated specific areas along the facial pathway of the Stomach Meridian, since the thermal responses were relatively specific to the A1, A2, A5, and A9 ROIs. These results suggest that acupuncture at ST36 may modulate the thermal distribution and changes of the facial areas associated with the Stomach Meridian.


Effects of Acupuncture at Hap-Kok(LI4) on the Skin Temperature Changes of face divided by 17 area randomly in Man (합곡(合谷) 자침(刺鍼)이 면부(面部)의 구역별(區域別) 영역(領域) 온도변화(溫度變化)에 미치는 영향(影響))

  • Hong, Kyong-Jin;An, Seong-Hun;Kim, Jae-Hyo;Hwang, Jae-Ho;Kim, Kyong-Sik;Sohn, In-chul
    • Journal of Acupuncture Research / v.19 no.1 / pp.24-38 / 2002
  • This study was undertaken to examine the effects of acupuncture at LI4 on temperature changes of the facial surface, which was randomly divided into 17 areas. Volunteers rested for 20-30 minutes at room temperature (23-25°C) before acupuncture and were instructed to abstain from smoking, drinking, and medication for the preceding day. The temperature of the facial surface was measured with Digital Thermography IR 2000 (Meridian Co., Korea) at 5 min before and immediately, 5, 10, and 15 min after acupuncture at LI4. The results showed no significant thermal change in any of the 17 individual areas, but the aggregate of the thermal changes across the facial surface differed significantly (p < 0.001). The aggregate difference increased in a time-dependent manner; the changes in the 1st, 3rd, and 13th areas were comparatively smaller than in the other areas, whereas the changes in the 6th, 8th, 10th, and 15th areas were larger. This study suggests that acupuncture at LI4 enhances the reaction that maintains thermal homeostasis of the facial surface and may support the treatment of diseases in these areas.
