• Title/Summary/Keyword: facial analysis system

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup; Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion from facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli, inducing anger, fear, boredom, and a neutral state, were presented to participants, and facial temperatures were measured with an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 minute and the emotion period 1~3 minutes. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks were selected as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by kind of emotion. Linear discriminant analysis for emotion recognition showed a correct classification rate of 62.7% across the four emotions when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly to 56.7% when using only facial expression features, and was 40.2% when using only emotional state features. Conclusion: Facial expression features are essential in emotion recognition, but emotional state features are also important for classifying emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
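
A minimal sketch of the classification step described in this abstract, using scikit-learn's LinearDiscriminantAnalysis; the feature layout and randomly generated data are hypothetical stand-ins for the paper's baseline-to-emotion temperature differences:

```python
# Hypothetical sketch: classify four emotions from per-region facial
# temperature changes (emotion period minus baseline) with LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Six illustrative features: eyes, mouth, glabella (expression features)
# and forehead, nose, cheeks (emotional state features).
X = rng.normal(size=(231, 6))              # placeholder for 231 participants
y = rng.integers(0, 4, size=231)           # 0=anger, 1=fear, 2=boredom, 3=neutral

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)  # correct-classification estimate
print(f"mean accuracy: {scores.mean():.3f}")
```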

Automatic Facial Expression Recognition using Tree Structures for Human Computer Interaction (HCI를 위한 트리 구조 기반의 자동 얼굴 표정 인식)

  • Shin, Yun-Hee; Ju, Jin-Sun; Kim, Eun-Yi; Kurata, Takeshi; Jain, Anil K.; Park, Se-Hyun; Jung, Kee-Chul
    • Journal of Korea Society of Industrial Information Systems / v.12 no.3 / pp.60-68 / 2007
  • In this paper, we propose an automatic facial expression recognition system that analyzes facial expressions (happiness, disgust, surprise, and neutral) using tree structures based on heuristic rules. The facial region is first obtained using a skin-color model and connected-component analysis (CCA). Thereafter, the positions of the user's eyes are localized using a neural network (NN)-based texture classifier, and the remaining facial features are then localized using heuristics. After detection of the facial features, facial expression recognition is performed using a decision tree. To assess the validity of the proposed system, we tested it on 180 facial images from the MMI, JAFFE, and VAK databases. The results show that our system achieves an accuracy of 93%.
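
An illustrative sketch of the first stage (skin-color segmentation followed by connected-component analysis), here using OpenCV with commonly cited YCrCb skin bounds; the thresholds and the largest-component heuristic are assumptions, not the paper's exact rules:

```python
# Illustrative sketch: skin-color segmentation in YCrCb, then
# connected-component analysis to pick the face candidate region.
import cv2
import numpy as np

def detect_face_region(bgr_image: np.ndarray):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Commonly used (but illustrative) Cr/Cb skin bounds.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Choose the largest skin-colored component (label 0 is background);
    # assumes at least one skin component was found.
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    x, y, w, h, _ = stats[largest]
    return x, y, w, h   # bounding box of the face candidate
```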

The Effects of Perceived Facial Attractiveness and Appropriateness of Clothing on the Task Performance Evaluation mediated by Likability and the Trait Evaluation (지각된 얼굴 매력성과 의복 적절성이 호감도, 특질 판단을 매개하여 과제 수행능력 판단에 미치는 영향)

  • 정명선; 김재숙
    • Journal of the Korean Society of Costume / v.51 no.8 / pp.77-91 / 2001
  • The purpose of this study was to investigate the effects of perceived facial attractiveness and appropriateness of clothing on the evaluation of a target person's task performance, mediated by the subjects' likability toward and trait evaluation of the target person. The facial attractiveness of female university students was used as the index of physical attractiveness. Three levels of facial attractiveness were manipulated based on judgments by 30 female university students. Four types of clothing, perceived as appropriate for two assumed situations, were selected by female university students. Three female faces of high, medium, and low attractiveness were combined with the same body dressed in each of the four types of clothing using a CAD system, creating a total of 12 stimulus persons. The experiment used a $3\times4\times2$ randomized factorial design, with three levels of facial attractiveness (high, medium, low), four types of attire (formal-masculine, formal-feminine, casual-masculine, casual-feminine), and two contexts (job interview, dating) in which the perceptions occurred. The subjects were 524 university students (262 male, 262 female) from 3 universities in Kwangju, Korea. The data were analyzed using factor analysis, descriptive statistics, regression, and path analysis. The results were as follows: 1. In the bogus job interview, the direct effect of perceived facial attractiveness on task performance evaluation was .175 and the indirect effect mediated by likability and trait evaluation was .285 in the path analysis model. The direct effect of perceived appropriateness of clothing on task performance evaluation was .111 and the indirect effect mediated by likability only was .0564. 2. In the dating situation, the direct effect of perceived facial attractiveness on task performance evaluation was .355 and the indirect effect mediated by likability and trait evaluation was .188. The direct effect of perceived appropriateness of clothing on task performance evaluation was .108 and the indirect effect mediated by likability and trait evaluation was .060.
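
As a worked check of the path-model arithmetic above: in path analysis the total effect is the direct effect plus the indirect effect, so the job-interview coefficients reported in this abstract combine as follows:

```latex
% Total effect = direct effect + indirect effect (job-interview condition)
\[
\begin{aligned}
\text{Facial attractiveness:}    &\quad .175 + .285 = .460 \\
\text{Clothing appropriateness:} &\quad .111 + .0564 \approx .167
\end{aligned}
\]
```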

ORTHODONTIC TREATMENT RELATED TO FACIAL PATTERNS (안모유형에 따른 교정치료)

  • Hwang, Chung-Ju
    • The Korean Journal of Orthodontics / v.18 no.2 / pp.475-488 / 1988
  • Certain malocclusions are associated with specific "facial types," and it is important for the clinician to classify the common facial characteristics of each patient, because the reaction to treatment mechanics and the stability of the dentition depend upon the analysis of the facial pattern. Basically, there are 3 distinct facial types, or patterns, under which almost all malocclusions can be classified. 1. Mesofacial: the most average growth pattern. 2. Brachyfacial: a horizontal growth pattern with strong musculature, a wide dental arch, and a deep bite. 3. Dolichofacial: a vertical growth pattern with weak musculature, a narrow dental arch, and an open bite. Brachyfacial patterns show resistance to mandibular rotation during treatment, can accept a more protrusive denture, and are predominantly nonextraction cases, whereas dolichofacial patterns tend to open during treatment and require a more retracted denture to ensure post-treatment stability. Brachyfacial patterns are better treated with extrusive force systems, whereas dolichofacial patterns are treated with intrusive force systems using headgear and intermaxillary elastics.

Facial Feature Tracking from a General USB PC Camera (범용 USB PC 카메라를 이용한 얼굴 특징점의 추적)

  • 양정석; 이칠우
    • Proceedings of the Korean Information Science Society Conference / 2001.10b / pp.412-414 / 2001
  • In this paper, we describe a real-time facial feature tracker. We used only a general USB PC camera, without a frame grabber. The system achieves a rate of 8+ frames/second without any low-level library support. It tracks the pupils, nostrils, and corners of the lips. The signal from the USB camera is in YUV 4:2:0 format. We converted the signal to the RGB color model to display the image, and we interpolated the V channel of the signal for use in extracting the facial region. We then analyzed 2D blob features in the Y channel, the luminance of the image, with geometric restrictions, to locate each facial feature within the detected facial region. Our method is simple and intuitive enough to make the system work in real time.
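
A minimal sketch of the YUV 4:2:0 to RGB conversion step described above, assuming BT.601-style full-range coefficients and nearest-neighbor chroma interpolation (the paper does not specify its exact constants):

```python
# Sketch: convert planar YUV 4:2:0 to RGB. Chroma planes are upsampled
# by nearest-neighbor repetition, a simple form of interpolation.
import numpy as np

def yuv420_to_rgb(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """y is HxW; u and v are (H/2)x(W/2) chroma planes (4:2:0 subsampling)."""
    u = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
    v = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
    y = y.astype(np.float32)
    r = y + 1.402 * v                      # BT.601 full-range coefficients
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```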

Development of Emotion Recognition System Using Facial Image (얼굴 영상을 이용한 감정 인식 시스템 개발)

  • Kim, M.H.; Joo, Y.H.; Park, J.B.; Lee, J.; Cho, Y.J.
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.2 / pp.191-196 / 2005
  • Although emotion recognition is an important technology demanded in various fields, it still remains an unsolved problem. In particular, there is growing demand for emotion recognition technology based on facial images. A facial image based emotion recognition system is a complex system comprising various technologies; techniques such as facial image analysis, feature vector extraction, and pattern recognition are needed in order to develop it. In this paper, we propose a new emotion recognition system based on a previously studied facial image analysis technique. The proposed system recognizes emotion using a fuzzy classifier. A facial image database is built, and the performance of the proposed system is verified using this database.
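
The abstract does not detail its fuzzy classifier, so the following is only a minimal illustration of the idea: triangular membership functions over a hypothetical one-dimensional facial feature, with the highest-membership emotion winning. The real system operates on multi-dimensional feature vectors.

```python
# Minimal fuzzy-classifier sketch with illustrative parameters only.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Membership of x in a triangular fuzzy set with peak b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

EMOTION_SETS = {  # hypothetical (a, b, c) parameters per emotion
    "neutral":  (0.0, 0.2, 0.4),
    "happy":    (0.3, 0.5, 0.7),
    "surprise": (0.6, 0.8, 1.0),
}

def classify(feature: float) -> str:
    memberships = {e: triangular(feature, *p) for e, p in EMOTION_SETS.items()}
    return max(memberships, key=memberships.get)

print(classify(0.55))  # -> "happy"
```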

Influencing Factors Analysis of Facial Nerve Function after the Microsurgical Resection of Acoustic Neuroma

  • Hong, WenMing; Cheng, HongWei; Wang, XiaoJie; Feng, ChunGuo
    • Journal of Korean Neurosurgical Society / v.60 no.2 / pp.165-173 / 2017
  • Objective : To explore and analyze the factors influencing facial nerve function retention after microsurgical resection of acoustic neuroma. Methods : We retrospectively analyzed 105 acoustic neuroma cases treated at our hospital from October 2006 to January 2012; all patients underwent microsurgical resection of the acoustic neuroma via the suboccipital retrosigmoid approach. We reviewed individual patient data, outpatient examinations, and telephone follow-up, and used the House-Brackmann grading system to evaluate and analyze facial nerve function. Results : Among the 105 patients, the complete surgical resection rate was 80.9% (85/105), the subtotal resection rate was 14.3% (15/105), and the partial resection rate was 4.8% (5/105). The rate of anatomical facial nerve preservation was 95.2% (100/105), and the mortality rate was 1.9% (2/105). Facial nerve function at discharge, also known as immediate facial nerve function, was graded on the House-Brackmann scale: excellent facial nerve function (House-Brackmann I-II) accounted for 75.2% (79/105) of cases, House-Brackmann III-IV for 22.9% (24/105), and V-VI for 1.9% (2/105). Patients were followed up for more than one year, and the excellent facial nerve function retention rate (H-B I-II) was 74.4% (58/78). Conclusion : After surgery for acoustic neuroma, the long-term ($\geq$1 year) rate of excellent facial nerve function retention was closely related to surgical proficiency, immediate postoperative facial nerve function, tumor diameter, and whether electrophysiological monitoring techniques were used, while there was no significant correlation with the patient's age, the surgical approach, stripping of the internal auditory canal, cystic degeneration, tumor recurrence, coexisting obstructive hydrocephalus, or the duration of symptoms.

Development of a Recognition System of Smile Facial Expression for Smile Treatment Training (웃음 치료 훈련을 위한 웃음 표정 인식 시스템 개발)

  • Li, Yu-Jie; Kang, Sun-Kyung; Kim, Young-Un; Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.15 no.4 / pp.47-55 / 2010
  • In this paper, we propose a recognition system for smile facial expressions for smile treatment training. The proposed system detects face candidate regions in camera images using Haar-like features. It then verifies whether a detected face candidate region is a face or non-face using SVM (Support Vector Machine) classification. For the detected face image, it applies illumination normalization based on histogram matching in order to minimize the effect of illumination changes. In the facial expression recognition step, it computes a facial feature vector using PCA (Principal Component Analysis) and recognizes the smile expression using a multilayer perceptron artificial neural network. The proposed system lets the user train smile expressions by recognizing the user's smile in real time and displaying the amount of smile expression. Experimental results show that the proposed system improves the correct recognition rate through SVM-based face region verification and histogram-matching-based illumination normalization.
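
A hedged sketch of two stages of this pipeline, Haar-like face candidate detection and illumination normalization by histogram matching, using OpenCV; the stock frontal-face cascade and the choice of reference image are assumptions, not the paper's models:

```python
# Sketch: Haar-cascade face candidate detection plus illumination
# normalization by matching the face's histogram to a reference image.
import cv2
import numpy as np

# OpenCV's stock frontal-face model (an assumption; not the paper's own).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_candidates(frame_bgr: np.ndarray):
    """Return candidate face rectangles (x, y, w, h)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def match_histogram(gray: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map gray's intensity CDF onto the reference image's CDF."""
    src_cdf = np.cumsum(np.bincount(gray.ravel(), minlength=256)) / gray.size
    ref_cdf = np.cumsum(np.bincount(reference.ravel(), minlength=256)) / reference.size
    lut = np.clip(np.searchsorted(ref_cdf, src_cdf), 0, 255).astype(np.uint8)
    return lut[gray]
```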

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim Sung-Ho
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1478-1484 / 2004
  • This paper presents a method that controls the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in an expression space. The expression space is created from about 2,400 frames of facial expressions. To represent the state of each expression, we use the distance matrix that holds the distances between pairs of feature points on the face; the set of distance matrices is used as the space of expressions. The facial expression of the 3D avatar is controlled in real time as the user navigates the space. To support this process, we visualized the space of expressions in 2D using a Principal Component Analysis (PCA) projection. To see how effective the system is, we had users control the facial expressions of a 3D avatar with it, and we evaluate the results.
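
A small sketch of how such an expression space and its 2D visualization could be built: each frame's state is the vector of pairwise feature-point distances (the upper triangle of the distance matrix), projected with PCA. The frame and feature-point counts below are placeholders.

```python
# Sketch: expression states as pairwise-distance vectors, projected to 2-D.
import numpy as np
from itertools import combinations
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames = rng.normal(size=(2400, 20, 3))   # 2400 frames x 20 facial points (x,y,z)

def distance_vector(points: np.ndarray) -> np.ndarray:
    """Upper triangle of the pairwise distance matrix, flattened."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

states = np.stack([distance_vector(f) for f in frames])
coords2d = PCA(n_components=2).fit_transform(states)  # navigable 2-D map
```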

Comparison Analysis of Four Face Swapping Models for Interactive Media Platform COX (인터랙티브 미디어 플랫폼 콕스에 제공될 4가지 얼굴 변형 기술의 비교분석)

  • Jeon, Ho-Beom; Ko, Hyun-kwan; Lee, Seon-Gyeong; Song, Bok-Deuk; Kim, Chae-Kyu; Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.22 no.5 / pp.535-546 / 2019
  • Recently, there has been much research on whole-face replacement systems, but it is not easy to obtain stable results because of varied poses, angles, and facial diversity. To produce a natural synthesis result when replacing the face shown in a video image, technologies such as face region detection, feature extraction, face alignment, face region segmentation, 3D pose adjustment, and face transposition must all operate at a precise level, and each technology must be combinable with the others. Our analysis shows that, among face replacement technologies, facial feature point extraction and face alignment are the most difficult to implement and contribute most to the system. On the other hand, the difficulty of the face transposition and 3D pose adjustment techniques is lower, although they still need development. In this paper, we propose four face replacement models suitable for the COX platform: 2-D Faceswap, OpenPose, Deepfake, and CycleGAN. These models respectively suit frontal face pose image conversion, face pose images with active body movement, face movement of up to 15 degrees to the left and right, and Generative Adversarial Network based conversion.
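
As an illustration of the face alignment stage shared by these pipelines, here is a similarity-transform alignment from two detected eye centers to canonical positions; the canonical coordinates and the assumption that eye centers are already detected are illustrative choices, not taken from the paper:

```python
# Illustrative face alignment: rotate/scale/translate the image so the
# detected eye centers land on canonical positions in a fixed-size crop.
import cv2
import numpy as np

def align_face(image: np.ndarray, left_eye, right_eye, size: int = 256) -> np.ndarray:
    dst_l = np.float32([0.35 * size, 0.40 * size])   # canonical left eye
    dst_r = np.float32([0.65 * size, 0.40 * size])   # canonical right eye
    src_l, src_r = np.float32(left_eye), np.float32(right_eye)
    src_vec, dst_vec = src_r - src_l, dst_r - dst_l
    scale = float(np.linalg.norm(dst_vec) / np.linalg.norm(src_vec))
    angle = float(np.degrees(np.arctan2(src_vec[1], src_vec[0])))
    # Rotate about the source left eye, then move it to its canonical spot.
    m = cv2.getRotationMatrix2D((float(src_l[0]), float(src_l[1])), angle, scale)
    m[:, 2] += dst_l - src_l
    return cv2.warpAffine(image, m, (size, size))
```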