• Title/Summary/Keyword: Facial analysis


Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin;Park, Sangwook;Lee, Yongkwi;Han, Mikyong;Jang, Jong-Hyun
    • ETRI Journal / v.37 no.6 / pp.1199-1210 / 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received huge attention in the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework, where the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components - multiple AU detection, AU detection fusion, and AU-to-emotion mapping. The experimental results on two real-world face databases demonstrate an improved performance over the previous two-phase method using a single AU detector in terms of both AU detection accuracy and correct emotion recognition rate.
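The group-decision idea in this abstract can be sketched in a few lines; the detector outputs, the FACS-style AU-to-emotion rules, and all function names below are illustrative assumptions, not the authors' actual classifiers or mappings.

```python
# Sketch of two-phase recognition with detector fusion: multiple AU
# detectors vote on each AU, then the fused decisions map to an emotion.
# Rules and detector outputs are invented for illustration.

def fuse_au_decisions(detector_outputs):
    """Majority-vote fusion: each detector gives a dict {au_number: bool}."""
    fused = {}
    for au in detector_outputs[0]:
        votes = sum(1 for d in detector_outputs if d[au])
        fused[au] = votes * 2 > len(detector_outputs)
    return fused

def map_aus_to_emotion(fused, rules):
    """Pick the emotion whose required AU set best matches the fused decisions."""
    def score(required):
        return sum(1 for au in required if fused.get(au, False)) / len(required)
    return max(rules, key=lambda emo: score(rules[emo]))

# Illustrative FACS-style rules (e.g. happiness ~ AU6 + AU12).
RULES = {"happiness": [6, 12], "surprise": [1, 2, 5, 26], "sadness": [1, 4, 15]}
detectors = [
    {1: False, 2: False, 4: False, 5: False, 6: True,  12: True, 15: False, 26: False},
    {1: False, 2: False, 4: False, 5: False, 6: True,  12: True, 15: False, 26: True},
    {1: True,  2: False, 4: False, 5: False, 6: False, 12: True, 15: False, 26: False},
]
fused = fuse_au_decisions(detectors)
print(map_aus_to_emotion(fused, RULES))  # → happiness (AU6 and AU12 win the vote)
```

Majority voting is only one possible fusion rule; the paper's point is that combining detectors that err on different AUs reduces the error passed into the second phase.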

Structuring Program to Improve Unbalance of Woman's Face (여성 얼굴의 불균형 개선을 위한 프로그램 구축)

  • Kim, Ae-Kyung;Lee, Kyung-Hee
    • Fashion & Textile Research Journal / v.13 no.3 / pp.398-408 / 2011
  • This study structures a facial-image improvement program through an experimental study aimed at improving the unbalance of women's faces, with the goal of raising individual self-satisfaction and making social life more effective and successful in the field of image making. The experiment was conducted by selecting three subjects for 12 weeks and checking facial unbalance through measurement and visual analysis, infrared thermography, and expert evaluation. Subject 1, who had a severely distorted chin line and mouth area, showed an effect in approximately four weeks: the facial outline became softer, turning the entire image softer and more feminine. Subject 2 had severe distortion in the location and size of the eyes and nose; the skin improved first, followed by the eyes becoming clearer and their left-right positions evening out. Subject 3 had a twisted nose and lower chin, but after two weeks the eye area and skin improved and the left and right chin widths became similar. On the basis of these results, a program to effectively improve the image by resolving facial unbalance was structured and presented. The program consists of training in breathing, face washing, and facial muscle exercises.

Study for Classification of Facial Expression using Distance Features of Facial Landmarks (얼굴 랜드마크 거리 특징을 이용한 표정 분류에 대한 연구)

  • Bae, Jin Hee;Wang, Bo Hyeon;Lim, Joon S.
    • Journal of IKEEE / v.25 no.4 / pp.613-618 / 2021
  • Facial expression recognition has long been a subject of continuous research in various fields. In this paper, the relationship between landmarks is analyzed using features obtained by calculating the distances between facial landmarks in an image, and five facial expressions are classified. We increased data and label reliability through a labeling process involving multiple observers. In addition, faces were detected in the original data, and landmark coordinates were extracted and used as features. A genetic algorithm was used to select the features most helpful for classification. We performed facial expression classification and analysis with the proposed method, and the results show its validity and effectiveness.
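As a rough illustration of the distance-feature step described above (the landmark coordinates are made up, and the paper's landmark detector and genetic-algorithm selection are not reproduced):

```python
# All pairwise Euclidean distances between 2D landmarks form the
# feature vector; n landmarks yield n*(n-1)/2 distance features.
import math
from itertools import combinations

def landmark_distance_features(landmarks):
    """Pairwise Euclidean distances between (x, y) landmark coordinates."""
    return [math.dist(p, q) for p, q in combinations(landmarks, 2)]

# Hypothetical 4-landmark face: 4 choose 2 = 6 distance features.
pts = [(0, 0), (3, 0), (0, 4), (3, 4)]
feats = landmark_distance_features(pts)
print(len(feats))  # → 6
```

Distances are invariant to translation, which is one reason distance features are preferred over raw coordinates for expression classification.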

Surgical Index for Bone Shaving Using Rapid Prototyping Model: Technical Proposal for Treatment of Fibrous Dysplasia (Rapid Prototyping 모델을 이용한 골삭제을 위한 외과적 지표;섬유성 골이형성증 치료를 위한 기술적 제안)

  • Kim, Woon-Kyu
    • Maxillofacial Plastic and Reconstructive Surgery / v.23 no.4 / pp.366-375 / 2001
  • Bone shaving is the general method of surgical correction for facial asymmetry in patients with fibrous dysplasia. However, deciding the amount of bone shaving during preoperative planning is very difficult when aiming for an ideal occlusal relationship and a harmonious face. Preoperative planning for facial asymmetry with fibrous dysplasia is generally confirmed by simulation surgery based on clinical examination, radiographic analysis, and analysis of a facial study model, but this method cannot accurately predict the postoperative result. Using a computed-tomography-based rapid prototyping (RP) model, the facial skeleton can be duplicated and three-dimensional simulation surgery performed. After fabricating a postoperative study model by preoperative bone shaving, preoperative and postoperative surgical indices were made with an omnivacuum and clear acrylic resin. The amount of bone shaving is confirmed at operation by superimposition of the surgical index. We performed surgical correction in facial asymmetry patients with fibrous dysplasia using the surgical index and the prototyping model, and obtained favorable results.


Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.2 / pp.105-110 / 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need technologies that recognize emotion the same way from combined information. In this paper, we recognize five emotions (normal, happiness, anger, surprise, sadness) from speech signals and facial images, and propose a multimodal method that fuses the two recognition results. Emotion recognition from the speech signal and from the facial image each uses Principal Component Analysis (PCA), and the multimodal stage fuses the two results using a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion offers a better emotion recognition rate than either the facial image or the speech signal alone.
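A minimal sketch of S-type membership fusion in the spirit of this abstract; the membership parameters, the per-emotion scores, and the additive combination rule are assumptions for illustration, not the authors' exact formulation.

```python
def s_membership(x, a, b):
    """Standard S-shaped membership function rising from 0 at a to 1 at b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    m = (a + b) / 2
    if x <= m:
        return 2 * ((x - a) / (b - a)) ** 2
    return 1 - 2 * ((x - b) / (b - a)) ** 2

def fuse(speech_scores, face_scores, a=0.2, b=0.8):
    """Map each modality's per-emotion score through the S-function, sum, and pick the max."""
    combined = {e: s_membership(speech_scores[e], a, b) + s_membership(face_scores[e], a, b)
                for e in speech_scores}
    return max(combined, key=combined.get)

# Hypothetical per-emotion confidence scores from the two recognizers.
speech = {"happiness": 0.7, "anger": 0.4, "sadness": 0.2}
face = {"happiness": 0.5, "anger": 0.6, "sadness": 0.3}
print(fuse(speech, face))  # → happiness
```

The S-function suppresses weak, ambiguous scores and amplifies confident ones, which is how decision fusion can beat either modality alone.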

Landmark Selection Using CNN-Based Heat Map for Facial Age Prediction (안면 연령 예측을 위한 CNN기반의 히트 맵을 이용한 랜드마크 선정)

  • Hong, Seok-Mi;Yoo, Hyun
    • Journal of Convergence for Information Technology / v.11 no.7 / pp.1-6 / 2021
  • The purpose of this study is to improve the performance of an artificial neural network system for facial image analysis through an image landmark selection technique. For landmark selection, a CNN-based multi-layer ResNet model for classifying facial image age is required. From the configured ResNet model, a heat map is extracted that detects the change of the output node according to a change of an input node. By combining multiple extracted heat maps, facial landmarks related to age classification are created. The importance of each pixel location can be analyzed through these landmarks, and by removing pixels with low weights, a significant amount of input data can be eliminated.
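The heat-map idea (how much the output changes when an input changes) can be mimicked with a toy occlusion-sensitivity sketch; the stand-in "model" below is a trivial function rather than the paper's ResNet, and the thresholding rule is an assumption.

```python
# Perturb each input pixel, record the change in the model output,
# and keep the pixel locations whose sensitivity exceeds a threshold.

def sensitivity_map(image, model):
    """Heat map: |output change| when each pixel is zeroed out."""
    base = model(image)
    heat = []
    for i, row in enumerate(image):
        heat_row = []
        for j, _ in enumerate(row):
            perturbed = [r[:] for r in image]
            perturbed[i][j] = 0
            heat_row.append(abs(model(perturbed) - base))
        heat.append(heat_row)
    return heat

def select_landmarks(heat, threshold):
    """Keep only pixel locations whose sensitivity exceeds the threshold."""
    return [(i, j) for i, row in enumerate(heat)
            for j, v in enumerate(row) if v > threshold]

def toy_model(img):
    return img[0][0] * 2 + img[1][1]  # only two pixels influence the output

img = [[3, 1], [1, 4]]
heat = sensitivity_map(img, toy_model)
print(select_landmarks(heat, 0))  # → [(0, 0), (1, 1)]
```

Pixels that never move the output get zero heat and can be dropped, which is the data-reduction effect the abstract describes.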

Correlation of Internal & External Factors with the Beginning Period of Improvement in Idiopathic Facial Paralysis (특발성 안면마비에서 내외적 요인과 호전시기와의 상관관계)

  • Sung, Hee Jin;Lim, Su Sie;Choi, Hyun Young;Lee, Eun Yong;Roh, Jung Du;Lee, Cham Kyul
    • Journal of Acupuncture Research / v.33 no.1 / pp.57-68 / 2016
  • Objectives: The purpose of this study was to investigate the correlation between patients' characteristics and the beginning period of improvement, and to contribute to the efficient management of Bell's palsy patients. Methods: The subjects were 94 patients with Bell's palsy. The study used an administrative database that included patients' characteristics and clinical information. The beginning period of improvement was analyzed by gender, hypertension, diabetes, drinking history, smoking history, and facial palsy history using the independent-sample t-test; by age, House-Brackmann grade, Yanagihara scale, and period of receiving Korean medical treatment using Pearson's correlation analysis; and by associated symptoms and season using one-way analysis of variance. Results: 1. No significant correlations were found between the beginning period of improvement and gender, age, season, smoking history, drinking history, facial palsy history, House-Brackmann grade, Yanagihara scale, hypertension, diabetes, or associated symptoms. 2. There was a significant correlation between the period of receiving Korean medical treatment and the beginning period of improvement. Conclusion: The earlier patients received Korean medicine treatment after onset, the earlier the beginning of improvement could be seen. These findings are expected to help establish a baseline for the efficient management of facial paralysis patients.

Comparison of Computer and Human Face Recognition According to Facial Components

  • Nam, Hyun-Ha;Kang, Byung-Jun;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society / v.15 no.1 / pp.40-50 / 2012
  • Face recognition is a biometric technology used to identify individuals based on facial feature information. Previous studies of face recognition used features including the eyes, mouth, and nose; however, there have been few studies on how other facial components, such as the eyebrows and chin, affect recognition performance. We measured the recognition accuracy associated with these facial components and compared the differences between computer-based and human-based facial recognition methods. This research is novel in four ways compared to previous work. First, we measured the effect of components such as the eyebrows and chin, and compared the accuracy of computer-based face recognition with that of human-based face recognition according to facial components. Second, for computer-based recognition, facial components were automatically detected using the Adaboost algorithm and an active appearance model (AAM), and user authentication was performed with a face recognition algorithm based on principal component analysis (PCA). Third, we experimentally showed that the number of facial features included (eyebrows, eyes, nose, mouth, and chin) had a greater impact on the accuracy of human-based face recognition, whereas the consistent inclusion of certain features, such as the chin area, had more influence on the accuracy of computer-based face recognition, because a computer classifies faces using the pixel values of facial images. Fourth, we experimentally showed that the eyebrow feature enhanced the accuracy of computer-based face recognition, although the problem of occlusion by hair must be solved before the eyebrow feature can be used in practice.
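The PCA-based matching step mentioned in this abstract can be sketched as projection plus nearest-neighbour search; the basis vectors, mean, and gallery values below are toy assumptions, and the paper's Adaboost/AAM detection stage is omitted.

```python
# Eigenface-style matching: centre a feature vector, project it onto
# fixed PCA components, and match by nearest neighbour in that subspace.

def project(vec, mean, components):
    """Project a mean-centred vector onto PCA components (dot products)."""
    centred = [v - m for v, m in zip(vec, mean)]
    return [sum(c * x for c, x in zip(comp, centred)) for comp in components]

def nearest(probe, gallery):
    """Nearest-neighbour identity match in the projected space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(gallery, key=lambda name: dist2(probe, gallery[name]))

# Toy basis and enrolled identities (all values illustrative).
mean = [2.0, 2.0, 2.0]
components = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
gallery = {"alice": project([3, 2, 2], mean, components),
           "bob": project([1, 4, 2], mean, components)}
probe = project([2.9, 2.1, 2.0], mean, components)
print(nearest(probe, gallery))  # → alice
```

In a real system the components come from an eigendecomposition of training-face covariance; here they are fixed so the matching step stands alone.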

A Retrospective Analysis of 303 Cases of Facial Bone Fracture: Socioeconomic Status and Injury Characteristics

  • Kim, Byeong Jun;Lee, Se Il;Chung, Chan Min
    • Archives of Craniofacial Surgery / v.16 no.3 / pp.136-142 / 2015
  • Background: The incidence and etiology of facial bone fracture differ widely according to time and geographic setting. Because of this, prevention and management of facial bone fracture requires ongoing research. This study examines the relationship between socioeconomic status and the incidence of facial bone fractures in patients who had been admitted for facial bone fractures. Methods: A retrospective study was performed for all patients admitted for facial bone fracture at the National Medical Center (Seoul, Korea) from 2010 to 2014. We sought correlations among age, gender, fracture type, injury mechanism, alcohol consumption, and type of medical insurance. Results: Of the 303 patients meeting inclusion criteria, 214 (70.6%) were enrolled in National Health Insurance (NHI), 46 (15.2%) had Medical Aid, and 43 (14.2%) were homeless. The main causes of facial bone fractures were accidental trauma (51.4%), physical altercation (23.1%), and traffic accident (14.2%). On Pearson's chi-square test, alcohol consumption was correlated significantly with accidental trauma (p<0.05). In addition, the proportion of alcohol consumption leading to facial bone fractures differed significantly in the homeless group compared to the NHI group and the Medical Aid group (p<0.05). Conclusion: We found a significant inverse correlation between economic status and the incidence of facial bone fractures caused by alcohol consumption. Our findings indicate that more elaborate guidelines and prevention programs are needed for socioeconomically marginalized populations.
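Pearson's chi-square test used in this study can be computed by hand for a 2x2 contingency table; the counts below are hypothetical and do not come from the paper.

```python
# Chi-square statistic for a 2x2 table: sum of (observed - expected)^2 / expected,
# with expected counts from the row and column marginals.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = insurance group, columns = alcohol involved / not.
table = [[30, 70], [60, 40]]
stat = chi_square_2x2(table)
print(round(stat, 2))  # → 18.18
```

With 1 degree of freedom, any statistic above 3.84 is significant at p < 0.05, which is the comparison behind the abstract's reported results.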

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim Sung-Ho
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1478-1484 / 2004
  • This paper presents a method that controls the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in an expression space. The space is created from about 2,400 frames of facial expressions. To represent the state of each expression, we use a distance matrix that holds the distances between pairs of feature points on the face; the set of distance matrices forms the space of expressions. The facial expression of the 3D avatar is controlled in real time as the user navigates this space. To help this process, we visualize the expression space in 2D using Principal Component Analysis (PCA) projection. To evaluate the system's effectiveness, we had users control the facial expressions of a 3D avatar with it, and we report the results.
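The distance-matrix representation of an expression state described above can be sketched directly; the feature points and the Frobenius-style comparison are illustrative assumptions (the paper navigates a PCA-projected space rather than comparing raw matrices this way).

```python
# Each expression frame becomes a symmetric matrix of pairwise distances
# between facial feature points; matrices can then be compared as states.
import math

def distance_matrix(points):
    """Symmetric matrix of Euclidean distances between feature points."""
    n = len(points)
    return [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]

def expression_distance(m1, m2):
    """Frobenius-style distance between two expression states."""
    return math.sqrt(sum((a - b) ** 2
                         for r1, r2 in zip(m1, m2)
                         for a, b in zip(r1, r2)))

# Hypothetical 3-point faces: a neutral frame and a slightly changed one.
neutral = distance_matrix([(0, 0), (4, 0), (2, 3)])
smile = distance_matrix([(0, 0), (4, 0), (2, 2)])
print(expression_distance(neutral, neutral))  # → 0.0
```

Like the distance features in the landmark-classification paper above, this representation is invariant to head translation, which makes frames comparable across a capture session.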
