• Title/Abstract/Keyword: Facial analysis


얼굴특징자 정보를 이용한 인터넷 기반 얼굴관상 해석 및 얼굴아바타 자동생성시스템 (Facial Phrenology Analysis and Automatic Face Avatar Drawing System Based on Internet Using Facial Feature Information)

  • 이응주
    • 한국멀티미디어학회논문지 / Vol.9 No.8 / pp.982-999 / 2006
  • This paper proposes an Internet-based facial phrenology analysis and automatic face-avatar content generation system that uses combined color information and geometric facial information. The proposed system detects the face region by a logical AND of the Cr component of the YCbCr color model and the I component of the YIQ color model. Facial features are extracted from the geometric information of the detected face region, and each feature is sub-classified to interpret the facial phrenology. From the extracted and classified features, the system also automatically generates the face-avatar content that best matches an individual's face. Experimental results show that, compared with existing face recognition methods, the proposed method supports real-time face detection and recognition as well as quantitative phrenology analysis and automatic face-avatar generation.

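The Cr∧I skin-color step described in the abstract can be sketched in a few lines. The color-conversion coefficients below are the standard BT.601 (Cr) and NTSC (I) ones; the band thresholds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def skin_mask(rgb):
    """Face-region candidate mask: logical AND of a Cr-band test (YCbCr)
    and an I-band test (YIQ). Thresholds are illustrative assumptions."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cr = 128 + 0.5 * r - 0.4187 * g - 0.0813 * b   # BT.601 Cr, 0-255 range
    i = 0.596 * r - 0.274 * g - 0.322 * b          # NTSC YIQ in-phase component
    return (cr > 133) & (cr < 173) & (i > 20) & (i < 90)

# toy 4x4 image: top half skin-toned, bottom half blue background
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2] = (200, 120, 90)   # skin-like color
img[2:] = (0, 0, 255)      # background
mask = skin_mask(img)
```

In a real pipeline the mask would be cleaned with morphology before the geometric feature extraction the paper describes.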

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • 대한인간공학회지 / Vol.31 No.3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion from facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli inducing anger, fear, boredom, and a neutral state were presented to participants, and facial temperatures were measured with an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and the emotion state were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by the kind of emotion. Linear discriminant analysis for emotion recognition showed a correct classification rate of 62.7% across the four emotions when both facial expression features and emotional state features were used. The accuracy decreased slightly but significantly to 56.7% with facial expression features alone, and to 40.2% with emotional state features alone. Conclusion: Facial expression features are essential for emotion recognition, but emotional state features are also important for classifying the emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
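The linear discriminant analysis step can be illustrated on synthetic temperature-difference features. The six region means and their spreads below are made-up numbers for two of the emotions, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 60
# hypothetical baseline-minus-emotion temperature differences (degC) for six
# regions: eyes, mouth, glabella (expression) + forehead, nose, cheeks (state)
anger = rng.normal(loc=[-0.4, -0.3, -0.5, -0.2, -0.6, -0.1], scale=0.1, size=(n, 6))
fear = rng.normal(loc=[-0.1, -0.5, -0.2, -0.4, -0.1, -0.3], scale=0.1, size=(n, 6))
X = np.vstack([anger, fear])
y = np.array([0] * n + [1] * n)

# fit LDA and measure training accuracy on the synthetic two-class problem
lda = LinearDiscriminantAnalysis().fit(X, y)
acc = lda.score(X, y)
```

With four emotion classes and held-out data, as in the study, the same `fit`/`score` calls apply; the per-feature-subset comparison amounts to refitting on column subsets of `X`.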

PCA을 이용한 얼굴 표정의 감정 인식 방법 (Emotion Recognition Method of Facial Image using PCA)

  • 김호덕;양현창;박창현;심귀보
    • 한국지능시스템학회논문지 / Vol.16 No.6 / pp.772-776 / 2006
  • Most research on facial expression recognition works with frontal face images. The facial regions that most strongly influence expression recognition are the eyes and the mouth, so researchers have centered expression recognition and synthesis studies on the eyes, eyebrows, and mouth. In everyday settings, however, a camera can rarely capture the rapid changes of the pupils, and many people wear glasses. This study therefore attempts expression recognition for faces whose eye region is occluded, using Principal Component Analysis (PCA).
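The eye-occluded PCA idea can be sketched with toy data. The 8×8 "faces", the two class means, and the nearest-neighbour classifier in eigenspace are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy 8x8 "face" images, flattened; rows 1-2 play the role of the eye band
def make_faces(mean_level, n=20):
    imgs = rng.normal(mean_level, 0.05, size=(n, 8, 8))
    imgs[:, 1:3, :] = 0.0          # occlude the eye region, as in the paper's setting
    return imgs.reshape(n, -1)

happy = make_faces(0.8)            # hypothetical class intensity levels
neutral = make_faces(0.2)
X = np.vstack([happy, neutral])
y = np.array([0] * 20 + [1] * 20)

# PCA via SVD on mean-centered data
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:5]                         # top-5 principal axes
Z = (X - mu) @ W.T                 # project to eigenspace

# leave-one-out 1-nearest-neighbour classification in the reduced space
def predict(i):
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf
    return y[np.argmin(d)]

acc = np.mean([predict(i) == y[i] for i in range(len(y))])
```

Because the eye rows are zeroed before PCA, the principal axes are learned entirely from the unoccluded regions, which is the point of the study.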

Ultrasonography for Facial Nerve Palsy: A Systematic Review and Meta-Analysis Protocol

  • Seojung Ha;Bo-In Kwon;Joo-Hee Kim
    • Journal of Acupuncture Research / Vol.41 No.1 / pp.63-68 / 2024
  • Background: Facial nerve palsy presents a significant healthcare challenge, impacting daily life and social interactions. This systematic review investigates the potential utility of ultrasonography as a diagnostic tool for facial nerve palsy. Methods: Electronic searches will be conducted across various databases, including MEDLINE, EMBASE, CENTRAL (Cochrane Central Register of Controlled Trials), CNKI (China National Knowledge Infrastructure), KMBASE (Korean Medical Database), ScienceON, and OASIS (Oriental Medicine Advanced Searching Integrated System), up to February 2024. The primary outcome will focus on ultrasonography-related parameters, such as facial nerve diameter and muscle thickness. Secondary outcomes will encompass clinical measurements, including facial nerve grading scales and electrodiagnostic studies. The risk of bias in individual studies will be assessed using the Cochrane Risk of Bias tool, while the Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) methodology will be used to evaluate the overall quality of evidence. Conclusion: This study aims to review the existing evidence and evaluate the diagnostic and prognostic value of ultrasonography for peripheral facial nerve palsy.

지각된 얼굴 매력성과 의복 적절성이 호감도, 특질 판단을 매개하여 과제 수행능력 판단에 미치는 영향 (The Effects of Perceived Facial Attractiveness and Appropriateness of Clothing on the Task Performance Evaluation mediated by Likability and the Trait Evaluation)

  • 정명선;김재숙
    • 복식 / Vol.51 No.8 / pp.77-91 / 2001
  • The purpose of this study was to investigate the effects of perceived facial attractiveness and appropriateness of clothing on the evaluation of a target person's task performance, mediated by the subjects' likability toward and trait evaluation of the target person. The facial attractiveness of female university students was used as an index of physical attractiveness. Three levels of facial attractiveness were manipulated based on judgments by 30 female university students, and four types of clothing perceived as appropriate for two assumed situations were selected by female university students. Three female faces of high, medium, and low attractiveness were composited onto the same body dressed in each of the four types of clothing using a CAD system, yielding a total of 12 stimulus persons. The experiment was a $3\times4\times2$ randomized factorial design, with three levels of facial attractiveness (high, medium, low), four types of attire (formal-masculine, formal-feminine, casual-masculine, casual-feminine), and two contexts (job interview, dating) in which perception occurred. The subjects were 524 male and female university students (262 male, 262 female) from 3 universities in Kwangju, Korea. The data were analyzed using factor analysis, descriptive statistics, regression, and path analysis. The results were as follows: 1. In the bogus job interview, the direct effect of perceived facial attractiveness on task performance evaluation was .175 and the indirect effect mediated by likability and trait evaluation was .285 in the path analysis model. The direct effect of perceived appropriateness of clothing on task performance evaluation was .111 and the indirect effect, mediated by likability only, was .0564. 2. In the dating situation, the direct effect of perceived facial attractiveness on task performance evaluation was .355 and the indirect effect mediated by likability and trait evaluation was .188. The direct effect of perceived appropriateness of clothing on task performance evaluation was .108 and the indirect effect mediated by likability and trait evaluation was .060.

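The direct/indirect decomposition such a path model reports can be sketched as a simple mediation model estimated by ordinary least squares. The variable names and population coefficients below are invented for illustration; the indirect effect is the product of the path coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# hypothetical standardized variables: attractiveness -> likability -> task evaluation
attract = rng.normal(size=n)
likab = 0.5 * attract + rng.normal(scale=0.5, size=n)                  # a-path = 0.5
task = 0.2 * attract + 0.6 * likab + rng.normal(scale=0.5, size=n)     # c' = 0.2, b = 0.6

def ols(y, *xs):
    """Least-squares slopes (intercept dropped) of y on the given regressors."""
    X = np.column_stack([np.ones(n), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(likab, attract)[0]          # effect of attractiveness on the mediator
c_direct, b = ols(task, attract, likab)  # direct effect and mediator effect
indirect = a * b                    # mediated (indirect) effect, as in path analysis
```

The study's full model has two mediators (likability and trait evaluation); that adds one more regression per mediator but the product-of-paths logic is the same.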

A Local Feature-Based Robust Approach for Facial Expression Recognition from Depth Video

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.10 No.3 / pp.1390-1403 / 2016
  • Facial expression recognition (FER) plays a significant role in computer vision, pattern recognition, and image processing applications such as human-computer interaction, as it provides rich information about people's emotions. For video-based facial expression recognition, depth cameras can be better candidates than RGB cameras: a person's face cannot easily be identified from distance-based depth video, so depth cameras also resolve some privacy issues that arise with RGB faces. A good FER system relies heavily on the extraction of robust features as well as on the recognition engine. In this work, an efficient novel approach is proposed to recognize facial expressions from time-sequential depth videos. First, efficient Local Binary Pattern (LBP) features are obtained from the time-sequential depth faces and further transformed by Generalized Discriminant Analysis (GDA) to make them more robust; finally, the LBP-GDA features are fed into Hidden Markov Models (HMMs) to train and recognize the different facial expressions. The proposed depth-based facial expression recognition approach is compared with conventional approaches such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and outperforms them with better recognition rates.
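The LBP stage of that pipeline can be sketched in a few lines; the GDA and HMM stages are omitted here, and the tiny "depth image" is of course illustrative.

```python
import numpy as np

def lbp8(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel becomes an 8-bit
    code from thresholding its 8 neighbours against the centre pixel."""
    c = img[1:-1, 1:-1]
    # neighbour offsets clockwise from top-left, each a shifted view of img
    offs = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

depth = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]], dtype=np.uint8)
codes = lbp8(depth)
hist = np.bincount(codes.ravel(), minlength=256)  # LBP histogram feature vector
```

Per-frame LBP histograms like `hist` would form the observation sequence that, after the GDA projection, is fed to the per-expression HMMs.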

안면비대칭 3차원 CT 분석 (Three dimensional CT analysis of facial asymmetry)

  • 윤숙자;임회정;강병철;황현식
    • Imaging Science in Dentistry / Vol.37 No.1 / pp.45-51 / 2007
  • Purpose: This study aimed to identify the range of normal facial asymmetry using three-dimensional CT and to develop a simple method for diagnosing facial asymmetry. Materials and Methods: Twenty-eight adults with normal occlusion (16 males and 12 females; mean age 24 years and 1 month), whose faces were assessed as symmetric by an orthodontist, were selected. Three-dimensional reconstructions were obtained from spiral CT scans, and an oral and maxillofacial radiologist evaluated nineteen anatomic landmarks in three-dimensional coordinates. A facial asymmetry index was calculated for each landmark. Results: The range of normal facial asymmetry at each landmark was identified using the mean and standard deviation of the facial asymmetry index. Conclusions: The range of normal facial asymmetry identified in this study may be used as a diagnostic standard for facial asymmetry analysis.

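One simple way to realize a per-landmark asymmetry index (an assumption for illustration; the abstract does not give the exact formula) is to mirror the left-side landmark across the midsagittal plane and measure its distance to the right-side counterpart, which is zero for perfect symmetry.

```python
import numpy as np

def asymmetry_index(left, right):
    """Distance between the right landmark and the mirror image of its
    left counterpart, assuming the midsagittal plane is x = 0."""
    mirrored = left * np.array([-1.0, 1.0, 1.0])   # reflect across x = 0
    return float(np.linalg.norm(right - mirrored))

# illustrative 3D coordinates (mm) for a paired landmark such as gonion
gonion_l = np.array([-45.0, -20.0, -60.0])
gonion_r = np.array([47.0, -21.0, -60.0])
idx = asymmetry_index(gonion_l, gonion_r)
```

Computing `idx` over each paired landmark in a normal-occlusion sample gives the mean and standard deviation from which a normal range can be stated.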

삼차원 전산화 단층촬영술을 이용한 안모 비대칭환자의 골격 분석 (SKELETAL PATTERN ANALYSIS OF FACIAL ASYMMETRY PATIENT USING THREE DIMENSIONAL COMPUTED TOMOGRAPHY)

  • 최정구;민승기;오승환;권경환;최문기;이준;오세리;유대현
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / Vol.34 No.6 / pp.622-627 / 2008
  • In orthognathic surgery, precise analysis and diagnosis are essential for successful results. For facial asymmetry patients, traditional 2D image analysis has relied on lateral and P-A cephalometric views, skull PA, panorama, submentovertex views, and so on. With these methods, however, clinicians sometimes misdiagnose because exact landmarks cannot be found due to superimposition; moreover, the image can be magnified and distorted by the projection technique or the position of the patient's skull. To overcome these defects, analysis using 3D CT has been introduced, which permits precise analysis on artifact-free images in which exact landmarks can be located without interference from superimposition. We therefore reviewed the relationship between various skeletal landmarks of the mandible or cranial base and facial asymmetry using this predictable 3D CT analysis. We selected patients who visited our department for correction of facial asymmetry during 2003-2007 and who underwent 3D CT imaging for diagnosis. The CT data were reconstructed into 3D images using the V-Work program (Cybermed Inc., Seoul, Korea), and we analyzed the relationship between facial asymmetry and various contributing skeletal factors. The difference in mandibular ramus height between the right and left sides was the factor that most strongly expressed facial asymmetry, while no relationship was found between the cranial base and facial asymmetry in this study. The angle between the facial midline and the mandibular ramus divergence also had a significant relationship with facial asymmetry.
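The ramus-height comparison the study found most predictive can be sketched from 3D landmark coordinates; the condylion/gonion positions below are invented for illustration.

```python
import numpy as np

# illustrative 3D landmarks (mm): condylion (top of the ramus) and gonion
# (mandibular angle) on each side; ramus height = condylion-gonion distance
cond_r, gon_r = np.array([50.0, 10.0, 0.0]), np.array([48.0, -40.0, -5.0])
cond_l, gon_l = np.array([-50.0, 10.0, 0.0]), np.array([-49.0, -36.0, -4.0])

h_right = np.linalg.norm(cond_r - gon_r)
h_left = np.linalg.norm(cond_l - gon_l)
diff = abs(h_right - h_left)   # right-left ramus height difference (mm)
```

A larger `diff` corresponds to greater mandibular asymmetry; the study correlated such differences with the clinical grade of facial asymmetry.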

표정 분석 프레임워크 (Facial Expression Analysis Framework)

  • 지은미
    • 한국컴퓨터산업학회논문지 / Vol.8 No.3 / pp.187-196 / 2007
  • People express their emotions through facial expressions, whether consciously or not. Attempts to recognize these expressions began with a few psychologists and, over the past decade, have also drawn the attention of computer scientists. Facial expression recognition is a promising field that can be applied wherever human-computer interfaces are used. Despite extensive research, however, practical systems remain rare because of difficulties such as illumination change, resolution, and high-dimensional information processing. This paper describes a basic framework for facial expression analysis, explains the necessity of each stage, surveys research trends abroad, and analyzes domestic studies on facial expressions, in the hope of helping researchers who wish to contribute to facial expression analysis in Korea.


수정메이크업을 위한 성인 여성의 얼굴 유형 분석 (Facial Type Analysis of Adult Women for Correct Make-up)

  • 이경화;김정희
    • 한국의류학회지 / Vol.31 No.11 / pp.1487-1499 / 2007
  • In this study, photographs of 600 Korean females aged 20 to 50 years were measured indirectly with the Venus Face 2D program, and the measurements were analyzed statistically. The purpose of the study was to differentiate the facial types of adult women for the beauty industry. Factor analysis selected 6 key factors of facial shape: head height (factor 1), head width (factor 2), side face width (factor 3), head width and circumference (factor 4), face length (factor 5), and side face width (factor 6). Using these 6 factors, we categorized faces into 5 types covering the most common facial shapes: Oblong face (type 1), Square face (type 2), Oval face (type 3), Round face (type 4), and Triangle face (type 5). The facial type analysis showed Round face (26.6%), Triangle face (25.3%), Oval face (22.3%), Square face (20.0%), and Oblong face (5.7%).
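The factor-analysis-then-cluster workflow can be sketched with synthetic measurements; the two face "types", their measurement means, and the choice of k are illustrative assumptions rather than the study's data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# hypothetical facial measurements (face length, face width, jaw width, head
# height; cm) for two underlying types: "oblong" (long, narrow) vs "round"
oblong = rng.normal([19.0, 13.0, 10.0, 18.5], 0.3, size=(50, 4))
round_ = rng.normal([17.0, 15.5, 12.5, 16.5], 0.3, size=(50, 4))
X = np.vstack([oblong, round_])

# reduce to 2 latent factors, then cluster the factor scores into k types
scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

# the two synthetic types should land in two distinct clusters
purity = max(labels[:50].mean(), 1 - labels[:50].mean())
```

The study used 6 factors and 5 types; that changes only `n_components` and `n_clusters`, while the factor-then-cluster structure stays the same.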