• Title/Abstract/Keyword: facial features


특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식 (Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression)

  • 노성규;박한훈;신홍창;진윤종;박종일
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2007년도 학술대회 1부 / pp.667-674 / 2007
  • Facial expressions provide significant clues about one's emotional state; however, it has always been a great challenge for machines to recognize facial expressions effectively and reliably. In this paper, we report a method of feature-based adaptive motion energy analysis for recognizing facial expression. Our method optimizes the information-gain heuristics of an ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use the minimal reasonable set of facial features, suggested by the information-gain heuristics of the ID3 tree, to represent the geometric face model. For feature extraction, our method proceeds as follows. Features are first detected and then carefully "selected." Feature "selection" means identifying the features with high variability and separating them from those with low variability, so that each feature's motion pattern can be estimated effectively. For each facial feature, motion analysis is performed adaptively; that is, each facial feature's motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1728 possible facial expressions) and test images from the JAFFE database. The proposed method overcomes problems that hampered previous methods. First of all, it is simple but effective: it reliably estimates the expressive facial features by differentiating features with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations; instead, it exploits the motion-energy values of a few selected expressive features (acquired from an intensity-based threshold). Lastly, our method gives reliable recognition rates, with an overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.

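As a rough, hedged illustration of the classification step described above, the sketch below trains an ID3-style decision tree; scikit-learn's entropy criterion corresponds to ID3's information-gain splitting. The feature names, value ranges, and data are hypothetical placeholders, not the paper's actual 1728-expression tree or JAFFE data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical quantized motion-energy descriptors for six facial
# features, with three levels each, and four expression classes.
X_train = rng.integers(0, 3, size=(200, 6))
y_train = rng.integers(0, 4, size=200)

# criterion="entropy" makes scikit-learn split on information gain,
# the same heuristic that ID3 uses.
clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X_train, y_train)

sample = rng.integers(0, 3, size=(1, 6))
print("predicted expression class:", clf.predict(sample)[0])
```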

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • 대한인간공학회지 / Vol.31 No.3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion from facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli, inducing anger, fear, boredom, and a neutral state, were presented to participants, and facial temperatures were measured by an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and the emotional state were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks were selected as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by the kind of emotion. Linear discriminant analysis for emotion recognition showed that the correct classification percentage across the four emotions was 62.7% when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly to 56.7% when using only facial expression features, and was 40.2% when using only emotional state features. Conclusion: Facial expression features are essential in emotion recognition, but emotional state features are also important for classifying emotions. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
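
A minimal sketch of the discriminant-analysis step reported above, assuming synthetic stand-in data: linear discriminant analysis over baseline-to-emotion temperature differences for the listed facial regions. The study's actual thermal measurements from the 231 participants are not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Columns: eyes, mouth, glabella, forehead, nose, cheeks.
# Rows: synthetic temperature differences (emotion minus baseline, deg C).
X = rng.normal(loc=-0.2, scale=0.3, size=(231, 6))
y = rng.integers(0, 4, size=231)   # anger, fear, boredom, neutral

lda = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())
```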

Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems / Vol.9 No.1 / pp.173-188 / 2013
  • In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to such areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. Then, LBP is applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. A one-vs.-rest SVM, a popular multi-class classification method, is employed with the Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
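
A condensed, hedged sketch of the pipeline the abstract describes: Haar-cascade region detection, a uniform-LBP histogram per region, concatenation into a feature vector, and a one-vs.-rest RBF SVM. Only the stock OpenCV eye cascade is shown (the paper also uses nose and mouth regions), and the training data are placeholders rather than Cohn-Kanade or JAFFE images.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Stock OpenCV cascade; the paper uses Haar-feature-based cascades
# for the eye, nose, and mouth regions (only eyes shown here).
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def lbp_histogram(gray_region, P=8, R=1):
    """Uniform-LBP histogram of one detected face region."""
    lbp = local_binary_pattern(gray_region, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def region_features(gray_face):
    """Concatenated LBP histograms of the detected regions."""
    feats = [lbp_histogram(gray_face[y:y + h, x:x + w])
             for (x, y, w, h) in eye_cascade.detectMultiScale(gray_face)]
    return np.concatenate(feats) if feats else None

# Placeholder training data: one feature vector per subject with an
# expression label, as would be built from the benchmark images.
rng = np.random.default_rng(5)
X_train = rng.random((50, 20))
y_train = rng.integers(0, 7, size=50)
clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X_train, y_train)
```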

인간-로봇 상호작용을 위한 자세가 변하는 사용자 얼굴검출 및 얼굴요소 위치추정 (Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction)

  • 박성기;박민용;이태근
    • 제어로봇시스템학회논문지 / Vol.11 No.1 / pp.50-57 / 2005
  • We present a simple and effective method of face and facial feature detection under pose variation of the user's face in a complex background for human-robot interaction. Our approach is a flexible method that can be applied to both color and gray facial images and is also feasible for detecting facial features in quasi real-time. Based on the intensity characteristics of the neighborhood of each facial feature, a new directional template for facial features is defined. By applying this template to the input facial image, a novel edge-like blob map (EBM) with multiple intensity strengths is constructed. Regardless of the color information of the input image, using this map and conditions derived from facial characteristics, we show that the locations of the face and its features, i.e., two eyes and a mouth, can be successfully estimated. Without information on the facial area boundary, the final candidate face region is determined from both the obtained locations of the facial features and weighted correlation values with standard facial templates. Experimental results on many color images and well-known gray-level face database images confirm the usefulness of the proposed algorithm.
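
The abstract does not reproduce the paper's directional template, so the following is only an illustrative stand-in, under the stated assumption that eyes and mouth appear as dark horizontal bands flanked by brighter skin; convolving with such a kernel yields a crude edge-like blob map from which candidate feature locations can be thresholded.

```python
import cv2
import numpy as np

# Synthetic placeholder standing in for a grayscale face image.
gray = np.random.randint(0, 256, size=(120, 120)).astype(np.float32)

# Illustrative directional kernel: bright rows above and below,
# a dark band in the middle (NOT the paper's actual template).
kernel = np.array([[ 1,  1,  1,  1,  1],
                   [-2, -2, -2, -2, -2],
                   [ 1,  1,  1,  1,  1]], dtype=np.float32)
kernel /= np.abs(kernel).sum()

blob_map = cv2.filter2D(gray, -1, kernel)

# High responses mark candidate eye/mouth blobs; the paper then applies
# facial-geometry conditions to select the final two eyes and one mouth.
candidates = blob_map > blob_map.mean() + 2 * blob_map.std()
```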

색 정보와 기하학적 위치관계를 이용한 얼굴 특징점 검출 (Detection of Facial Features Using Color and Facial Geometry)

  • 정상현;문인혁
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 하계종합학술대회 논문집(4) / pp.57-60 / 2002
  • Facial features are often used for human-computer interfaces (HCI). This paper proposes a method to detect facial features using color and facial geometry information. The face region is first extracted using color information, and then the pupils are detected by applying a separability filter and facial geometry constraints. The mouth is also extracted from the Cr (chroma red) component. Experimental results show that the proposed detection method is robust to a wide range of facial variation in position, scale, color, and gaze.

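A hedged sketch of the color steps mentioned above: a skin-tone mask in YCrCb space for the face region and mouth candidates from the Cr component, where lips show a stronger red-chrominance response than the surrounding skin. The threshold values are common rule-of-thumb assumptions, not the paper's, and the separability filter for pupil detection is omitted.

```python
import cv2
import numpy as np

# Synthetic placeholder standing in for a BGR face image.
bgr = np.random.randint(0, 256, size=(120, 120, 3)).astype(np.uint8)
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
_, cr, _ = cv2.split(ycrcb)

# Rough skin mask in YCrCb space (commonly cited ranges, assumed here).
skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# Lips tend to give the strongest Cr response inside the skin region.
mouth_candidates = (cr > 150) & (skin > 0)
```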

체질진단에 활용되는 안면 특징 변수들의 반복성에 대한 예비 연구 (A Preliminary Study on the Repeatability of Facial Feature Variables Used in the Sasang Constitutional Diagnosis)

  • 노민영;김종열;도준형
    • 사상체질의학회지 / Vol.29 No.1 / pp.29-39 / 2017
  • Objectives: Facial features can be utilized as indicators in Korean medical diagnosis. They are often measured with a diagnostic device for objective diagnosis. Accordingly, it is necessary to verify the reliability of the features obtained from the device for accurate diagnosis. In this study, we evaluate the repeatability of the facial feature variables produced by the Sasang Constitutional Analysis Tool (SCAT) for Sasang constitutional face diagnosis. Methods: Facial pictures of two subjects were each taken 24 times over two days according to a standard guideline. To evaluate repeatability, the coefficient of variation was calculated for the facial features extracted from frontal and profile images. Results: The coefficient of variation was less than 10% for most of the facial features, except those related to the upper lip, trichion, and chin. Conclusions: The coefficient of variation was small for most of the features, which enables objective and reliable analysis of the face. However, some features showed low reliability because the locations of their associated facial landmarks are ambiguous. To solve this problem, a clear basis for determining landmark locations is required.
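
The repeatability criterion above is straightforward to reproduce: the coefficient of variation (CV) of each feature over repeated measurements, flagged when it reaches 10%. The two feature names and measurement values below are hypothetical placeholders standing in for the 24 repeated measurements per subject.

```python
import numpy as np

rng = np.random.default_rng(2)
measurements = {                     # hypothetical repeated measurements (mm)
    "eye_width": rng.normal(30.0, 0.5, 24),
    "upper_lip": rng.normal(18.0, 2.5, 24),
}

for name, values in measurements.items():
    cv = values.std(ddof=1) / values.mean() * 100   # CV in percent
    verdict = "repeatable" if cv < 10 else "low repeatability"
    print(f"{name}: CV = {cv:.1f}% ({verdict})")
```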

얼굴 인상과 물리적 특징의 관계 구조 분석 (The analysis of relationships between facial impressions and physical features)

  • 김효선;한재현
    • 인지과학 / Vol.14 No.4 / pp.53-63 / 2003
  • We analyzed the relationships between facial impressions and the physical features of faces, and examined the influence of impressions on facial similarity judgments. For 79 faces selected from a face database, we collected impression ratings on four dimensions ('gentle-fierce', 'clever-dull', 'feminine-masculine', and 'youthful-mature') together with measurements of 41 physical features. Multiple regression analyses relating the two sets of values showed that the physical structure of a face is closely related to its impression. A facial similarity judgment experiment confirmed that impressions can play a role in face information processing: when physical feature conditions were comparable, people perceived faces sharing the same impression as more similar than faces with a neutral impression. These results suggest that impressions serve as a psychological structure for representing facial appearance, and that impression information may be involved in face processing.

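A small sketch of the multiple-regression analysis described above: regressing one impression dimension (e.g., gentle-fierce) on the measured physical features. The array shapes follow the abstract (79 faces, 41 features); the values themselves are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(79, 41))   # physical feature measurements (synthetic)
y = rng.normal(size=79)         # mean impression ratings for one dimension

model = LinearRegression().fit(X, y)
print("R^2 of the multiple regression:", model.score(X, y))
```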

실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법 (A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation)

  • 김웅기;전준철
    • 인터넷정보학회논문지 / Vol.14 No.6 / pp.117-124 / 2013
  • This paper proposes a new method for efficiently estimating the direction of a user's face from real-time video input. The face region is first detected using Haar-like features, which are comparatively insensitive to changes in external illumination, and the key features - both eyes, the nose, and the mouth - are then detected within the face region. The detected feature points are tracked in every frame in real time using optical flow, and the face direction is estimated from the tracked points. To handle a failure mode common in optical-flow tracking, where feature point coordinates drift and the wrong points end up being tracked, template matching against the detected feature points is used to judge, in real time, whether the tracked points are still valid; depending on the result, the facial feature points are either re-detected or tracking continues, so that face direction estimation can proceed. For the template matching, four pieces of information extracted in the feature detection stage - the positions of the left and right eyes, the nose tip, and the mouth - are stored; during face pose measurement, the similarity of the corresponding feature points tracked by optical flow is compared, and when the similarity falls outside a threshold, the feature points are located anew and the stored information is updated. By automatically combining the feature detection stage with a tracking stage that continuously corrects the detected features, the proposed method estimates face direction stably in real time. Experiments demonstrated that the proposed method can measure face pose effectively.
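
A hedged sketch of the detection-tracking loop described above: pyramidal Lucas-Kanade optical flow moves the four feature points from frame to frame, and normalized template matching against patches saved at detection time decides whether a point is still valid or must be re-detected. The frames, point coordinates, patch size, and the 0.5 threshold are all illustrative assumptions.

```python
import cv2
import numpy as np

# Placeholder frames standing in for consecutive video frames.
prev = np.random.randint(0, 256, size=(240, 320)).astype(np.uint8)
curr = np.random.randint(0, 256, size=(240, 320)).astype(np.uint8)

# Assumed positions of left eye, right eye, nose tip, and mouth.
points = np.array([[[100., 80.]], [[140., 80.]],
                   [[120., 110.]], [[120., 140.]]], dtype=np.float32)
# Patches saved at detection time (only the left-eye patch shown).
templates = [prev[70:90, 90:110]]

new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, points, None)

for pt, tmpl in zip(new_pts, templates):
    x, y = pt.ravel().astype(int)
    patch = curr[max(y - 10, 0):y + 10, max(x - 10, 0):x + 10]
    if patch.shape != tmpl.shape:
        continue                      # point left the frame: re-detect it
    score = cv2.matchTemplate(patch, tmpl, cv2.TM_CCOEFF_NORMED)[0, 0]
    if score < 0.5:                   # similarity threshold (assumed value)
        pass                          # re-run feature detection for this point
```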

아바타 생성을 위한 이목구비 모양 특징정보 추출 및 분류에 관한 연구 (A Study on Facial Feature' Morphological Information Extraction and Classification for Avatar Generation)

  • 박연출
    • 한국컴퓨터산업학회논문지 / Vol.4 No.10 / pp.631-642 / 2003
  • This paper proposes a method of extracting an individual's feature information from a photograph, so that an avatar resembling the user's own face can be generated for use on the web, and a method of classifying each facial part into a specific class according to prepared classification criteria based on the extracted feature information. Feature extraction was carried out separately for the eyes, nose, mouth, and jawline, and feature points and classification criteria were presented for each part. The extracted feature information was used to compute similarity against facial-part images drawn by a professional designer, and the most similar image was composited onto a jawline vector image to obtain the avatar face.

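A hedged sketch of the classification idea in the abstract: extract a shape-feature vector per facial part and assign the designer-drawn component whose prototype is most similar, here taken as minimum Euclidean distance (the abstract does not specify the similarity measure). The class names and vectors are invented for illustration.

```python
import numpy as np

# Hypothetical per-class shape prototypes for the eye part
# (e.g., width/height ratio, openness, tilt).
eye_prototypes = {
    "round":    np.array([1.00, 0.80, 0.30]),
    "narrow":   np.array([1.40, 0.45, 0.25]),
    "upturned": np.array([1.20, 0.60, 0.45]),
}

def classify_part(feature_vec, prototypes):
    """Return the class whose prototype is nearest to the extracted vector."""
    return min(prototypes,
               key=lambda k: np.linalg.norm(prototypes[k] - feature_vec))

extracted_eye = np.array([1.35, 0.50, 0.28])   # features from the photo
print("matched eye component:", classify_part(extracted_eye, eye_prototypes))
# The matched component image would then be composited onto the
# jawline vector image to assemble the avatar face.
```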

실시간 얼굴 표정 인식을 위한 새로운 사각 특징 형태 선택기법 (New Rectangle Feature Type Selection for Real-time Facial Expression Recognition)

  • 김도형;안광호;정명진;정성욱
    • 제어로봇시스템학회논문지 / Vol.12 No.2 / pp.130-137 / 2006
  • In this paper, we propose a method of selecting new types of rectangle features that are suitable for facial expression recognition. The basic concept is similar to Viola's approach to face detection, but instead of the previous Haar-like features we select rectangle features for facial expression recognition from among all possible rectangle types in a 3×3 matrix form using the AdaBoost algorithm. A facial expression recognition system built with the proposed rectangle features is also compared, in terms of performance, with one using the previous rectangle features. The simulation and experimental results show that the proposed approach performs better in facial expression recognition.
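
A compact sketch of the machinery behind this kind of training, under stated assumptions: rectangle-feature responses computed in constant time with an integral image, then AdaBoost (scikit-learn's default weak learner is a depth-1 stump) to weight the discriminative features. The paper's 3×3 enumeration of rectangle types is not reproduced; only one classic two-rectangle type is shown, and the images and labels are placeholders.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def integral_image(img):
    """Zero-padded integral image: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Pixel sum of img[y:y+h, x:x+w] in O(1) via the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(img, x, y, w, h):
    """Left half minus right half: one classic Haar-like rectangle type."""
    ii = integral_image(img)
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

rng = np.random.default_rng(4)
imgs = rng.integers(0, 256, size=(100, 24, 24)).astype(np.float64)
X = np.array([[two_rect_feature(im, 4, 4, 12, 8),
               two_rect_feature(im, 6, 12, 10, 6)] for im in imgs])
y = rng.integers(0, 2, size=100)     # placeholder expression labels

boost = AdaBoostClassifier(n_estimators=20).fit(X, y)
print("training accuracy:", boost.score(X, y))
```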