• Title/Abstract/Keyword: facial expression analysis

Search results: 164 (processing time: 0.019 s)

Analysis and Synthesis of Facial Expression using Base Faces

  • 박문호;고희동;변혜란
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 27, No. 8 / pp.827-833 / 2000
  • This paper presents a method for analyzing facial expressions, an important means of expressing human emotion. The proposed method analyzes a facial expression in terms of base faces and their blending ratios. The base faces are set to the representative human facial expressions: surprise, fear, anger, disgust, happiness, sadness, and the neutral expression. To build the face model, a generic face model is fitted to the face image. A genetic algorithm and simulated annealing are used to find the blending ratios of the base faces, and the usefulness of the proposed method is demonstrated through experiments that synthesize facial expressions from the recovered expression information.

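The abstract gives no pseudocode for the blend-ratio search; a minimal pure-Python sketch of the simulated-annealing half (the paper also uses a genetic algorithm, omitted here) might look like the following. It assumes the base faces and the target expression are available as numeric feature vectors of equal length, and that the blend is constrained to non-negative weights summing to 1; the step size and cooling schedule are illustrative choices, not the paper's.

```python
import math
import random

def blend(bases, w):
    """Blend base-face feature vectors with weights w (weighted sum per feature)."""
    return [sum(wk * b[i] for wk, b in zip(w, bases)) for i in range(len(bases[0]))]

def error(bases, w, target):
    """Squared error between the blended face and the target expression."""
    return sum((m - t) ** 2 for m, t in zip(blend(bases, w), target))

def anneal_blend_ratios(bases, target, steps=2000, temp=1.0, cooling=0.995, seed=0):
    """Search for base-face blending ratios by simulated annealing.

    Weights are kept non-negative and renormalized to sum to 1 (a convex blend).
    """
    rng = random.Random(seed)
    n = len(bases)
    w = [1.0 / n] * n                       # start from a uniform blend
    cur_e = error(bases, w, target)
    best_w, best_e = w[:], cur_e
    for _ in range(steps):
        # Perturb one weight, clamp to >= 0, and renormalize onto the simplex.
        cand = w[:]
        i = rng.randrange(n)
        cand[i] = max(0.0, cand[i] + rng.gauss(0.0, 0.1))
        s = sum(cand)
        if s == 0:
            continue
        cand = [c / s for c in cand]
        e = error(bases, cand, target)
        # Accept improving moves always, worse moves with Boltzmann probability.
        if e < cur_e or rng.random() < math.exp((cur_e - e) / max(temp, 1e-9)):
            w, cur_e = cand, e
            if e < best_e:
                best_w, best_e = cand[:], e
        temp *= cooling
    return best_w, best_e
```

With orthogonal toy base faces, the recovered weights approximate the mixture that produced the target.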

Recognition and Generation of Facial Expression for Human-Robot Interaction

  • 정성욱;김도윤;정명진;김도형
    • 제어로봇시스템학회논문지 / Vol. 12, No. 3 / pp.255-263 / 2006
  • Over the last decade, face analysis (e.g., face detection, face recognition, and facial expression recognition) has been a lively and expanding research field. As computer-animated agents and robots bring a social dimension to human-computer interaction, interest in this field is increasing rapidly. In this paper, we introduce an artificial emotion mimic system that can recognize human facial expressions and also generate the recognized facial expression. To recognize human facial expressions in real time, we propose a facial expression classification method performed by weak classifiers obtained using new rectangular feature types. In addition, we generate artificial facial expressions using a robotic system developed on the basis of biological observation. Finally, experimental results of facial expression recognition and generation are presented to validate our robotic system.
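The paper's new rectangular feature types are not specified in the abstract, but such features conventionally build on Haar-like rectangle sums computed in O(1) from an integral image. A sketch of that standard machinery, assuming grayscale images as lists of rows:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle (x, y, w, h) in O(1) via the integral image."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_feature(ii, x, y, w, h):
    """A Haar-like two-rectangle feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

A weak classifier then thresholds one such feature value; boosting combines many of them.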

Human Emotion Recognition based on Variance of Facial Features

  • 이용환;김영섭
    • 반도체디스플레이기술학회지 / Vol. 16, No. 4 / pp.79-85 / 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize a human's emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expressions and emotion from still images. The method has three main steps: (1) detection of facial areas using a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion by comparing the characteristic curves with the Hausdorff distance. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying facial expression and emotion.

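The Hausdorff-distance matching in step (3) can be sketched as follows. The point sets would be samples along the Bezier eye/mouth curves; everything beyond the distance itself (sampling, templates) is outside the abstract and assumed here.

```python
import math

def directed_hausdorff(A, B):
    """max over a in A of the distance from a to its nearest point in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets A and B."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```

Classification would assign the emotion whose template curve has the smallest Hausdorff distance to the observed curve.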

Facial Expression Analysis Framework

  • 지은미
    • 한국컴퓨터산업학회논문지 / Vol. 8, No. 3 / pp.187-196 / 2007
  • People express emotion through facial expressions, whether consciously or unconsciously. Attempts to recognize these expressions were begun by a few psychologists and, over the past decade or so, have also become a field of interest for computer scientists. Facial expression recognition has high future value and can be applied to many fields based on human-computer interfaces. Despite much research, however, practical systems are hard to find because of difficulties such as illumination changes, resolution, and high-dimensional information processing. This paper describes a basic framework for facial expression analysis, explains the necessity of each stage, reviews research trends abroad, and analyzes domestic studies on facial expressions. We hope this will be of help to researchers who wish to contribute to facial expression analysis in Korea.


Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems / Vol. 19, No. 3 / pp.323-333 / 2023
  • Facial expression recognition can aid in the development of fatigue-driving detection, teaching-quality evaluation, and other fields. In this study, a facial expression recognition method with a residual masking reconstruction network as its backbone was proposed to achieve more efficient expression recognition and classification. The residual layer was used to acquire and capture the information features of the input image, and the masking layer assigned weight coefficients to the different information features, enabling accurate and effective analysis of images of different sizes. To further improve performance, the loss function of the model was optimized in two respects, the feature dimension and the data dimension, to strengthen the mapping between facial features and emotion labels. The simulation results show that the ROC of the proposed method remained above 0.9995, accurately distinguishing different expressions, and the precision was 75.98%, indicating excellent performance of the facial expression recognition model.
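The residual-plus-masking idea can be illustrated schematically: the mask re-weights the transformed features before the residual (skip) addition. This toy sketch substitutes a linear layer and ReLU for the paper's convolutional backbone; the shapes and activation are assumptions, not the published architecture.

```python
def relu(v):
    """Element-wise ReLU on a feature vector."""
    return [max(0.0, x) for x in v]

def residual_masking_block(x, weights, mask):
    """One residual step with an element-wise mask on the transformed features:
    out = x + mask * f(x), where f is a toy linear layer followed by ReLU.
    """
    # f(x): linear transform by a square weight matrix, then ReLU.
    fx = relu([sum(w * xi for w, xi in zip(row, x)) for row in weights])
    # The mask weights each transformed feature before the residual add.
    return [xi + m * fi for xi, m, fi in zip(x, mask, fx)]
```

A mask entry of 0 passes the input feature through untouched; larger entries let more of the transformed feature in.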

A Design of Stress Measurement System using Facial and Verbal Sentiment Analysis

  • 유수화;전지원;이애진;김윤희
    • KNOM Review / Vol. 24, No. 2 / pp.35-47 / 2021
  • Modern society, which demands constant competition and development, involves many kinds of stress, and that stress is in many cases expressed through a person's facial expressions and language. Stress can therefore be measured by analyzing facial expressions and language, and a system is needed to manage it efficiently. This study proposes a system that measures stress through facial and verbal sentiment analysis. We propose a stress measurement method that analyzes a person's facial and verbal sentiment, derives a stress index based on the dominant sentiment values, and derives an integrated stress index based on the agreement between facial and verbal expression. Quantifying and generalizing stress through this measurement technique allows many researchers to evaluate stress indices by an objective standard.
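The abstract does not give the integration formula, so the following is only one plausible shape for an agreement-weighted index: per-modality stress scores in [0, 1] are averaged, and the result is discounted when the two modalities disagree. The averaging and the agreement term are assumptions for illustration, not the paper's method.

```python
def stress_index(facial, verbal, agreement_weight=0.5):
    """Combine facial and verbal stress scores (each in [0, 1]) into an
    integrated index that is discounted when the modalities disagree.

    agreement = 1 - |facial - verbal|; the result interpolates between the
    plain average and the agreement-scaled average (hypothetical scheme).
    """
    base = (facial + verbal) / 2.0
    agreement = 1.0 - abs(facial - verbal)
    return (1 - agreement_weight) * base + agreement_weight * base * agreement
```

When the two modalities agree exactly, the index equals their common score; disagreement pulls it below the plain average.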

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • 대한인간공학회지 / Vol. 31, No. 3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion from facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli inducing anger, fear, boredom, and a neutral state were presented to participants, and facial temperatures were measured by an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks were selected as emotional state features. Results: The temperatures of the eyes, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by kind of emotion. Linear discriminant analysis for emotion recognition showed a correct classification rate of 62.7% across the four emotions when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly to 56.7% when using only facial expression features, and was 40.2% when using only emotional state features. Conclusion: Facial expression features are essential in emotion recognition, but emotional state features are also important for classifying emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
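The feature construction above (per-region temperature change between baseline and emotion periods) is straightforward to sketch. The classifier below is a nearest-centroid stand-in for the paper's linear discriminant analysis, so the assignment rule, like the toy numbers, is an assumption:

```python
import math

def roi_deltas(baseline, emotion, rois):
    """Temperature change (emotion period minus baseline) per region of interest."""
    return [emotion[r] - baseline[r] for r in rois]

def nearest_centroid(delta, centroids):
    """Assign an emotion label by the closest class-mean delta vector
    (a simple stand-in for the paper's linear discriminant analysis)."""
    return min(centroids, key=lambda label: math.dist(delta, centroids[label]))
```

With all six regions (eyes, mouth, glabella, forehead, nose, cheeks) the same two functions apply unchanged.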

Robust Facial Expression Recognition using PCA Representation

  • 신영숙
    • 인지과학 / Vol. 16, No. 4 / pp.323-331 / 2005
  • This paper proposes an improved system that is robust to illumination changes and can recognize facial expressions across diverse internal states without a reference cue such as a neutral expression. As preprocessing for extracting expression information, a whitening step is applied. The whitening step reduces sensitivity to illumination changes by normalizing the image data to zero mean and unit variance. After whitening, a PCA representation that excludes the first principal component is used as the expression information, which enables feature extraction of facial expressions without a neutral-expression cue. The experimental results show that performing facial expression recognition based on a dimensional model of internal states, on expression images randomly selected from facial expressions corresponding to 83 internal states, enables recognition of diverse and natural facial expressions.

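The whitening preprocessing step can be sketched in a few lines: each pixel position is normalized across the image set to zero mean and unit variance, which removes much of the illumination-dependent offset and scale. The subsequent PCA (and the exclusion of the first principal component) is omitted here; images are assumed flattened to equal-length vectors.

```python
import math

def whiten(images):
    """Per-pixel whitening across a set of flattened images: subtract the mean
    and divide by the standard deviation so every pixel position has zero mean
    and unit variance, reducing sensitivity to illumination changes."""
    n = len(images)
    dim = len(images[0])
    mean = [sum(img[i] for img in images) / n for i in range(dim)]
    # `or 1.0` guards constant pixels, whose std would otherwise be zero.
    std = [math.sqrt(sum((img[i] - mean[i]) ** 2 for img in images) / n) or 1.0
           for i in range(dim)]
    return [[(img[i] - mean[i]) / std[i] for i in range(dim)] for img in images]
```

After whitening, each pixel column of the data matrix has mean 0 and variance 1, which is the precondition the abstract states for the later PCA step.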

Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Daijin
    • Journal of Electrical Engineering and Technology / Vol. 12, No. 4 / pp.1657-1662 / 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces, contributing to practical applications such as security systems, behavioral science, and clinical practice. In this work, we present a new method to analyze, represent, and recognize human facial expressions from a sequence of facial images. Under the proposed framework, the overall procedure is: accurate face detection to remove background and noise effects from the raw image sequences and alignment of each image using vertex mask generation; extraction of 1D transform features, which are then reduced by principal component analysis; and finally, training and testing of these features using a Hidden Markov Model (HMM). The approach was evaluated on two public facial expression video datasets, Cohn-Kanade and AT&T, achieving expression recognition rates of 96.75% and 96.92%, respectively. These results show the superiority of the proposed approach over state-of-the-art methods.
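The HMM classification step common to such pipelines can be sketched with the standard forward algorithm: each expression class gets its own HMM, and a test sequence is assigned to the class whose model gives the highest likelihood. The toy discrete-observation model below is generic machinery, not the paper's trained models.

```python
def forward_likelihood(obs, pi, A, B):
    """Forward algorithm: likelihood of an observation sequence under an HMM.

    pi[i]   : initial probability of hidden state i
    A[i][j] : transition probability from state i to state j
    B[i][o] : probability of emitting observation symbol o in state i
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)
```

For recognition, `max(models, key=lambda c: forward_likelihood(seq, *models[c]))` over per-class models yields the predicted expression.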

The Implementation and Analysis of Facial Expression Customization for a Social Robot

  • 이지연;박하은;;김병헌;이희승
    • 로봇학회논문지 / Vol. 18, No. 2 / pp.203-215 / 2023
Social robots, which are mainly used by individuals, place more emphasis on human-robot relationships (HRR) than other types of robots do. Emotional expression in robots is one of the key factors that give HRR its value, and emotions are mainly expressed through the face. However, because of cultural and preference differences, the desired robot facial expressions differ subtly from user to user. We expected that a robot facial expression customization tool might mitigate such difficulties and consequently improve HRR. To verify this, we created a robot facial expression customization tool and a prototype robot, and implemented an emotion engine suitable for generating robot facial expressions in a dynamic human-robot interaction setting. In our experiments, users agreed that a customizable version of the robot has a more positive effect on HRR than a predefined version. We also offer recommendations for future improvement of the robot facial expression customization process.