• Title/Summary/Keyword: Emotion Extraction


Hybrid-Feature Extraction for the Facial Emotion Recognition

  • Byun, Kwang-Sub; Park, Chang-Hyun; Sim, Kwee-Bo; Jeong, In-Cheol; Ham, Ho-Sang
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / ICCAS 2004 / pp.1281-1285 / 2004
  • Humans experience numerous emotions, and they express and recognize emotion through multiple channels — for example, the eyes, nose, and mouth. In particular, emotion recognition from facial expressions can be flexible and robust precisely because it draws on these various channels. The hybrid-feature extraction algorithm proposed here is modeled on this human process: it combines geometrical feature extraction with a color-distribution histogram, and the input emotion is then classified through independently, parallel-trained neural networks. In addition, for natural classification of emotion, an advancing two-dimensional emotion space is introduced and used in this paper; it enables flexible and smooth classification of emotion.

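The combination described above — geometric measurements plus a color-distribution histogram, concatenated into one vector for the neural-network stage — can be sketched roughly as follows; the landmark set, bin count, and pairwise-distance features are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def geometric_features(landmarks):
    """Pairwise distances between facial landmarks (e.g. eye corners,
    mouth corners) as simple geometric features."""
    pts = np.asarray(landmarks, dtype=float)
    n = len(pts)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i in range(n) for j in range(i + 1, n)])

def color_histogram(patch, bins=8):
    """Normalized per-channel color histogram of a face patch."""
    patch = np.asarray(patch)
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
             for c in range(patch.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def hybrid_feature(landmarks, patch):
    """Concatenate both feature families into one vector, mirroring the
    hybrid combination of geometry and color distribution."""
    return np.concatenate([geometric_features(landmarks), color_histogram(patch)])

# Toy example: 4 landmarks and a random 16x16 RGB patch.
rng = np.random.default_rng(0)
vec = hybrid_feature([(0, 0), (10, 0), (0, 12), (10, 12)],
                     rng.integers(0, 256, (16, 16, 3)))
print(vec.shape)  # 6 pairwise distances + 3*8 histogram bins = (30,)
```

The resulting vector would feed the parallel neural-network classifiers mentioned in the abstract.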

혼합형 특징점 추출을 이용한 얼굴 표정의 감성 인식 (Emotion Recognition of Facial Expression using the Hybrid Feature Extraction)

  • Byun, Kwang-Sub; Park, Chang-Hyun; Sim, Kwee-Bo
    • Korean Institute of Electrical Engineers (KIEE) Conference Proceedings / KIEE 2004 Symposium, Information and Control Section / pp.132-134 / 2004
  • Emotion recognition between humans is performed compositely, using various cues such as the face, voice, and gestures. Among these, the face reveals emotional expression most clearly. Humans express and recognize emotion using the complex and varied features of the face. This paper proposes a hybrid feature extraction method for emotion recognition from facial expressions. Hybrid feature extraction imitates the human emotion recognition system by combining geometry-based feature extraction with a color-distribution histogram; by extracting many features of a facial expression, it can perform emotion recognition robustly.


Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan; Joo, Young-Hoon; Park, Jin-Bae
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / ICCAS 2005 / pp.2373-2378 / 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: image processing, facial feature extraction, and emotion detection. In the image-processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and histogram analysis. In the facial-feature-extraction stage, the features for emotion detection are extracted from the facial components. In the emotion-detection stage, a fuzzy classifier is adopted to recognize emotion from the extracted features. Experimental results show that the proposed algorithm detects emotion well.

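The fuzzy color filter mentioned in the image-processing stage can be illustrated as a per-pixel fuzzy membership to a skin-color prototype; the prototype chromaticity, membership width, and alpha-cut below are illustrative guesses, not the paper's actual membership functions:

```python
import numpy as np

def skin_membership(img, proto=(0.45, 0.31), width=0.08):
    """Fuzzy skin-color membership in normalized r-g chromaticity space.
    Returns a value in [0, 1] per pixel; 1 = fully skin-like."""
    img = np.asarray(img, dtype=float)
    s = img.sum(axis=-1) + 1e-9                # avoid division by zero
    r, g = img[..., 0] / s, img[..., 1] / s
    d = np.hypot(r - proto[0], g - proto[1])   # distance to prototype
    return np.clip(1.0 - d / width, 0.0, 1.0)  # triangular membership

def face_region_mask(img, alpha=0.5):
    """Defuzzify: threshold the membership map at an alpha-cut."""
    return skin_membership(img) >= alpha

# A skin-toned pixel scores high; a pure blue pixel scores zero.
skin = np.array([[[200, 140, 105]]])
blue = np.array([[[0, 0, 255]]])
print(skin_membership(skin)[0, 0] > 0.5, skin_membership(blue)[0, 0])  # True 0.0
```

Histogram analysis and the virtual face model would then refine this coarse mask into face and component regions.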

Emotion recognition from speech using Gammatone auditory filterbank

  • Le, Ba-Vui; Lee, Young-Koo; Lee, Sung-Young
    • Korean Institute of Information Scientists and Engineers (KIISE) Conference Proceedings / Korea Computer Congress 2011, Vol. 38, No. 1(A) / pp.255-258 / 2011
  • An application of a Gammatone auditory filterbank to emotion recognition from speech is described in this paper. The Gammatone filterbank, a bank of Gammatone filters, is used as a preprocessing stage before feature extraction in order to obtain the features most relevant to emotion recognition from speech. In the feature extraction step, the energy of each filter's output signal is computed, and the energies of all filters are combined into a feature vector for the learning step. Feature vectors are estimated over short time windows of the input speech signal to exploit its time-domain dependence. Finally, in the learning step, a Hidden Markov Model (HMM) is trained for each emotion class and used to recognize an input emotional utterance. In experiments, feature extraction based on the Gammatone filterbank (GTF) outperforms features based on Mel-Frequency Cepstral Coefficients (MFCC), a well-known feature extraction method for both speech recognition and emotion recognition from speech.
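The per-filter energy features described above can be sketched in a few lines; the ERB spacing, filter order, and center frequencies below are common defaults, not necessarily the paper's settings:

```python
import numpy as np

def erb(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore) in Hz."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, sr, dur=0.04, order=4):
    """Impulse response of a 4th-order gammatone filter centered at fc."""
    t = np.arange(int(dur * sr)) / sr
    b = 1.019 * erb(fc)
    return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

def gtf_energies(signal, sr, centers):
    """Per-filter log-energies: one feature vector for the learning step."""
    feats = []
    for fc in centers:
        y = np.convolve(signal, gammatone_ir(fc, sr), mode="same")
        feats.append(np.log(np.sum(y ** 2) + 1e-12))
    return np.array(feats)

# Toy check: a 1 kHz tone puts most energy in the filter centered at 1 kHz.
sr = 16000
tone = np.sin(2 * np.pi * 1000 * np.arange(sr // 4) / sr)
centers = [250, 500, 1000, 2000, 4000]
e = gtf_energies(tone, sr, centers)
print(centers[int(np.argmax(e))])  # 1000
```

A sequence of such vectors, one per short window, is what an HMM per emotion class would be trained on.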

정서재활 바이오피드백을 위한 얼굴 영상 기반 정서인식 연구 (Study of Emotion Recognition based on Facial Image for Emotional Rehabilitation Biofeedback)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / Vol. 16, No. 10 / pp.957-962 / 2010
  • To recognize human emotion from a facial image, we first need to extract emotional features from the image using a feature extraction algorithm, and then classify the emotional state using a pattern classification method. The AAM (Active Appearance Model) is a well-known method for representing non-rigid objects such as faces and facial expressions, and the Bayesian network is a probability-based classifier that can represent the probabilistic relationships among a set of facial features. In this paper, our approach to facial feature extraction is a proposed method that combines AAM with FACS (Facial Action Coding System) to automatically model and extract facial emotional features. To recognize the facial emotion, we use DBNs (Dynamic Bayesian Networks) to model and understand the temporal phases of facial expressions in image sequences. The emotion recognition results can be used in biofeedback-based rehabilitation for the emotionally disabled.

소리 주파수대역 기반 멀티미디어 콘텐츠의 감성 추출 (Emotion Extraction of Multimedia Contents based on Specific Sound Frequency Bands)

  • Kwon, Young-Hoon; Jang, Jae-Geon
    • Journal of Digital Convergence / Vol. 11, No. 11 / pp.381-387 / 2013
  • As emotional content that responds to and elicits human emotion has recently drawn considerable attention in the culture industry, interest has focused on extracting the emotions evoked by multimedia content. Moreover, given how quickly and massively multimedia content is now produced and distributed, techniques that automatically extract the emotion a piece of content evokes are attracting research attention. This paper studies a method for extracting an emotion index from multimedia content using the volume levels of specific frequency bands in the content's audio. This work enables the emotion index of video content to be extracted automatically; the extracted information can then be used to provide users with content tailored to their current emotional state or to other factors such as the weather.
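The band-volume extraction described above can be sketched as computing the mean spectral magnitude of a few frequency bands; the band edges and the weighted "emotion index" below are illustrative assumptions, not the paper's actual mapping:

```python
import numpy as np

def band_volumes(audio, sr, bands):
    """Mean spectral magnitude ("volume") of each frequency band,
    computed from the magnitude spectrum of one audio frame."""
    spec = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    vols = []
    for lo, hi in bands:
        sel = (freqs >= lo) & (freqs < hi)
        vols.append(spec[sel].mean())
    return np.array(vols)

def emotion_index(vols, weights):
    """A weighted combination of band volumes as a scalar emotion index;
    the weights here are purely illustrative, not the paper's model."""
    return float(np.dot(vols, weights) / (vols.sum() + 1e-12))

sr = 44100
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 5000 * t)
bands = [(0, 1000), (1000, 4000), (4000, 11000)]   # low / mid / high
vols = band_volumes(audio, sr, bands)
print(int(np.argmax(vols)))  # 0: the 440 Hz component dominates
```

Computing such band volumes frame by frame over a video's soundtrack would yield the time-varying signal from which an emotion index can be derived.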

음성신호기반의 감정인식의 특징 벡터 비교 (A Comparison of Effective Feature Vectors for Speech Emotion Recognition)

  • Shin, Bo-Ra; Lee, Seok-Pil
    • The Transactions of the Korean Institute of Electrical Engineers / Vol. 67, No. 10 / pp.1364-1369 / 2018
  • Speech emotion recognition, which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim of this paper is to compare the various approaches used for feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
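Among the features such a survey compares, MFCCs are the most widely used; a minimal from-scratch computation for a single frame (the filter count, coefficient count, and frame length are common defaults, not the paper's settings) might look like:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def mfcc_frame(frame, sr, n_filters=26, n_coeffs=13):
    """MFCCs of one frame: power spectrum -> mel energies -> log -> DCT-II."""
    spec = np.abs(np.fft.rfft(frame)) ** 2
    mel_e = np.log(mel_filterbank(n_filters, len(frame), sr) @ spec + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), (n + 0.5)) / n_filters)
    return dct @ mel_e

sr = 16000
frame = np.sin(2 * np.pi * 300 * np.arange(512) / sr) * np.hamming(512)
print(mfcc_frame(frame, sr).shape)  # (13,)
```

Prosodic features (energy, pitch statistics) and spectral features like these are the kinds of candidates such comparisons evaluate.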

한국어 감정표현단어의 추출과 범주화 (Korean Emotion Vocabulary: Extraction and Categorization of Feeling Words)

  • Sohn, Sun-Ju; Park, Mi-Sook; Park, Ji-Eun; Sohn, Jin-Hun
    • Science of Emotion and Sensibility / Vol. 15, No. 1 / pp.105-120 / 2012
  • In Study 1 we constructed a list of Korean emotion-expression words, and in Study 2 we investigated which emotion categories each word belongs to. To build the list in Study 1, emotion words were extracted in several stages from the 'Word Frequency of Modern Korean' corpus compiled at Yonsei University. Twelve Korean-literature specialists and emotion researchers participated in selecting emotion-expression words frequently used in everyday life, yielding a final list of 504 words. In Study 2, 80 undergraduates indicated, with multiple selections allowed, which of 10 emotion categories (including 'neutral') — such as 'joy', 'fear', and 'anger' — each word was related to. The category analysis showed that 426 of the 504 words denoted a single emotion category, with 'sadness' the most frequent, followed by 'anger' and 'joy'. A further 72 words expressed two emotion categories, most commonly the pairs 'anger'/'disgust', 'sadness'/'fear', and 'joy'/'interest'. Among the 6 words spanning three categories, the combination of 'surprise', 'interest', and 'joy' appeared most frequently. The significance of this study is that it constructed a list of emotion-expression words actually used in everyday life and, on that basis, identified the emotion categories — including multiple categories — associated with each word. The emotion-expression words and per-word category information developed here are expected to be useful not only in psychology but also in language-based emotion recognition research in HCI.

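The multi-rater categorization procedure of Study 2 can be sketched as a tally that keeps every category chosen by at least some fraction of raters, so that a word may map to one, two, or three categories; the threshold below is an illustrative assumption, not the study's criterion:

```python
from collections import Counter

def categorize(ratings, n_raters, threshold=0.3):
    """Keep each emotion category selected by at least `threshold` of
    raters, so a word can map to one, two, or three categories."""
    counts = Counter(cat for selection in ratings for cat in selection)
    return sorted(cat for cat, c in counts.items() if c / n_raters >= threshold)

# Toy ratings for one word: 5 raters, multiple selections allowed.
ratings = [{"anger"}, {"anger", "disgust"}, {"anger"}, {"disgust"}, {"anger"}]
print(categorize(ratings, n_raters=5))  # ['anger', 'disgust']
```

Running such a tally over all 504 words would reproduce the single-, double-, and triple-category partition the study reports.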

A Survey on Image Emotion Recognition

  • Zhao, Guangzhe; Yang, Hanting; Tu, Bing; Zhang, Lei
    • Journal of Information Processing Systems / Vol. 17, No. 6 / pp.1138-1156 / 2021
  • Emotional semantics are the highest level of semantics that can be extracted from an image. Constructing a system that can automatically recognize emotional semantics from images would be significant for marketing, smart healthcare, and deep human-computer interaction. To understand the direction of image emotion recognition and its general research methods, we summarize current development trends and shed light on potential future research. The primary contributions of this paper are as follows. We investigate the color, texture, shape, and contour features used for emotional semantics extraction. We establish two models that map images into emotional space and describe in detail the various stages of the image emotional semantic recognition framework. We also discuss important datasets and useful applications in the field, such as garment images and image retrieval, and conclude with a brief discussion of future research trends.

안정적인 실시간 얼굴 특징점 추적과 감정인식 응용 (Robust Real-time Tracking of Facial Features with Application to Emotion Recognition)

  • Ahn, Byung-Tae; Kim, Eung-Hee; Sohn, Jin-Hun; Kweon, In-So
    • The Journal of Korea Robotics Society / Vol. 8, No. 4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in practical applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and inaccurate feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework for emotion recognition that combines ASM with LK optical flow, which is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
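The LK optical-flow step this framework relies on can be sketched for a single point; a real system would run pyramidal LK over ASM-fitted landmarks, so this is only the basic building block, and the toy images below are illustrative:

```python
import numpy as np

def lk_track_point(img0, img1, pt, win=7):
    """One Lucas-Kanade step: solve the 2x2 normal equations for the
    displacement that best explains the intensity change in a window."""
    iy, ix = np.gradient(img0.astype(float))        # spatial gradients
    it = img1.astype(float) - img0.astype(float)    # temporal gradient
    x, y, h = int(pt[0]), int(pt[1]), win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    gx, gy, gt = ix[sl], iy[sl], it[sl]
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
    dx, dy = np.linalg.solve(A, b)                  # estimated motion
    return pt[0] + dx, pt[1] + dy

# Toy check: shift a smooth 2-D pattern one pixel to the right and track
# a point near the center; the estimated motion should be close to (1, 0).
yy, xx = np.mgrid[0:40, 0:40]
img0 = 50 * np.sin(0.3 * xx) + 30 * np.cos(0.2 * yy)
img1 = 50 * np.sin(0.3 * (xx - 1)) + 30 * np.cos(0.2 * yy)
nx, ny = lk_track_point(img0, img1, (20, 20))
print(f"dx = {nx - 20:.2f}, dy = {ny - 20:.2f}")  # dx close to 1.0
```

Repeating this update per landmark per frame, with ASM re-fitting to correct drift, is the essence of the combined tracking scheme.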