• Title/Summary/Keyword: Emotion Extraction


Development of Emotion Recognition System Using Facial Image (얼굴 영상을 이용한 감정 인식 시스템 개발)

  • Kim, M.H.; Joo, Y.H.; Park, J.B.; Lee, J.; Cho, Y.J.
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.2 / pp.191-196 / 2005
  • Although emotion recognition technology is in demand in various fields, it remains an unsolved problem. In particular, there is growing demand for emotion recognition based on facial images. A facial-image-based emotion recognition system is a complex system comprising various technologies; techniques such as facial image analysis, feature vector extraction, and pattern recognition are therefore needed to develop it. In this paper, we propose a new emotion recognition system based on a previously studied facial image analysis technique. The proposed system recognizes emotion using a fuzzy classifier. A facial image database is built up, and the performance of the proposed system is verified using that database.
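The fuzzy classification step this abstract describes might be sketched as follows. The two facial features, the membership breakpoints, and the rule set are illustrative assumptions, not values from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_emotion(mouth_curve, eye_open):
    """Map two hypothetical normalized facial features (0..1) to an emotion label."""
    mouth = {"down": tri(mouth_curve, -0.1, 0.0, 0.5),
             "flat": tri(mouth_curve, 0.2, 0.5, 0.8),
             "up":   tri(mouth_curve, 0.5, 1.0, 1.1)}
    eye = {"narrow": tri(eye_open, -0.1, 0.0, 0.5),
           "wide":   tri(eye_open, 0.5, 1.0, 1.1)}
    # Fuzzy rules: min acts as AND; the strongest rule wins.
    rules = {"happy":    mouth["up"],
             "sad":      min(mouth["down"], eye["narrow"]),
             "surprise": min(mouth["flat"], eye["wide"])}
    return max(rules, key=rules.get)
```

A real system would fuzzify many more geometric features; the structure (fuzzify, apply rules, take the maximally activated class) is the part that carries over.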

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun; Park, Seung-Min; Park, Jun-Heong; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.512-517 / 2011
  • In this paper, we propose an Active Shape Model (ASM) and Lucas-Kanade (LK) optical-flow-based feature extraction and analysis method for analyzing emotional features in facial images. Since facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models from combinations of landmarks and extract LK optical flow vectors at each landmark, based on the center pixels of the motion vector window. The facial emotion features are modelled by combining the optical flow vectors, and the emotional state of a facial image can be estimated by a probabilistic estimation technique such as a Bayesian classifier. We also extract the optimal emotional features, those with high correlation between feature points and emotional states, by using common spatial pattern (CSP) analysis in order to improve the operational efficiency and accuracy of the emotional feature extraction process.
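The per-landmark LK step can be sketched in plain NumPy: solve the Lucas-Kanade least-squares system over a small window centered on the landmark. The window size and the synthetic frames below are assumptions; a production system would typically use a pyramidal implementation such as OpenCV's `calcOpticalFlowPyrLK`.

```python
import numpy as np

def lk_flow_at(frame1, frame2, x, y, half=3):
    """Estimate the optical-flow vector (u, v) at landmark (x, y) by solving
    the Lucas-Kanade least-squares system over a (2*half+1)^2 window."""
    # Take a window one pixel larger on each side for central differences.
    win1 = frame1[y - half - 1:y + half + 2, x - half - 1:x + half + 2].astype(float)
    win2 = frame2[y - half - 1:y + half + 2, x - half - 1:x + half + 2].astype(float)
    Ix = (win1[1:-1, 2:] - win1[1:-1, :-2]) / 2.0   # spatial gradient in x
    Iy = (win1[2:, 1:-1] - win1[:-2, 1:-1]) / 2.0   # spatial gradient in y
    It = win2[1:-1, 1:-1] - win1[1:-1, 1:-1]        # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares flow
    return u, v
```

On a synthetic pattern translated one pixel to the right, the estimate recovers a flow of roughly (1, 0); repeating this at every ASM landmark yields the feature vectors the abstract describes.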

Multi-Time Window Feature Extraction Technique for Anger Detection in Gait Data

  • Beom Kwon; Taegeun Oh
    • Journal of the Korea Society of Computer and Information / v.28 no.4 / pp.41-51 / 2023
  • In this paper, we propose a multi-time window feature extraction technique for anger detection in gait data. In previous gait-based emotion recognition methods, the pedestrian's stride, time taken for one stride, walking speed, and forward tilt angles of the neck and thorax are calculated; the minimum, mean, and maximum values over the entire interval are then used as features. However, each feature does not always change uniformly over the entire interval; it sometimes changes locally. Therefore, we propose a multi-time window feature extraction technique that can extract both global and local features, from long-term to short-term. In addition, we propose an ensemble model consisting of multiple classifiers, each trained with features extracted from a different multi-time window. To verify the effectiveness of the proposed feature extraction technique and ensemble model, a public three-dimensional gait dataset was used. The simulation results demonstrate that the proposed ensemble model achieves the best performance on four evaluation metrics compared to machine learning models trained with existing feature extraction techniques.
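The windowing scheme described above can be sketched as follows; the non-overlapping slide and the example window sizes are assumptions about the exact scheme, but the idea of collecting (min, mean, max) statistics at several temporal scales is as stated.

```python
import numpy as np

def multi_window_features(series, window_sizes):
    """Slide a non-overlapping window of each given size over the series and
    collect (min, mean, max) per window, concatenating all scales."""
    feats = []
    x = np.asarray(series, dtype=float)
    for w in window_sizes:
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            feats.extend([float(seg.min()), float(seg.mean()), float(seg.max())])
    return feats
```

The full-length window reproduces the global statistics of the earlier methods; shorter windows add the local variation the paper argues those statistics miss. A separate classifier per window size would then feed the ensemble.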

Extraction of Representative Emotions for Evaluations of Tactile Impressions in a Car Interior (자동차 인테리어의 촉감 평가를 위한 대표감성 추출)

  • Park, Nam-Choon; Jeong, Seong-Won
    • Science of Emotion and Sensibility / v.16 no.2 / pp.157-166 / 2013
  • Few studies evaluate tactile emotions as they pertain to car interior parts, while studies on visual evaluations of car interiors, as well as usability tests in the visual sense, are numerous. The purpose of this study is to determine typical in-vehicle tactile emotions so that they can be used to evaluate tactile impressions of car interior parts. 52 words related to tactile impressions of car interiors were gathered from a survey conducted in conjunction with an in-vehicle test, interviews with car salespersons, and an analysis of car reviews. After a factor analysis of the 52 words, 10 categories of major tactile emotions were identified: roughness, toughness, friction, comfortability, stiffness, softness, temperature, sleekness, familiarity, and flexibility. These representative tactile emotions can be used to evaluate the tactile impressions of surfaces such as leather, plastic, metal, and wood when they are used as parts in car interiors.


Emotion Recognition Based on Facial Expression by using Context-Sensitive Bayesian Classifier (상황에 민감한 베이지안 분류기를 이용한 얼굴 표정 기반의 감정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions: Part B / v.13B no.7 s.110 / pp.653-662 / 2006
  • In ubiquitous computing, which aims to build environments that provide services appropriate to the user's context, emotion recognition based on facial expressions is an essential means of HCI, making man-machine interaction more efficient and enabling user context-awareness. This paper addresses the problem of recognizing basic emotions in context-sensitive facial expressions through a new Bayesian classifier. The emotion recognition task consists of two steps: the facial feature extraction step is based on a color-histogram method, and the classification step employs a new Bayesian learning algorithm for efficient training and testing. A new context-sensitive Bayesian learning algorithm, EADF (Extended Assumed-Density Filtering), is proposed to recognize emotions more exactly, as it utilizes different classifier complexities for different contexts. Experimental results show an expression classification accuracy of over 91% on the test database and an error rate of 10.6% when facial expression is modeled with hidden context.
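The two-step pipeline (color-histogram features, then Bayesian classification) might look like the sketch below. A plain per-class Gaussian likelihood with equal priors stands in for the paper's context-sensitive EADF classifier, and the bin count and emotion labels are assumptions.

```python
import numpy as np

def color_histogram(img, bins=4):
    """Concatenate per-channel intensity histograms into one normalized feature vector."""
    hs = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(hs).astype(float)
    return h / h.sum()

def fit_gaussians(X, y):
    """Per-class mean and variance of the feature vectors (simple Bayesian model)."""
    model = {}
    for c in set(y):
        Xc = np.array([x for x, label in zip(X, y) if label == c])
        model[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6)  # variance floor
    return model

def predict_emotion(model, x):
    """Pick the class with the highest Gaussian log-likelihood (equal priors)."""
    def loglik(mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(model, key=lambda c: loglik(*model[c]))
```

The context-sensitive part of EADF, varying classifier complexity per context, is omitted; this shows only the histogram-feature-to-Bayesian-decision path.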

RoutingConvNet: A Light-weight Speech Emotion Recognition Model Based on Bidirectional MFCC (RoutingConvNet: 양방향 MFCC 기반 경량 음성감정인식 모델)

  • Hyun Taek Lim; Soo Hyung Kim; Guee Sang Lee; Hyung Jeong Yang
    • Smart Media Journal / v.12 no.5 / pp.28-35 / 2023
  • In this study, we propose RoutingConvNet, a new light-weight model with fewer parameters, to improve the applicability and practicality of speech emotion recognition. To reduce the number of learnable parameters, the proposed model connects bidirectional MFCCs on a channel-by-channel basis to learn long-term emotion dependence and extract contextual features. A light-weight deep CNN is constructed for low-level feature extraction, and self-attention is used to obtain channel and spatial information from the speech signals. In addition, we apply dynamic routing to improve accuracy and build a model that is robust to feature variations. The proposed model reduces parameters and improves accuracy across experiments on the speech emotion datasets EMO-DB, RAVDESS, and IEMOCAP, achieving 87.86%, 83.44%, and 66.06% accuracy, respectively, with about 156,000 parameters. We also propose a metric that quantifies the trade-off between parameter count and accuracy for evaluating light-weight models.
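The channel-wise bidirectional input can be sketched as stacking the MFCC matrix with its time-reversed copy along a leading channel axis before feeding the CNN. Computing the MFCCs themselves (e.g., with librosa) is omitted, and the exact stacking the paper uses may differ.

```python
import numpy as np

def bidirectional_mfcc(mfcc):
    """Stack an MFCC matrix (n_coeffs x n_frames) with its time-reversed copy
    along a new channel axis, giving shape (2, n_coeffs, n_frames)."""
    mfcc = np.asarray(mfcc)
    return np.stack([mfcc, mfcc[:, ::-1]], axis=0)
```

The two channels let convolutional filters see the same frame sequence in both temporal directions without adding parameters.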

Multi-Emotion Regression Model for Recognizing Inherent Emotions in Speech Data (음성 데이터의 내재된 감정인식을 위한 다중 감정 회귀 모델)

  • Moung Ho Yi; Myung Jin Lim; Ju Hyun Shin
    • Smart Media Journal / v.12 no.9 / pp.81-88 / 2023
  • Recently, online communication has increased with the spread of non-face-to-face services during COVID-19. In non-face-to-face situations, the other person's opinions and emotions are recognized through modalities such as text, speech, and images, and research on multimodal emotion recognition that combines these modalities is actively underway. Among them, emotion recognition using speech data is attracting attention as a means of understanding emotions through sound and language information, but emotions are usually recognized from a single speech feature value. Because a variety of emotions coexist in a conversation, however, a method for recognizing multiple emotions is needed. Therefore, in this paper we propose a multi-emotion regression model that preprocesses speech data, extracts feature vectors, and recognizes the complex, inherent emotions while taking the passage of time into account.
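A minimal multi-output regression sketch of the idea: one intensity target per emotion, fitted jointly by least squares. The features, targets, and fitting method here are illustrative assumptions; the paper's actual model and preprocessing are not specified in the abstract.

```python
import numpy as np

def fit_multi_emotion_regression(X, Y):
    """Least-squares weights mapping feature vectors X (n x d) to per-emotion
    intensity targets Y (n x k), one column per emotion."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def predict_intensities(W, x):
    """Predict all k emotion intensities for one feature vector at once."""
    return np.append(x, 1.0) @ W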

AM-FM Decomposition and Estimation of Instantaneous Frequency and Instantaneous Amplitude of Speech Signals for Natural Human-robot Interaction (자연스런 인간-로봇 상호작용을 위한 음성 신호의 AM-FM 성분 분해 및 순간 주파수와 순간 진폭의 추정에 관한 연구)

  • Lee, He-Young
    • Speech Sciences / v.12 no.4 / pp.53-70 / 2005
  • Vowels in speech signals are multicomponent signals composed of AM-FM components whose instantaneous frequency and instantaneous amplitude are time-varying. Changes in emotional state cause variation in the instantaneous frequencies and instantaneous amplitudes of the AM-FM components. It is therefore important to estimate these quantities exactly in order to extract the key information representing emotional states and their changes in speech signals. In this paper, a method for decomposing speech signals into AM-FM components is first addressed. Secondly, the fundamental frequency of a vowel sound is estimated by a simple method based on the spectrogram; this estimate is used in decomposing the speech signal into AM-FM components. Thirdly, an estimation method is suggested for separating the instantaneous frequencies and instantaneous amplitudes of the decomposed AM-FM components, based on the Hilbert transform and the demodulation property of the extended Fourier transform. The estimates can be used to modify the spectral distribution and to smoothly connect two words in corpus-based speech synthesis systems.
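The Hilbert-transform step can be sketched with an FFT-built analytic signal (equivalent to `scipy.signal.hilbert`); the AM-FM decomposition that isolates a single component is assumed to have been done already, so the input here is a mono-component signal.

```python
import numpy as np

def instantaneous_freq_amp(x, fs):
    """Estimate instantaneous amplitude and frequency of a mono-component
    signal via the analytic signal, built with an FFT-based Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)          # spectral mask: keep DC/Nyquist, double positives
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    amp = np.abs(analytic)                       # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) * fs / (2 * np.pi)     # instantaneous frequency (Hz)
    return amp, freq
```

For a pure 50 Hz tone the estimates are flat at the true amplitude and frequency; for an emotional vowel they trace the AM and FM contours the abstract treats as emotion cues.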


Top-down Behavior Planning for Real-life Simulation

  • Wei, Song; Cho, Kyung-Eun; Um, Ky-Hyun
    • Journal of Korea Multimedia Society / v.10 no.12 / pp.1714-1725 / 2007
  • This paper describes a top-down behavior planning framework for a simulation game, from personality down to real-life action selection. The combined behavior creation system is formed by five levels of specification: personality definition, motivation extraction, emotion generation, decision making, and action execution. As data flows through the designed framework, an NPC selects actions autonomously to adapt to dynamic environment information resulting from active agents and human players. Furthermore, we illustrate the application of a Gaussian probability distribution to give a character human-like behavioral variability. To elucidate the mechanism of the framework, we situate it in a restaurant simulation game.
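One common way to realize Gaussian behavioral variability at the action-execution level is to perturb each action's utility with Gaussian noise before taking the maximum; the sketch below assumes that reading, and the action names and utilities are invented for illustration.

```python
import random

def select_action(utilities, sigma=0.5, rng=None):
    """Pick the action whose utility, perturbed by Gaussian noise, is highest.
    sigma == 0 gives a deterministic argmax; larger sigma gives more
    human-like variability in the NPC's choices."""
    rng = rng or random.Random()
    noisy = {a: u + rng.gauss(0.0, sigma) for a, u in utilities.items()}
    return max(noisy, key=noisy.get)
```

With zero noise the NPC always takes the best-scoring action; with moderate noise it occasionally picks lower-utility actions, which is the changeability the framework aims for.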


A Study of Feature Extraction from Specifically Intoned Product Design (제품의 특성추출을 통한 디자인 적용 방법에 관한 연구)

  • Jo, Gwang-Su
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2007.05a / pp.139-142 / 2007
  • The purpose of this study is to identify the characteristics of products designed for specific purposes and to apply those characteristics to product concepts or design forms during design development. To this end, target products were selected, a basic survey and an image analysis of the target products were conducted, and the products' design and functional elements were extracted and coded. The relationship between the elements obtained from the image analysis and the product elements was then verified, and a survey was conducted to extract product characteristics. By extracting the characteristics of a specific product through this experimental process, consumer needs can be analyzed during design development, and the results can serve as basic data for understanding the product; they also help designers understand the product and set design concepts. For the MP3 player studied here, image analysis yielded the factors of musicality, expandability, portability, usability, physical burden, interface, and personality, and the characteristics associated with each were identified, suggesting the important characteristics for MP3 player design. This basic research enables more effective identification of consumer needs and contributes to the development of foundational design research.
