• Title/Summary/Keyword: emotion engineering

Search results: 791

Study of Emotion in Speech (감정변화에 따른 음성정보 분석에 관한 연구)

  • 장인창;박미경;김태수;박면웅
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.1123-1126 / 2004
  • Recognizing emotion in speech requires a large spoken-language corpus covering not only the different emotional states but also individual languages. In this paper, we focus on how speech signals change across emotions. We compared speech features such as formant and pitch across four emotions (normal, happiness, sadness, anger). In Korean, the pitch of monophthongs changed with each emotion. We therefore suggest suitable analysis techniques based on these features for recognizing emotions in Korean.
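
As a rough illustration of the kind of pitch comparison described above, the sketch below estimates fundamental frequency by autocorrelation on synthetic tones; the sample rate, tone frequencies, and search band are illustrative assumptions, not the paper's data or method.

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=75.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) by autocorrelation peak picking."""
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # non-negative lags
    lo, hi = int(sr / fmax), int(sr / fmin)                    # lag search range
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(4000) / sr                 # 0.25 s of signal
neutral = np.sin(2 * np.pi * 120 * t)    # lower-pitched "neutral" vowel
excited = np.sin(2 * np.pi * 220 * t)    # raised pitch, as in anger/happiness

print(estimate_pitch(neutral, sr), estimate_pitch(excited, sr))  # ≈120 Hz, ≈220 Hz
```

A real system would compute this per frame over recorded monophthongs and compare the resulting pitch contours across emotions.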


Half-Against-Half Multi-class SVM Classify Physiological Response-based Emotion Recognition

  • Vanny, Makara;Ko, Kwang-Eun;Park, Seung-Min;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.3 / pp.262-267 / 2013
  • Recognizing the human emotional state is one of the most important components of efficient human-human and human-computer interaction. In this paper, classifying four emotions (fear, disgust, joy, and neutral) is the main recognition problem; visual stimuli were used to elicit emotions while the physiological signals of skin conductance (SC), skin temperature (SKT), and blood volume pulse (BVP) were recorded. To solve this problem, a half-against-half (HAH) multi-class support vector machine (SVM) with a Gaussian radial basis function (RBF) kernel is proposed as an effective technique for improving the accuracy of emotion classification. The experimental results showed that the proposed method is efficient for emotion recognition, with accuracy rates of 90% for neutral, 86.67% for joy, 85% for disgust, and 80% for fear.
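
The half-against-half decision structure can be sketched independently of the SVM details: the classes are split into two halves, a binary classifier picks a half, and the process recurses until one class remains. The stub below substitutes a nearest-centroid rule on a toy 1-D feature (not the paper's SC/SKT/BVP signals) for the RBF-kernel SVM at each node.

```python
# Toy 1-D "physiological feature" centroids for the four emotion classes
# (illustrative values, not the paper's data).
CENTROIDS = {"fear": 0.0, "disgust": 1.0, "joy": 2.0, "neutral": 3.0}

def binary_decide(x, group_a, group_b):
    """Stub binary classifier: pick the group whose nearest centroid is closer."""
    da = min(abs(x - CENTROIDS[c]) for c in group_a)
    db = min(abs(x - CENTROIDS[c]) for c in group_b)
    return group_a if da <= db else group_b

def hah_classify(x, classes):
    """Half-against-half: split the classes in two, recurse on the winning half."""
    if len(classes) == 1:
        return classes[0]
    half = len(classes) // 2
    winner = binary_decide(x, classes[:half], classes[half:])
    return hah_classify(x, winner)

print(hah_classify(2.1, ["fear", "disgust", "joy", "neutral"]))  # → joy
```

In the actual method each `binary_decide` node is an RBF-kernel SVM trained on one half of the classes against the other, so a 4-class problem needs only 3 binary classifiers arranged in a tree.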

Behavior Decision Model Based on Emotion and Dynamic Personality

  • Yu, Chan-Woo;Choi, Jin-Young
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.101-106 / 2005
  • In this paper, we propose a behavior decision model for a robot based on artificial emotion, various motivations, and dynamic personality. Our goal is a robot that can express its emotions in a human-like way. To achieve this, we applied several theories of emotion and personality from psychology; in particular, we introduced the concept of a dynamic personality model for a robot. Drawing on this concept, we built a behavior decision model in which the robot's emotional expression adapts to various environments through human-robot interaction.
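
One minimal way to sketch the interplay of emotion, motivation, and a dynamic personality is a decider whose personality weights drift with interaction history and scale the motivations; the trait names, update rule, and numbers below are illustrative assumptions, not the paper's model.

```python
class BehaviorDecider:
    def __init__(self):
        # Dynamic personality traits, adapted over repeated interactions.
        self.personality = {"sociability": 0.5, "timidity": 0.5}

    def adapt(self, interaction_was_positive):
        """Drift the sociability trait toward the experienced interactions."""
        delta = 0.1 if interaction_was_positive else -0.1
        s = self.personality["sociability"] + delta
        self.personality["sociability"] = min(1.0, max(0.0, s))

    def decide(self, emotion):
        """Pick the behavior whose motivation, scaled by personality, is largest."""
        motivations = {
            "approach": emotion.get("joy", 0.0) * self.personality["sociability"],
            "withdraw": emotion.get("fear", 0.0) * self.personality["timidity"],
        }
        return max(motivations, key=motivations.get)

robot = BehaviorDecider()
for _ in range(3):
    robot.adapt(interaction_was_positive=True)  # friendly environment
print(robot.decide({"joy": 0.6, "fear": 0.5}))  # → approach
```

The same emotion state can thus yield different behaviors after different interaction histories, which is the adaptability the abstract describes.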

  • PDF

Sensory Engineering Model in Search of Emotion-Specific Physiology -An Introduction and Proposal (정서특정적 생리의 탐색을 모색하는 감성공학의 패러다임과 실천방법)

  • 우제린
    • Science of Emotion and Sensibility / v.4 no.2 / pp.1-13 / 2001
  • Emotion-Specific Physiology may still remain an elusive entity even to many of its proponents and seekers, but an ever-growing body of experimental evidence offers much brighter prospects for future research in that direction. Once such Emotion-Physiology pairs are identified, there is high hope that some Sense-Friendly Features causally related, or highly correlated, to each pair may be identifiable in nature or in man-made objects. On the premise that certain emotions, if and when engendered by a consumer good, may be conducive to an urge "to own or to identify oneself with the product," presented here is a model of Sensory Engineering oriented toward identifying Emotion-Specific Physiology so that the Sense-Friendly Features can be reproduced in product designs. Relevant and complementary concepts and suggested procedures for implementing the proposed model are offered.


A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.1 / pp.1-6 / 2005
  • In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. Facial expression recognition is performed with multi-resolution analysis based on the discrete wavelet transform, with feature vectors obtained through ICA (Independent Component Analysis). For the speech signal, the recognition algorithm runs independently on each wavelet subband, and the final result is obtained from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous methods.
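
The merging step can be sketched as decision-level fusion of per-class probabilities from the two modalities; the paper's actual scheme operates per wavelet subband with a multi-decision procedure, so the weighted sum below is only a simplified stand-in with made-up probability vectors.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "dislike"]

def fuse(face_probs, speech_probs, w_face=0.5):
    """Decision-level fusion: weighted sum of per-modality class probabilities,
    then pick the emotion with the highest fused score."""
    fused = w_face * np.asarray(face_probs) + (1 - w_face) * np.asarray(speech_probs)
    return EMOTIONS[int(np.argmax(fused))]

# Face is unsure between anger and surprise; speech clearly favors anger.
face = [0.05, 0.05, 0.40, 0.40, 0.05, 0.05]
speech = [0.05, 0.05, 0.60, 0.10, 0.10, 0.10]
print(fuse(face, speech))  # → anger
```

The weight `w_face` lets one modality dominate when it is known to be more reliable for a given emotion, which is one reason fused systems outperform either modality alone.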

A Study on Method for Extracting Emotion from Painting Based on Color (색상 기반 회화 감성 추출 방법에 관한 연구)

  • Shim, Hyounoh;Park, Seongju;Yoon, Kyunghyun
    • Journal of Korea Multimedia Society / v.19 no.4 / pp.717-724 / 2016
  • Paintings can evoke emotions in viewers. In this paper, we propose a method for extracting emotion from paintings using the colors that compose them. We generate a color spectrum from the input painting and compare it against color combinations to find the most similar one; each color combination is mapped to emotional keywords, so the matched keyword is extracted as the emotion evoked by the painting. We also vary the matching algorithm between color spectrum and color combinations, and extract and compare the results of each.
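
The spectrum-to-combination matching can be sketched with color histograms and an L1 distance; the templates, color names, and keyword mapping below are hypothetical, and the paper's color combinations are far richer than these two.

```python
from collections import Counter

# Hypothetical color-combination templates mapped to emotion keywords.
TEMPLATES = {
    "calm": Counter({"blue": 6, "green": 3, "white": 1}),
    "passionate": Counter({"red": 7, "orange": 2, "black": 1}),
}

def extract_emotion(pixel_colors):
    """Build a color spectrum (histogram) from the painting's colors and return
    the keyword of the most similar template (smallest L1 distance)."""
    spectrum = Counter(pixel_colors)
    def l1(template):
        keys = set(spectrum) | set(template)
        return sum(abs(spectrum[k] - template[k]) for k in keys)
    return min(TEMPLATES, key=lambda name: l1(TEMPLATES[name]))

painting = ["red"] * 6 + ["orange"] * 3 + ["black"] * 1
print(extract_emotion(painting))  # → passionate
```

Swapping the distance function (L1, L2, histogram intersection, …) is exactly the kind of algorithm variation the abstract says the authors compare.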

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / v.19 no.3 / pp.148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among them, Speech Emotion Recognition (SER) recognizes the speaker's emotions from speech information; its success depends on selecting distinctive features and classifying them appropriately. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), after tuning model parameters, a two-dimensional CNN (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) of men and women. Examining the distribution of recognition accuracies across the neural network models, the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
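
A 2D-CNN treats the MFCC matrix as a one-channel image, so layer sizing follows the usual convolution arithmetic. The sketch below traces shapes through one convolution and one pooling layer under assumed sizes (40 coefficients × 173 frames); these sizes and layer choices are illustrative, not the paper's configuration.

```python
def conv2d_out(h, w, kernel=3, stride=1, padding=0):
    """Output height/width of a conv (or pooling) layer:
    floor((n + 2*padding - kernel) / stride) + 1 per dimension."""
    f = lambda n: (n + 2 * padding - kernel) // stride + 1
    return f(h), f(w)

# Treat the MFCC matrix as a 1-channel image: 40 coefficients x 173 frames.
h, w = 40, 173
h, w = conv2d_out(h, w, kernel=3, padding=1)   # 3x3 conv, same padding: 40 x 173
h, w = conv2d_out(h, w, kernel=2, stride=2)    # 2x2 max-pool: 20 x 86
print(h, w)  # → 20 86
```

Stacking a few such conv/pool pairs before a final fully connected layer over the flattened map is the generic 2D-CNN shape used for spectrogram-like inputs.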

Incomplete Cholesky Decomposition based Kernel Cross Modal Factor Analysis for Audiovisual Continuous Dimensional Emotion Recognition

  • Li, Xia;Lu, Guanming;Yan, Jingjie;Li, Haibo;Zhang, Zhengyan;Sun, Ning;Xie, Shipeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.810-831 / 2019
  • Recently, continuous dimensional emotion recognition from audiovisual cues has attracted increasing attention in both theory and practice. The large amount of data involved in the recognition process decreases the efficiency of most bimodal information fusion algorithms. In this paper, a novel algorithm, incomplete Cholesky decomposition based kernel cross-modal factor analysis (ICDKCFA), is presented and employed for continuous dimensional audiovisual emotion recognition. After the ICDKCFA feature transformation, two basic fusion strategies, feature-level fusion and decision-level fusion, are explored to combine the transformed visual and audio features. Finally, extensive experiments evaluate the ICDKCFA approach on the AVEC 2016 Multimodal Affect Recognition Sub-Challenge dataset. The results show that ICDKCFA is faster than the original kernel cross-modal factor analysis with comparable performance, and that it outperforms other common information fusion methods, such as canonical correlation analysis, kernel canonical correlation analysis, and cross-modal factor analysis based fusion.
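
The incomplete Cholesky step that gives ICDKCFA its speedup can be sketched on its own: a pivoted factorization of the kernel matrix that stops early, yielding a low-rank G with K ≈ GGᵀ so later kernel computations work on G instead of the full n×n matrix. The data below are synthetic; this shows only the kernel-matrix compression, not the full cross-modal factor analysis.

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-6):
    """Pivoted incomplete Cholesky: return G (n x m, m <= n) with K ≈ G @ G.T.
    Stops once the largest residual diagonal entry falls below tol."""
    n = K.shape[0]
    G = np.zeros((n, n))
    d = np.diag(K).astype(float)          # residual diagonal
    cols = 0
    for j in range(n):
        i = int(np.argmax(d))             # pivot: largest residual
        if d[i] < tol:
            break
        G[:, j] = (K[:, i] - G @ G[i, :]) / np.sqrt(d[i])
        d -= G[:, j] ** 2
        cols = j + 1
    return G[:, :cols]

# A low-rank RBF kernel matrix (30 points, only 3 distinct) compresses well.
rng = np.random.default_rng(0)
X = np.repeat(rng.normal(size=(3, 2)), 10, axis=0)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)                           # RBF kernel matrix, rank 3
G = incomplete_cholesky(K)
print(G.shape[1], np.allclose(K, G @ G.T, atol=1e-5))  # → 3 True
```

Because the factorization stops after `m` pivots, the cost is O(nm²) rather than the O(n³) of a full decomposition, which is the source of the reported speed advantage.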

Development of STEAM Program Based on Emotion Science for Students of Early Elementary School (초등학교 저학년 학생을 위한 감성과학 기반 융합인재교육(STEAM) 프로그램 개발)

  • Kwon, Jieun;Kwak, Sojung;Kim, HeaJin;Lee, SeJung
    • Science of Emotion and Sensibility / v.20 no.4 / pp.79-88 / 2017
  • As the importance of sensitivity grows, educating the future generation in emotion engineering, affective recognition, and cognitive science has taken center stage: human emotion is measured quantitatively, analyzed and evaluated, and applied to various services in daily life, all based on human technology, so education related to emotion science is needed to cultivate talented people. The goal of this paper is to demonstrate the possibility of, and effective methods for, emotion science education through the development of a STEAM (Science, Technology, Engineering, Arts, Mathematics) program that teaches emotion science to early elementary school students, applied in pilot classes. For this study, we first built a program, 'The mind made by figures,' for early elementary school students, using the STEAM method because it is an effective framework for teaching emotion science. We established the need for and value of this program through theory and benchmarking of STEAM programs related to emotion science, and then designed the class contents, activities, course book, and kit around the elementary school textbook of the relevant grade. Second, we analyzed the results of applying the program in two second-grade pilot classes through a satisfaction survey and teacher interviews. The average satisfaction level was very high (4.40/5), and class participation was especially high. Third, we discuss the capabilities, value, and limits of the program based on these results. The outcome shows that early elementary school students who have difficulty understanding science can approach an education program related to emotion science with ease and interest. We hope this education will help students understand emotion science effectively and train people to lead the emotion-centered era.

A Survey on Image Emotion Recognition

  • Zhao, Guangzhe;Yang, Hanting;Tu, Bing;Zhang, Lei
    • Journal of Information Processing Systems / v.17 no.6 / pp.1138-1156 / 2021
  • Emotional semantics are the highest level of semantics that can be extracted from an image. A system that automatically recognizes emotional semantics from images would be significant for marketing, smart healthcare, and deep human-computer interaction. To clarify the direction of image emotion recognition and its general research methods, we summarize current development trends and shed light on potential future research. The primary contributions of this paper are as follows: we investigate the color, texture, shape, and contour features used for emotional-semantics extraction; we establish two models that map images into emotional space and describe in detail the processes in the image emotional semantic recognition framework; and we discuss important datasets and useful applications in the field, such as garment images and image retrieval. We conclude with a brief discussion of future research trends.