• Title/Summary/Keyword: emotion engineering


Designing emotional model and Ontology based on Korean to support extended search of digital music content (디지털 음악 콘텐츠의 확장된 검색을 지원하는 한국어 기반 감성 모델과 온톨로지 설계)

  • Kim, SunKyung;Shin, PanSeop;Lim, HaeChull
    • Journal of the Korea Society of Computer and Information / v.18 no.5 / pp.43-52 / 2013
  • In recent years, a large amount of music content has been distributed in the Internet environment. Various studies have been carried out to effectively retrieve the music content that users want. In particular, music recommendation systems that combine emotion models with MIR (Music Information Retrieval) techniques are being actively developed. However, these studies have several drawbacks. First, the structure of the emotion model used is simple. Second, because the emotion model was not designed for the Korean language, its ability to process the semantics of emotional words expressed in Korean is limited. In this paper, by extending an existing emotion model, we propose KOREM (KORean Emotional Model), a new emotion model based on Korean. We also design and implement an ontology using the proposed emotion model. Through them, music content described with various emotional expressions can be sorted, stored, and retrieved.
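The abstract above does not show KOREM's structure, but the core retrieval idea (Korean emotion words resolved through an ontology of emotion classes, then matched against tagged music content) can be sketched minimally. All class names, words, and track tags below are illustrative assumptions, not the paper's ontology.

```python
# Minimal sketch: emotion words (Korean surface forms) map to emotion
# classes, and music tracks are tagged with those classes.
EMOTION_CLASSES = {
    "joy": {"기쁘다", "신나다", "즐겁다"},
    "sadness": {"슬프다", "우울하다"},
    "calm": {"차분하다", "잔잔하다"},
}

TRACKS = {
    "track_a": {"joy"},
    "track_b": {"sadness", "calm"},
    "track_c": {"calm"},
}

def emotion_class_of(word):
    """Resolve a Korean emotion word to its emotion class, if known."""
    for cls, words in EMOTION_CLASSES.items():
        if word in words:
            return cls
    return None

def search(word):
    """Return tracks whose tags include the emotion class of the query word."""
    cls = emotion_class_of(word)
    if cls is None:
        return []
    return sorted(t for t, tags in TRACKS.items() if cls in tags)

print(search("잔잔하다"))  # → ['track_b', 'track_c']
```

A real ontology would also encode relations between classes (e.g. intensity, near-synonymy) so that a query can be expanded beyond exact class membership.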

Maximum Entropy-based Emotion Recognition Model using Individual Average Difference (개인별 평균차를 이용한 최대 엔트로피 기반 감성 인식 모델)

  • Park, So-Young;Kim, Dong-Keun;Whang, Min-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.7 / pp.1557-1564 / 2010
  • In this paper, we propose a maximum entropy-based emotion recognition model that uses the individual average difference of emotional signals, because emotional signal patterns vary from person to person. To recognize a user's emotion accurately, the proposed model utilizes the difference between the average of the input emotional signals and the average of each emotional state's signals (such as positive emotional signals and negative emotional signals), rather than the given input signal alone. So that the emotion recognition model can be constructed easily without professional knowledge of emotion recognition, it employs a maximum entropy model, one of the best-performing and most widely used machine learning techniques. Considering that it is difficult to obtain enough numerical emotional-signal training data for machine learning, the proposed model substitutes the simple symbols + (positive number) and - (negative number) for every average difference value, and calculates the average of the emotional signals per second rather than over the total emotion response time (10 seconds).
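The two ideas in this abstract, reducing average differences to +/- symbols and feeding them to a maximum entropy (log-linear/softmax) model, can be sketched as follows. The state means, feature weights, and signal values are illustrative assumptions; in the paper the weights would come from training.

```python
import math
from statistics import mean

def sign_features(signal, state_means):
    """Reduce the input's average to +/- symbols by comparing it against
    each stored emotional state's average (the paper's symbol substitution)."""
    avg = mean(signal)
    return {f"{state}:{'+' if avg >= m else '-'}" for state, m in state_means.items()}

def maxent_probs(features, weights):
    """Maximum entropy decision rule: softmax over summed feature weights.
    `weights` maps (label, feature) -> weight."""
    labels = {label for label, _ in weights}
    scores = {label: sum(w for (l, f), w in weights.items()
                         if l == label and f in features)
              for label in labels}
    z = sum(math.exp(s) for s in scores.values())
    return {label: math.exp(s) / z for label, s in scores.items()}

# Illustrative per-state signal averages and trained weights.
state_means = {"positive": 0.6, "negative": 0.4}
weights = {("positive", "positive:+"): 1.0, ("negative", "negative:-"): 1.0,
           ("positive", "negative:+"): 0.5, ("negative", "positive:-"): 0.5}

feats = sign_features([0.7, 0.65, 0.6], state_means)  # avg ≈ 0.65
probs = maxent_probs(feats, weights)
```

The symbolic +/- features keep the feature space tiny, which is exactly what makes the model trainable from the small amount of data the abstract mentions.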

A Study on The Improvement of Emotion Recognition by Gender Discrimination (성별 구분을 통한 음성 감성인식 성능 향상에 대한 연구)

  • Cho, Youn-Ho;Park, Kyu-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.4 / pp.107-114 / 2008
  • In this paper, we construct a speech emotion recognition system that classifies four emotions (neutral, happy, sad, and angry) from speech based on male/female gender discrimination. The proposed system first distinguishes whether the queried speech is male or female; performance is then improved by using a separate, gender-optimized feature vector for emotion classification. As the emotion feature vector, this paper adopts ZCPA (Zero Crossings with Peak Amplitudes), well known in the speech recognition area for its noise-robust characteristics, and the features are optimized using the SFS method. For pattern classification of emotion, k-NN and SVM classifiers are compared experimentally. Computer simulation shows that the proposed system is highly effective for speech emotion classification, achieving about 85.3% accuracy over the four emotional states. This suggests the proposed system could be used in applications such as call centers, humanoid robots, and ubiquitous computing.
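The two-stage design above (gender first, then a per-gender emotion classifier) can be sketched with a plain k-NN in place of the paper's ZCPA/SFS pipeline. The one-dimensional toy features and all training points are illustrative assumptions; each gender would really use its own SFS-selected feature set.

```python
def knn_predict(train, x, k=3):
    """Plain k-NN: squared Euclidean distance, majority vote."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(fx, x)), label)
                   for fx, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

def classify_emotion(x, gender_model, emotion_models, k=3):
    """Two-stage pipeline: decide the speaker's gender, then apply that
    gender's own emotion classifier."""
    gender = knn_predict(gender_model, x, k)
    return gender, knn_predict(emotion_models[gender], x, k)

# Illustrative toy data (1-D features for brevity).
gender_model = [((0.1,), "male"), ((0.2,), "male"),
                ((0.8,), "female"), ((0.9,), "female")]
emotion_models = {
    "male": [((0.1,), "neutral"), ((0.15,), "neutral"), ((0.2,), "angry")],
    "female": [((0.8,), "happy"), ((0.85,), "sad"), ((0.9,), "happy")],
}

result = classify_emotion((0.12,), gender_model, emotion_models)
```

Routing to a gender-specific model is what lets each branch use a feature set optimized for that gender, which is where the reported accuracy gain comes from.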

Prototype of Emotion Recognition System for Treatment of Autistic Spectrum Disorder (자폐증 치료를 위한 감성인지 시스템 프로토타입)

  • Chung, Seong Youb
    • Journal of Institute of Convergence Technology / v.1 no.2 / pp.1-5 / 2011
  • It is known that as many as 15-20 in 10,000 children are diagnosed with autistic spectrum disorder. A framework for a treatment system for children with autism using affective computing technologies was proposed by Chung and Yoon. In this paper, a prototype of that framework is proposed. It consists of an emotion stimulation module, a multi-modal bio-signal sensing module, a treatment module using virtual reality, and an emotion recognition module. Preliminary experiments on emotion recognition show the usefulness of the proposed system.


Emotional Model via Human Psychological Test and Its Application to Image Retrieval (인간심리를 이용한 감성 모델과 영상검색에의 적용)

  • Yoo, Hun-Woo;Jang, Dong-Sik
    • Journal of Korean Institute of Industrial Engineers / v.31 no.1 / pp.68-78 / 2005
  • A new emotion-based image retrieval method is proposed in this paper. The research was motivated by Soen's evaluation of human emotion on color patterns. Thirteen pairs of adjectives expressing emotion (like-dislike, beautiful-ugly, natural-unnatural, dynamic-static, warm-cold, gay-sober, cheerful-dismal, unstable-stable, light-dark, strong-weak, gaudy-plain, hard-soft, heavy-light) are modeled offline by a 19-dimensional color array and a 4×3 gray matrix. Once a query is presented in text format, emotion model-based query formulation produces the associated color array and gray matrix. Images related to the query are then retrieved from the database based on the multiplication of the color arrays and gray matrices extracted from the query and each database image. Experiments over 450 images showed an average retrieval rate of 0.61 using the color array alone and 0.47 using the gray matrix alone.
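The retrieval step, mapping a text query to its learned color array and ranking images by the multiplication (inner product) of query and image arrays, can be sketched as below. The 3-dimensional vectors and the "warm" model values are illustrative stand-ins for the paper's 19-dimensional color arrays.

```python
def similarity(query_vec, image_vec):
    """Retrieval score as the summed element-wise product (inner product),
    following the abstract's multiplication of query and image arrays."""
    return sum(q * v for q, v in zip(query_vec, image_vec))

def retrieve(query_word, emotion_model, database, top_k=2):
    """Look up the query word's learned array, then rank database images
    by inner-product similarity."""
    qvec = emotion_model[query_word]
    ranked = sorted(database.items(),
                    key=lambda kv: similarity(qvec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Illustrative toy model and database (real arrays are 19-dimensional).
emotion_model = {"warm": [1.0, 0.2, 0.0]}
database = {
    "sunset": [0.9, 0.1, 0.0],
    "glacier": [0.0, 0.1, 0.9],
    "beach": [0.7, 0.3, 0.1],
}

top = retrieve("warm", emotion_model, database)
```

The inner product rewards images whose color distribution overlaps the query emotion's learned distribution, which is why the color array alone already achieves the higher of the two reported retrieval rates.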

A Comparison of Effective Feature Vectors for Speech Emotion Recognition (음성신호기반의 감정인식의 특징 벡터 비교)

  • Shin, Bo-Ra;Lee, Soek-Pil
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.10 / pp.1364-1369 / 2018
  • Speech emotion recognition (SER), which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim of this paper is to discuss and compare the various approaches used for feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
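Feature vectors of the kind such surveys compare are typically frame-level prosodic measures pooled into utterance-level statistics. A minimal sketch, assuming short-time energy and zero-crossing rate as the frame features (two common choices, not necessarily the ones this paper ranks highest):

```python
from statistics import mean, pstdev

def frame(signal, size, hop):
    """Split a signal into overlapping frames."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, hop)]

def frame_energy(fr):
    """Mean squared amplitude of one frame."""
    return sum(s * s for s in fr) / len(fr)

def zero_crossing_rate(fr):
    """Fraction of adjacent sample pairs with a sign change."""
    return sum(1 for a, b in zip(fr, fr[1:]) if a * b < 0) / (len(fr) - 1)

def utterance_features(signal, size=4, hop=2):
    """Pool frame-level features into an utterance-level vector
    (mean/std of energy and ZCR), one common SER feature recipe."""
    frames = frame(signal, size, hop)
    e = [frame_energy(f) for f in frames]
    z = [zero_crossing_rate(f) for f in frames]
    return [mean(e), pstdev(e), mean(z), pstdev(z)]

fv = utterance_features([0.0, 1.0, -1.0, 1.0, -1.0, 0.5, -0.5, 0.2])
```

Comparing feature sets then reduces to swapping the frame-level functions (pitch, MFCCs, spectral measures, etc.) while keeping the pooling fixed.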

Extracting and Clustering of Story Events from a Story Corpus

  • Yu, Hye-Yeon;Cheong, Yun-Gyung;Bae, Byung-Chull
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.10 / pp.3498-3512 / 2021
  • This article describes how the events that make up text stories can be represented and extracted. We also present the results of a simple experiment on extracting and clustering events in terms of emotions, under the assumption that different emotional events can be associated with the classified clusters. Each emotion cluster is based on Plutchik's eight basic emotions, and the attributes of NLTK-VADER are used as the classification criterion. While comparison of the results with human raters shows lower accuracy for certain emotion types, emotions such as joy and sadness show relatively high accuracy. Evaluation against the NRC Word Emotion Association Lexicon (EmoLex) shows high accuracy (more than 90% for anger, disgust, fear, and surprise), though precision and recall are relatively low.
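The lexicon-based assignment of events to Plutchik emotions can be sketched without the real VADER/EmoLex resources. The tiny word-to-emotion lexicon below is a purely illustrative stand-in for EmoLex-style associations.

```python
from collections import Counter

# Toy stand-in for NRC EmoLex-style word -> emotion associations
# (entries are illustrative, not taken from the real lexicon).
LEXICON = {
    "smile": {"joy"}, "laugh": {"joy"},
    "cry": {"sadness"}, "funeral": {"sadness"},
    "scream": {"fear", "surprise"},
    "fight": {"anger"},
}

PLUTCHIK = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

def classify_event(words):
    """Assign an event (a bag of words) to the Plutchik emotion most
    supported by its words; None if no word is in the lexicon."""
    counts = Counter(e for w in words for e in LEXICON.get(w, ()))
    if not counts:
        return None
    return max(PLUTCHIK, key=lambda e: counts[e])
```

Clustering then groups events by their assigned emotion; the article's accuracy figures come from comparing such assignments against human raters and EmoLex.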

The Influence of "Healing Animation" on Psychological Capital of College Students - Based on Emotion Regulation Theory - (치유계 애니메이션이 대학생 심리에 미치는영향 - 정서조절이론 실증연구를 중심으로 -)

  • Ren, Meng Jie;Kim, Chee-Yong
    • Journal of Korea Multimedia Society / v.25 no.9 / pp.1338-1347 / 2022
  • In recent years, about 30% of Chinese college students have been in an unhealthy or sub-healthy state of mental health, and the proportion has been rising. College students are in the middle of youth, an important stage of physical and psychological development, so how to regulate mental health has become an important research direction. Based on emotion regulation theory, this thesis discusses the impact that 'healing animation' has on college students' psychological capital, and how emotion regulation works in the process. According to documentary research and an empirical study, 'healing animation' positively affects the psychological capital of college students through the medium of emotion regulation. Finally, this thesis provides a practical approach to building college students' psychological capital, as well as a theoretical reference for 'healing animation'.

A Design of Artificial Emotion Model (인공 감정 모델의 설계)

  • Lee, In-Geun;Seo, Seok-Tae;Jeong, Hye-Cheon;Gwon, Sun-Hak
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.58-62 / 2007
  • Alongside research on recognizing human emotional states from human-produced speech, facial-expression images, and text, research is also being conducted on artificial emotion, which imitates human emotion by generating emotions from various external stimuli. However, existing artificial emotion research changes the emotional state in response to external emotional stimuli linearly or exponentially, so the emotional state changes abruptly. In this paper, we propose an emotion generation model that reflects not only the intensity and frequency of external emotional stimuli but also the repetition period of the stimuli, and expresses the change of emotion over time as a sigmoid curve. We also propose an artificial emotion system that can generate emotions even in the absence of external emotional stimuli, through recollection of previous emotional stimuli.
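The sigmoid-shaped emotion response can be sketched as a sum of per-stimulus sigmoid curves. The exact shaping and scaling below are assumptions; the paper's model additionally weights the repetition period, which is only approximated here by summing repeated stimuli.

```python
import math

def sigmoid(t, k=1.0):
    """Standard logistic function with steepness k."""
    return 1.0 / (1.0 + math.exp(-k * t))

def emotion_intensity(stimuli, t, k=1.0):
    """Emotion level at time t as a sum of sigmoid responses: each stimulus
    (t0, strength) contributes a smooth rise starting near 0 at t0 and
    saturating at `strength`, instead of an abrupt linear/exponential jump."""
    level = 0.0
    for t0, strength in stimuli:
        if t >= t0:
            # Centered so the response is 0 at onset and tends to `strength`.
            level += strength * (sigmoid(t - t0, k) - 0.5) * 2.0
    return level

stimuli = [(0.0, 1.0)]  # one stimulus of strength 1.0 at time 0
```

A recollection mechanism, as the abstract describes, would re-inject attenuated copies of past stimuli into `stimuli` even when no external stimulus arrives.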


RECOGNIZING SIX EMOTIONAL STATES USING SPEECH SIGNALS

  • Kang, Bong-Seok;Han, Chul-Hee;Youn, Dae-Hee;Lee, Chungyong
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.366-369 / 2000
  • This paper examines three algorithms for recognizing a speaker's emotion from speech signals. The target emotions are happiness, sadness, anger, fear, boredom, and the neutral state. MLB (Maximum-Likelihood Bayes), NN (Nearest Neighbor), and HMM (Hidden Markov Model) algorithms are used as the pattern matching techniques. In all cases, pitch and energy are used as the features. The feature vectors for MLB and NN are composed of pitch mean, pitch standard deviation, energy mean, energy standard deviation, etc. For HMM, vectors of delta pitch with delta-delta pitch and delta energy with delta-delta energy are used. We recorded a corpus of emotional speech data and performed a subjective evaluation of the data. The subjective recognition rate was 56% and was compared with the classifiers' recognition rates. The MLB, NN, and HMM classifiers achieved recognition rates of 68.9%, 69.3%, and 89.1%, respectively, for speaker-dependent, context-independent classification.
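The MLB classifier over pitch/energy statistics can be sketched as per-class Gaussian maximum-likelihood estimation followed by a highest-likelihood decision. The two-dimensional features (pitch mean, energy mean) and all sample values are illustrative assumptions; the paper's vectors include more statistics.

```python
import math
from statistics import mean, pstdev

def fit_gaussians(samples):
    """Per-class, per-feature Gaussian ML estimates from labeled vectors."""
    model = {}
    for label, vectors in samples.items():
        dims = list(zip(*vectors))
        model[label] = [(mean(d), pstdev(d) or 1e-6) for d in dims]
    return model

def log_likelihood(x, params):
    """Log-likelihood of x under independent per-feature Gaussians."""
    ll = 0.0
    for v, (m, s) in zip(x, params):
        ll += -math.log(s * math.sqrt(2 * math.pi)) - (v - m) ** 2 / (2 * s * s)
    return ll

def mlb_classify(model, x):
    """Maximum-Likelihood Bayes: pick the class with the highest likelihood."""
    return max(model, key=lambda label: log_likelihood(x, model[label]))

# Illustrative (pitch mean [Hz], energy mean) samples per emotion.
samples = {
    "happy": [(200, 0.80), (210, 0.90), (190, 0.85)],
    "sad": [(120, 0.30), (110, 0.25), (130, 0.35)],
}
model = fit_gaussians(samples)
```

The NN classifier from the paper would instead keep the raw vectors and label a query by its closest training vector; HMM adds temporal structure via the delta features.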
