• Title/Summary/Keyword: Emotion Classification


A Study on Robust Emotion Classification Structure Between Heterogeneous Speech Databases (이종 음성 DB 환경에 강인한 감성 분류 체계에 대한 연구)

  • Yoon, Won-Jung; Park, Kyu-Sik
    • The Journal of the Acoustical Society of Korea, v.28 no.5, pp.477-482, 2009
  • Emotion recognition systems in commercial environments such as call centers undergo severe performance degradation and instability due to differences in speech characteristics between the system's training database and the input speech of unspecified customers. To alleviate these problems, this paper extends the traditional neutral/anger emotion recognition method into a two-step hierarchical structure that exploits the characteristic emotional differences between male and female speech. The experimental results indicate that the proposed method provides stable and successful emotion classification, improving performance by about 25% over the traditional method.
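The two-step hierarchical structure described in this abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the pitch-based gender rule, the feature names, and the per-gender thresholds are all invented stand-ins; the paper's point is only that the second-stage emotion model is specialized per gender.

```python
def classify_gender(features):
    # Placeholder rule: assume mean pitch above 165 Hz indicates a female speaker.
    return "female" if features["mean_pitch_hz"] > 165 else "male"

def classify_emotion(features, gender):
    # Placeholder per-gender thresholds on pitch variability; each gender
    # gets its own specialized neutral/anger decision boundary.
    threshold = 40.0 if gender == "female" else 30.0
    return "anger" if features["pitch_std_hz"] > threshold else "neutral"

def hierarchical_classify(features):
    # Step 1: decide gender; step 2: apply the gender-specific emotion model.
    gender = classify_gender(features)
    return gender, classify_emotion(features, gender)
```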

Deep Learning based Emotion Classification using Multi Modal Bio-signals (다중 모달 생체신호를 이용한 딥러닝 기반 감정 분류)

  • Lee, JeeEun; Yoo, Sun Kook
    • Journal of Korea Multimedia Society, v.23 no.2, pp.146-154, 2020
  • Negative emotion causes stress and a lack of attention and concentration, so classifying negative emotion is important for recognizing risk factors. To assess emotional status, methods such as questionnaires and interviews are commonly used, but their results can vary with personal opinion. To address this problem, we acquire multi-modal bio-signals such as the electrocardiogram (ECG), skin temperature (ST), and galvanic skin response (GSR) and extract features from them. A neural network (NN), a deep neural network (DNN), and a deep belief network (DBN) are designed using the multi-modal bio-signals to analyze emotional status. As a result, the DBN based on features extracted from ECG, ST, and GSR shows the highest accuracy (93.8%): 5.7% higher than the NN, 1.4% higher than the DNN, and 12.2% higher than using only a single bio-signal (GSR). Multi-modal bio-signal acquisition and the deep learning classifier both play an important role in classifying emotion.
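The multi-modal step this abstract describes amounts to extracting features per signal and concatenating them into one vector for the classifier. A minimal sketch, assuming simple statistical features (the paper's actual feature set is not specified here), and omitting the DBN itself:

```python
import numpy as np

def extract_features(signal):
    # Per-modality stand-in features: mean, standard deviation, and range.
    s = np.asarray(signal, dtype=float)
    return [s.mean(), s.std(), s.max() - s.min()]

def multimodal_vector(ecg, st, gsr):
    # Concatenating per-modality features is the "multi-modal" fusion step;
    # the resulting vector is what an NN/DNN/DBN classifier would consume.
    return np.array(extract_features(ecg) + extract_features(st) + extract_features(gsr))

vec = multimodal_vector([0.1, 0.5, 0.2], [36.5, 36.6, 36.7], [2.0, 2.4, 2.2])
print(vec.shape)  # (9,): three features per modality, three modalities
```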

Affective Computing in Education: Platform Analysis and Academic Emotion Classification

  • So, Hyo-Jeong; Lee, Ji-Hyang; Park, Hyun-Jin
    • International journal of advanced smart convergence, v.8 no.2, pp.8-17, 2019
  • The main purpose of this study is to explore the potential of affective computing (AC) platforms in education through two phases of research: Phase I, platform analysis, and Phase II, classification of academic emotions. In Phase I, the results indicate that existing affective analysis platforms can be largely classified into four types according to their emotion-detecting methods: (a) facial expression-based platforms, (b) biometric-based platforms, (c) text/verbal tone-based platforms, and (d) mixed-methods platforms. In Phase II, we conducted an in-depth analysis of the emotional experiences that a learner encounters in online video-based learning in order to establish the basis for a new classification system of online learners' emotions. Overall, positive emotions were shown more frequently and for longer than negative emotions. We categorized positive emotions into three groups based on the facial expression data: (a) confidence; (b) excitement, enjoyment, and pleasure; and (c) aspiration, enthusiasm, and expectation. The same method was used to categorize negative emotions into four groups: (a) fear and anxiety, (b) embarrassment and shame, (c) frustration and alienation, and (d) boredom. Drawing on these results, we propose a new classification scheme that can be used to measure and analyze how learners in online learning environments experience various positive and negative emotions, with facial expressions as the indicators.

Multiple Regression-Based Music Emotion Classification Technique (다중 회귀 기반의 음악 감성 분류 기법)

  • Lee, Dong-Hyun; Park, Jung-Wook; Seo, Yeong-Seok
    • KIPS Transactions on Software and Data Engineering, v.7 no.6, pp.239-248, 2018
  • Many new technologies are being studied with the arrival of the 4th industrial revolution; emotional intelligence, in particular, is a popular issue. Researchers have focused on emotion analysis for music services based on artificial intelligence and pattern recognition, but they do not consider how to recommend music matched to a user's specific emotion, which is a practical issue for music-related IoT applications. Thus, in this paper, we propose a probability-based music emotion classification technique that makes it possible to classify music with high precision over a range of emotions when developing music-related services. For user emotion recognition, Russell's model, one of the popular emotion models, is referenced. As musical features, the average amplitude, peak average, number of wavelengths, average wavelength, and beats per minute were extracted. Multiple regressions were derived using regression analysis on the collected data, and probability-based emotion classification was carried out. In our two experiments, the emotion matching rate was 70.94% and 86.21% with the proposed technique, versus 66.83% and 76.85% for the survey participants. These experiments show that the proposed technique yields improved results for music classification.
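The final classification step under Russell's model reduces to mapping a predicted (valence, arousal) pair onto one of four quadrants. A hedged sketch of that step only, with the regression models omitted and the quadrant labels chosen as plausible assumptions rather than the paper's own categories:

```python
def russell_quadrant(valence, arousal):
    # Russell's circumplex model places emotions on a valence (pleasant vs.
    # unpleasant) x arousal (activated vs. deactivated) plane; here the
    # regression outputs are assumed to be centered at zero.
    if valence >= 0 and arousal >= 0:
        return "happy/excited"      # pleasant, high arousal
    if valence < 0 and arousal >= 0:
        return "angry/tense"        # unpleasant, high arousal
    if valence < 0:
        return "sad/depressed"      # unpleasant, low arousal
    return "calm/relaxed"           # pleasant, low arousal
```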

A Study on The Improvement of Emotion Recognition by Gender Discrimination (성별 구분을 통한 음성 감성인식 성능 향상에 대한 연구)

  • Cho, Youn-Ho; Park, Kyu-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP, v.45 no.4, pp.107-114, 2008
  • In this paper, we constructed a speech emotion recognition system that classifies four emotions (neutral, happy, sad, and anger) from speech based on male/female gender discrimination. The proposed system first distinguishes whether a queried speech sample is male or female; performance can then be improved by using separate, per-gender optimized feature vectors for emotion classification. As the emotion feature, this paper adopts ZCPA (Zero Crossings with Peak Amplitudes), well known in the speech recognition area for its noise-robust characteristics, and the features are optimized using the SFS method. For emotion pattern classification, k-NN and SVM classifiers are compared experimentally. The simulation results show the proposed system to be highly effective for speech emotion classification, achieving about 85.3% accuracy over the four emotion states. This suggests the proposed system could be used in applications such as call centers, humanoid robots, and ubiquitous computing.
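The SFS (sequential forward selection) step mentioned above can be sketched generically: starting from an empty set, greedily add the feature that most improves a scoring function. In this sketch the scoring function is a stand-in for cross-validated classifier accuracy, which is what the paper would actually optimize:

```python
def sfs(n_features, score, k):
    # Greedy forward selection: at each step, add the feature index whose
    # inclusion maximizes score(selected_subset).
    selected = []
    remaining = list(range(n_features))
    for _ in range(k):
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

With a toy additive score over four features weighted [0.1, 0.9, 0.3, 0.5], `sfs(4, score, 2)` picks index 1 first, then index 3.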

A Selection of Optimal EEG Channel for Emotion Analysis According to Music Listening using Stochastic Variables (확률변수를 이용한 음악에 따른 감정분석에의 최적 EEG 채널 선택)

  • Byun, Sung-Woo; Lee, So-Min; Lee, Seok-Pil
    • The Transactions of The Korean Institute of Electrical Engineers, v.62 no.11, pp.1598-1603, 2013
  • Recently, research analyzing the relationship between emotional state and musical stimuli has been increasing. In many previous works, data from all extracted channels are used for pattern classification, but these methods suffer from computational complexity and inaccuracy. This paper proposes a method for selecting the optimal EEG channel to efficiently reflect the emotional state during music listening by analyzing stochastic feature vectors. This makes EEG pattern classification relatively simple by reducing the amount of data to process.
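Channel selection by a stochastic criterion can be illustrated with a Fisher-like score: pick the channel whose values best separate the two emotion classes. The exact statistic the paper uses may differ; this is a minimal sketch of the general idea:

```python
import numpy as np

def fisher_score(x_pos, x_neg):
    # Between-class mean separation divided by within-class variance;
    # the small epsilon guards against division by zero.
    num = (x_pos.mean() - x_neg.mean()) ** 2
    den = x_pos.var() + x_neg.var() + 1e-12
    return num / den

def best_channel(data_pos, data_neg):
    # data_*: arrays of shape (n_trials, n_channels), one per emotion class.
    scores = [fisher_score(data_pos[:, c], data_neg[:, c])
              for c in range(data_pos.shape[1])]
    return int(np.argmax(scores))
```

Keeping only the top-scoring channel (or a few of them) is what shrinks the dataset the downstream pattern classifier must process.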

Multimodal Parametric Fusion for Emotion Recognition

  • Kim, Jonghwa
    • International journal of advanced smart convergence, v.9 no.1, pp.193-201, 2020
  • The main objective of this study is to investigate the impact of additional modalities on emotion recognition performance using speech, facial expression, and physiological measurements. To compare different approaches, we designed a feature-based recognition system as a benchmark that carries out linear supervised classification followed by leave-one-out cross-validation. For the classification of four emotions, bimodal fusion in our experiment improved the recognition accuracy of the unimodal approach, while the performance of trimodal fusion varied strongly across individuals. Furthermore, we observed extremely high disparity between single-class recognition rates, and no single modality performed best in our experiment. Based on these observations, we developed a novel fusion method, called parametric decision fusion (PDF), which builds emotion-specific classifiers and exploits the advantages of a parameterized decision process. Using the PDF scheme, we achieved a 16% improvement in accuracy for subject-dependent recognition and 10% for subject-independent recognition compared to the best unimodal results.
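Decision-level fusion in the spirit of the PDF scheme can be sketched as follows. This is an assumption-laden illustration, not the paper's algorithm: each modality yields per-emotion scores, and per-emotion, per-modality weights (the "parametric" part, which would be tuned on validation data) rescale them before the final decision.

```python
EMOTIONS = ["joy", "anger", "sadness", "pleasure"]

def fuse(scores_by_modality, weights):
    # scores_by_modality: {modality: {emotion: score}}
    # weights:            {modality: {emotion: weight}}
    # The fused score for each emotion is a weighted sum across modalities;
    # the emotion with the highest fused score wins.
    fused = {}
    for e in EMOTIONS:
        fused[e] = sum(weights[m][e] * scores_by_modality[m][e]
                       for m in scores_by_modality)
    return max(fused, key=fused.get)
```

Because the weights are indexed per emotion, one modality can dominate the decision for the classes it recognizes well, which is the stated motivation for building emotion-specific classifiers.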

Development of Bio-sensor-Based Feature Extraction and Emotion Recognition Model (바이오센서 기반 특징 추출 기법 및 감정 인식 모델 개발)

  • Cho, Ye Ri; Pae, Dong Sung; Lee, Yun Kyu; Ahn, Woo Jin; Lim, Myo Taeg; Kang, Tae Koo
    • The Transactions of The Korean Institute of Electrical Engineers, v.67 no.11, pp.1496-1505, 2018
  • Emotion recognition technology is necessary for human-computer interaction, since there are many cases where one cannot communicate without considering the other's emotion. As such, it is an essential element in the field of communication and is highly utilized across various fields. Various bio-sensors are used for human emotion recognition and can be used to measure emotion. This paper proposes a system for recognizing human emotion using two physiological sensors. For emotion classification, Russell's two-dimensional emotion model was used, and a personalized classification method was proposed by extracting sensor-specific features. The emotion model was divided into four emotions using the Support Vector Machine classification algorithm. Finally, the proposed emotion recognition system was evaluated through a practical experiment.

Emotion Classification System for Chatting Data (채팅 데이터의 기분 분류 시스템)

  • Yoon, Young-Mi; Lee, Young-Ho
    • Journal of the Korea Society of Computer and Information, v.14 no.5, pp.11-17, 2009
  • The proportion of online communication that takes place over internet messengers is increasing, yet few applications efficiently utilize messenger communication data. Such data reflect the user's linguistic habits, which are revealed through frequently used words and emoticons, and the user's emotions can be inferred from these. This paper proposes a method that efficiently classifies a messenger user's emotions using frequently used words and symbols. Over repeated experiments, the emotion classifier achieves a high accuracy of more than 95%.
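The word-and-emoticon approach above can be sketched as a toy lexicon classifier: count a message's hits against per-emotion sets of frequent words and emoticons. The lexicons here are invented examples, not the paper's learned ones, and real chat text would need more careful tokenization:

```python
# Hypothetical per-emotion lexicons of frequent words and emoticons.
LEXICON = {
    "happy": {"lol", "haha", ":)", ":d"},
    "sad": {"sigh", ":(", "t_t"},
    "angry": {">:(", "ugh", "argh"},
}

def classify_chat(message):
    # Count lexicon hits per emotion and return the best match,
    # falling back to "neutral" when nothing matches.
    tokens = message.lower().split()
    counts = {emo: sum(t in words for t in tokens)
              for emo, words in LEXICON.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "neutral"
```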