• Title/Summary/Keyword: classification of emotion

Search Results: 292

A Study on Classification of Four Emotions using EEG (뇌파를 이용한 4가지 감정 분류에 관한 연구)

  • 강동기;김동준;김흥환;고한우
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2001.11a / pp.87-90 / 2001
  • In this study, to find the parameters best suited to an emotion evaluation system, emotion classification experiments were conducted using three EEG parameters: linear predictor coefficients, and the band-wise cross-correlation coefficients of the FFT spectrum and of the AR spectrum. The target emotions were relaxation, joy, sadness, and irritation. EEG data were collected from four university drama-club students, using electrodes at Fp1, Fp2, F3, F4, T3, T4, P3, P4, O1, and O2. After preprocessing, feature parameters were extracted from the collected EEG data and fed into a neural network used as the pattern classifier. The classification results showed that the linear predictor coefficients performed better than the other two parameters.

  • PDF
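The linear predictor coefficients that won the comparison above are typically computed with the autocorrelation method. Below is a minimal NumPy sketch of the Levinson-Durbin recursion, not the paper's implementation; the AR(2) test signal and model order are invented for illustration.

```python
import numpy as np

def lpc(x, order):
    """Linear predictor coefficients via the autocorrelation method
    (Levinson-Durbin recursion). Returns a1..ap of the model
    x[t] + a1*x[t-1] + ... + ap*x[t-p] = e[t]."""
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        lam = -acc / err           # reflection coefficient
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + lam * prev[i - j]
        a[i] = lam
        err *= 1.0 - lam * lam     # prediction error shrinks each step
    return a[1:]

# Toy EEG-like segment: an AR(2) process whose coefficients the
# recursion should approximately recover.
rng = np.random.default_rng(0)
seg = np.zeros(6000)
noise = rng.normal(size=6000)
for t in range(2, 6000):
    seg[t] = 0.5 * seg[t - 1] - 0.3 * seg[t - 2] + noise[t]
print(lpc(seg, 2))   # close to [-0.5, 0.3]
```

In a setup like the paper's, the returned coefficient vector for each electrode segment would serve as the feature vector fed to the neural-network classifier.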

The Study of Changing Polysomnograph for 2 Dimension Emotion Classification (2차원 감성분류를 위한 생리신호 변화에 대한 연구)

  • 남승훈;황민철;임좌상;박흥국;조상현
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1999.11a / pp.396-400 / 1999
  • Human sensibility consists of multidimensional emotions. Based on the two-dimensional structure of emotion, this study attempted to classify the two-dimensional emotions of pleasure-displeasure and arousal-relaxation from physiological signals. Twenty male and female university students participated; the stimuli were defined as two-dimensional emotional stimuli, namely pleasure (Fendi perfume), displeasure (ethanol), arousal (siren), and relaxation (popular song), and the emotions were induced by a 2×2 stimulus presentation. With 26 male and female university students taking part in the experiment, the four emotions were induced, and the measured physiological signals were EEG (F3, P3, F4, P4), reflecting central nervous system activity, and ECG (lead II), GSR, and SKT, reflecting autonomic nervous system activity. A t-test on each measured signal extracted the significant variables: EEG F3 (beta), P3 (delta, beta), F4 (delta), and P4 (alpha); HRV HF and HF/LF; and the GSR rising time. With these variables, the two-dimensional emotions were classified.

  • PDF
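The t-test screening step described above (keep only features that differ significantly between conditions) can be sketched as follows. The feature matrices are synthetic stand-ins, not the study's recordings, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

# Hypothetical feature matrices: rows = trials, columns = candidate
# physiological features (e.g. EEG band powers, HRV indices, GSR rise time).
pleasant   = rng.normal(0.0, 1.0, size=(20, 6))
unpleasant = rng.normal(0.0, 1.0, size=(20, 6))
unpleasant[:, [1, 4]] += 1.5   # make two features genuinely discriminative

# Independent two-sample t-test per feature column
t_stats, p_values = ttest_ind(pleasant, unpleasant, axis=0)
selected = np.where(p_values < 0.05)[0]
print("selected feature indices:", selected)
```

Only the selected columns would then be used to separate the pleasure-displeasure (or arousal-relaxation) conditions, mirroring the variable extraction in the abstract.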

Group Emotion Prediction System based on Modular Bayesian Networks (모듈형 베이지안 네트워크 기반 대중 감성 예측 시스템)

  • Choi, SeulGi;Cho, Sung-Bae
    • Journal of KIISE / v.44 no.11 / pp.1149-1155 / 2017
  • Recently, with the development of communication technology, it has become possible to collect various sensor data that indicate the environmental stimuli within a space. In this paper, we propose a group emotion prediction system using a modular Bayesian network designed to reflect the psychological impact of environmental stimuli. A Bayesian network can compensate for the uncertain and incomplete characteristics of sensor data by probabilistically weighing the evidence during reasoning, and modularizing the network enables flexible response to, and efficient reasoning about, fluctuations of the environmental stimuli within the space. To verify the performance of the system, we predict group emotion from the brightness, volume, temperature, humidity, color temperature, sound, smell, and group emotion data collected in a kindergarten. Experimental results show that the accuracy of the proposed method is 85%, higher than that of the other classification methods compared. Using quantitative and qualitative analyses, we explore the possibilities and limitations of this probabilistic methodology for predicting group emotion.
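The modular probabilistic reasoning can be illustrated with a toy two-module network: an "auditory" module maps an observed volume level to a hidden arousal node, and a top module maps arousal to group emotion. All conditional probability tables below are invented for illustration, not the paper's learned parameters.

```python
# P(arousal | volume): the sensor-facing module
P_arousal_given_volume = {
    "loud":  {"high": 0.8, "low": 0.2},
    "quiet": {"high": 0.3, "low": 0.7},
}
# P(emotion | arousal): the top-level module
P_emotion_given_arousal = {
    "high": {"positive": 0.4, "negative": 0.6},
    "low":  {"positive": 0.7, "negative": 0.3},
}

def predict_emotion(volume):
    """Marginalize out the hidden arousal node:
    P(e | v) = sum_a P(e | a) * P(a | v)."""
    dist = {}
    for a, pa in P_arousal_given_volume[volume].items():
        for e, pe in P_emotion_given_arousal[a].items():
            dist[e] = dist.get(e, 0.0) + pa * pe
    return dist

print(predict_emotion("loud"))   # e.g. negative more likely when loud
```

Because each module exposes only its output distribution, a module for another stimulus (brightness, smell, etc.) can be swapped in or out without rebuilding the whole network, which is the flexibility the abstract attributes to modularization.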

Emotion Recognition of Low Resource (Sindhi) Language Using Machine Learning

  • Ahmed, Tanveer;Memon, Sajjad Ali;Hussain, Saqib;Tanwani, Amer;Sadat, Ahmed
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.369-376 / 2021
  • One of the most active areas of research in affective computing and signal processing is emotion recognition. This paper proposes emotion recognition for the low-resource Sindhi language. The uniqueness of this work is that it examines the emotions of a language for which no publicly accessible dataset previously existed. The effort provides a dataset named MAVDESS (Mehran Audio-Visual Database of Emotional Speech in Sindhi) for the academic community of the major Sindhi language, spoken mainly in Pakistan, for which very little generic machine-learning data is available. Furthermore, the various emotions of the Sindhi language in MAVDESS were analyzed and annotated using features such as pitch, volume, and base, with toolkits such as OpenSmile and Scikit-Learn, and classification schemes such as LR, SVC, DT, and KNN, trained in Python. The dataset can be accessed via https://doi.org/10.5281/zenodo.5213073.
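The feature-plus-classifier recipe above can be sketched with crude NumPy stand-ins for the acoustic features and the four scikit-learn classifier families the abstract names. The synthetic clips, feature choices, and class labels are invented; a real pipeline would extract features with a toolkit such as OpenSmile.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def features(wave):
    """Crude acoustic stand-ins: RMS energy and zero-crossing rate."""
    rms = float(np.sqrt(np.mean(wave ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(wave))) > 0))
    return [rms, zcr]

# Synthetic "angry" (loud, noisy) vs "calm" (soft, tonal) clips
t = np.linspace(0, 1, 8000)
X, y = [], []
for _ in range(40):
    X.append(features(0.9 * rng.normal(size=8000)))
    y.append("angry")
    X.append(features(0.2 * np.sin(2 * np.pi * 220 * t)
                      + 0.01 * rng.normal(size=8000)))
    y.append("calm")

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y),
                                      test_size=0.25, random_state=0)
accs = {}
for clf in (LogisticRegression(), SVC(),
            DecisionTreeClassifier(), KNeighborsClassifier()):
    accs[type(clf).__name__] = clf.fit(Xtr, ytr).score(Xte, yte)
print(accs)
```

On real emotional speech the classes overlap far more than in this toy setup, which is why the abstract compares several classifier families rather than committing to one.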

Wavelet-based Statistical Noise Detection and Emotion Classification Method for Improving Multimodal Emotion Recognition (멀티모달 감정인식률 향상을 위한 웨이블릿 기반의 통계적 잡음 검출 및 감정분류 방법 연구)

  • Yoon, Jun-Han;Kim, Jin-Heon
    • Journal of IKEEE / v.22 no.4 / pp.1140-1146 / 2018
  • Recently, methodologies that analyze complex bio-signals with deep learning models have emerged among studies that recognize human emotions. The accuracy of emotion classification can vary with the evaluation method, and its reliability with the kind of data being learned. For biological signals, the reliability of the data is determined by the noise ratio, so the noise detection method is correspondingly important; likewise, the emotion evaluation method must suit the methodology used to define emotions. In this paper, we propose a wavelet-based noise-threshold-setting algorithm for verifying the reliability of multimodal bio-signal data labeled with Valence and Arousal, together with a method for improving the emotion recognition rate by weighting the evaluation data. After extracting the wavelet components of the signal with the wavelet transform, the skewness and kurtosis of each component are computed, noise is detected at a threshold calculated by the Hampel identifier, and the training data are selected according to the noise ratio of the original signal. In addition, when classifying emotional data, the overall evaluation of the emotion recognition rate is weighted by the Euclidean distance from the median of the Valence-Arousal plane. To verify the proposed algorithm, the ASCERTAIN dataset is used to observe the improvement in the emotion recognition rate.
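The Hampel identifier mentioned above flags samples that deviate from a local median by more than a few robust standard deviations (1.4826 × the median absolute deviation). A minimal NumPy sketch follows; the window length, threshold, and test signal are illustrative, not the paper's settings.

```python
import numpy as np

def hampel_mask(x, window=5, n_sigma=3.0):
    """Return a boolean mask marking samples flagged as outliers by a
    Hampel identifier with a sliding window of 2*window + 1 samples."""
    x = np.asarray(x, dtype=float)
    mask = np.zeros(len(x), dtype=bool)
    for i in range(len(x)):
        lo, hi = max(0, i - window), min(len(x), i + window + 1)
        med = np.median(x[lo:hi])
        # Robust standard deviation from the median absolute deviation
        sigma = 1.4826 * np.median(np.abs(x[lo:hi] - med))
        if sigma > 0 and abs(x[i] - med) > n_sigma * sigma:
            mask[i] = True
    return mask

sig = np.sin(np.linspace(0, 6, 200))
sig[50] += 5.0                      # inject an artifact
mask = hampel_mask(sig)
print(np.where(mask)[0])            # the injected artifact is flagged
```

In a pipeline like the one in the abstract, the fraction of flagged samples per segment would serve as the noise ratio used to accept or reject training data.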

Smart Emotion Management System based on multi-biosignal Analysis using Artificial Intelligence (인공지능을 활용한 다중 생체신호 분석 기반 스마트 감정 관리 시스템)

  • Noh, Ayoung;Kim, Youngjoon;Kim, Hyeong-Su;Kim, Won-Tae
    • Journal of IKEEE / v.21 no.4 / pp.397-403 / 2017
  • In modern society, psychological illnesses and impulsive crimes caused by stress are occurring. To reduce stress, the existing treatment methods consist of repeated counseling visits to determine the psychological state and prescribe medication or psychotherapy. Although this face-to-face counseling is effective, determining the patient's state takes considerable time, and continuous management is difficult depending on the individual's situation, limiting treatment efficiency. In this paper, we propose an artificial-intelligence emotion management system that monitors the user's emotions in real time and guides them toward a stable state. The system measures multiple bio-signals with PPG and GSR sensors, preprocesses the data into appropriate forms, and classifies four typical emotional states (pleasure, relaxation, sadness, and fear) with the SVM algorithm. Through experiments, we verify that when the classification result is judged to be a negative state such as sadness or fear, the real-time emotion management service guides the user's emotion to a stable state.
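The classification stage can be sketched with scikit-learn's SVC. The two features (a heart-rate-like value from PPG and a skin-conductance-like value from GSR) and the cluster centres below are invented stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
emotions = ["pleasure", "relaxation", "sadness", "fear"]
# Hypothetical per-emotion centres: (mean heart rate, mean conductance)
centres = {"pleasure": (75, 6), "relaxation": (60, 2),
           "sadness": (65, 3), "fear": (95, 9)}

X = np.vstack([rng.normal(centres[e], 0.8, size=(30, 2)) for e in emotions])
y = np.repeat(emotions, 30)

# Scale the features, then fit an RBF-kernel SVM
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)

sample = [[94.0, 8.8]]              # high heart rate + high conductance
pred = model.predict(sample)[0]
print(pred)                         # lands near the "fear" cluster
```

A prediction of a negative state ("sadness" or "fear") would then be the trigger for the real-time emotion management service the abstract describes.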

Korean Facial Expression Emotion Recognition based on Image Meta Information (이미지 메타 정보 기반 한국인 표정 감정 인식)

  • Hyeong Ju Moon;Myung Jin Lim;Eun Hee Kim;Ju Hyun Shin
    • Smart Media Journal / v.13 no.3 / pp.9-17 / 2024
  • Due to the recent pandemic and the development of ICT, the use of non-face-to-face and unmanned systems is expanding, and understanding emotions is very important for communication in non-face-to-face situations. Since emotion recognition methods that cover diverse facial expressions are required, artificial-intelligence research is being conducted to improve facial expression emotion recognition on image data. However, existing research on facial expression emotion recognition requires high computing power and long training times because it relies on large amounts of data to improve accuracy. To address these limitations, this paper proposes a method that uses age and gender, available as image meta information, to recognize facial expressions even from a small amount of data. For facial expression emotion recognition, faces were detected from the original image data with the Yolo Face model, age and gender were classified with a VGG model based on the image meta information, and seven emotions were then recognized with an EfficientNet model. Comparing the meta-information-based data classification model with a model trained on all the data showed that the proposed classification learning model achieved higher accuracy.
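The three-stage pipeline described above (face detection, meta classification, per-group emotion model) can be sketched structurally. Every function below is a hypothetical placeholder standing in for the Yolo Face, VGG, and EfficientNet models; only the routing structure is the point.

```python
# Stage 1: face detection (stand-in for the Yolo Face detector)
def detect_face(image):
    return image                      # placeholder: crop of the face region

# Stage 2: meta classification (stand-in for the VGG age/gender model)
def classify_meta(face):
    return ("20s", "female")          # placeholder: (age group, gender)

# Stage 3: one emotion model per (age group, gender) bucket, each
# trained on that bucket's subset instead of the full corpus.
emotion_models = {
    ("20s", "female"): lambda face: "happiness",   # placeholder model
    ("20s", "male"):   lambda face: "neutral",     # placeholder model
}

def recognize(image):
    face = detect_face(image)
    meta = classify_meta(face)
    return emotion_models[meta](face)

print(recognize("frame.jpg"))
```

Routing by meta information keeps each downstream model's training set small, which is the abstract's stated path to lower compute and training time.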

Analysis of Facial Movement According to Opposite Emotions (상반된 감성에 따른 안면 움직임 차이에 대한 분석)

  • Lee, Eui Chul;Kim, Yoon-Kyoung;Bea, Min-Kyoung;Kim, Han-Sol
    • The Journal of the Korea Contents Association / v.15 no.10 / pp.1-9 / 2015
  • In this paper, facial movements under opposite emotion stimuli are analyzed through image processing of Kinect facial images. To induce the two opposite emotion pairs "Sad - Excitement" and "Contentment - Angry", which lie at opposite positions on Russell's 2D emotion model, both visual and auditory stimuli were given to subjects. First, 31 main points were chosen among the 121 facial feature points of the active appearance model obtained from the Kinect Face Tracking SDK. Then, pixel changes around the 31 main points were analyzed, using a local minimum shift matching method to handle the problem of non-linear facial movement. As a result, right-side and left-side facial movements occurred in the "Sad" and "Excitement" emotions, respectively. Left-side facial movement occurred comparatively more in the "Contentment" emotion, whereas both left- and right-side movements occurred in the "Angry" emotion.

A Transformer-Based Emotion Classification Model Using Transfer Learning and SHAP Analysis (전이 학습 및 SHAP 분석을 활용한 트랜스포머 기반 감정 분류 모델)

  • Subeen Leem;Byeongcheon Lee;Insu Jeon;Jihoon Moon
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.706-708 / 2023
  • In this study, we apply transfer learning to three pre-trained transformer models to classify five emotions. The KLUE (Korean Language Understanding Evaluation)-BERT (Bidirectional Encoder Representations from Transformers) model performs best among them; its F1 scores indicate superior learning and generalization on the experimental data. To examine the basis of this performance, we apply the SHAP (Shapley Additive Explanations) method to the KLUE-BERT model and present the findings with a text plot visualization. This approach reveals the impact of individual tokens on emotion classification and provides clear visual evidence supporting the predictions of the KLUE-BERT model.