• Title/Summary/Keyword: Emotion detection

Emotion-based music visualization using LED lighting control system (LED조명 시스템을 이용한 음악 감성 시각화에 대한 연구)

  • Nguyen, Van Loi; Kim, Donglim; Lim, Younghwan
    • Journal of Korea Game Society, v.17 no.3, pp.45-52, 2017
  • This paper proposes a new strategy for emotion-based music visualization. An emotional LED lighting control system is suggested to help audiences enhance their musical experience. In the system, emotion in music is recognized by a proposed algorithm using a dimensional approach. The algorithm uses a method of music emotion variation detection to overcome some weaknesses of Thayer's model in detecting emotion in a one-second music segment. In addition, the IRI color model is combined with Thayer's model to determine LED light colors corresponding to 36 different music emotions, which are represented on the LED lighting control system through colors and animations. The accuracy of music emotion visualization reached over 60%.
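
A minimal sketch of the valence-arousal-to-color idea this abstract describes: a point on Thayer's plane is binned into one of 36 angular sectors and mapped to an LED color. The uniform hue-per-bin layout is an illustrative assumption; the paper's actual IRI color assignments are not reproduced here.

    import colorsys
    import math

    def emotion_to_led_color(valence, arousal, n_bins=36):
        # Direction on Thayer's valence-arousal plane picks one of 36 emotion bins;
        # the hue-per-bin mapping below is an assumption, not the IRI table.
        angle = math.atan2(arousal, valence) % (2 * math.pi)
        bin_index = int(angle / (2 * math.pi) * n_bins)
        hue = bin_index / n_bins
        intensity = min(1.0, math.hypot(valence, arousal))  # stronger emotion, brighter LED
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, intensity)
        return int(r * 255), int(g * 255), int(b * 255)

    print(emotion_to_led_color(0.8, 0.6))  # a high-valence, high-arousal ("excited") color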

Practical BioSignal analysis for Nausea detection in VR environment (가상현실환경에서 멀미 측정을 위한 생리신호 분석)

  • Park, M.J.; Kim, H.T.; Park, K.S.
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2002.11a, pp.267-268, 2002
  • We developed a system for detecting nausea, which is caused by a disorder of the autonomic nervous system, using bio-signal analysis and an artificial neural network in a virtual reality environment. We used 16 bio-signals (9 EEG channels plus EOG, ECG, SKT, PPG, GSR, RSP, and EGC), each with its own analysis method. We estimated the nausea level with an artificial neural network.
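
For the shape of this pipeline, here is a toy regression sketch, assuming one feature per bio-signal channel and a subjective nausea rating as the target; the network size and data are placeholders, not the authors' setup.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 16))    # 100 trials x 16 bio-signal features (9 EEG + 7 others)
    y = rng.uniform(0, 10, size=100)  # hypothetical nausea rating per trial

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X, y)
    print(model.predict(X[:3]))       # estimated nausea levels for three trials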

Speech and Textual Data Fusion for Emotion Detection: A Multimodal Deep Learning Approach (감정 인지를 위한 음성 및 텍스트 데이터 퓨전: 다중 모달 딥 러닝 접근법)

  • Edward Dwijayanto Cahyadi; Mi-Hwa Song
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.526-527, 2023
  • Speech emotion recognition (SER) is one of the most interesting topics in the machine learning field. By developing a multi-modal speech emotion recognition system, we can obtain numerous benefits. This paper explains how BERT as the text recognizer is fused with a CNN as the speech recognizer to build a multi-modal SER system.
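
A late-fusion sketch of the BERT-plus-CNN architecture the abstract names: a precomputed BERT sentence embedding is concatenated with a CNN embedding of a speech spectrogram before classification. All dimensions and the seven-emotion output are assumptions.

    import torch
    import torch.nn as nn

    class FusionSER(nn.Module):
        def __init__(self, n_emotions=7, bert_dim=768):
            super().__init__()
            self.speech_cnn = nn.Sequential(            # toy CNN over a spectrogram
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 16)
            )
            self.classifier = nn.Linear(bert_dim + 16, n_emotions)

        def forward(self, bert_embedding, spectrogram):
            speech_feat = self.speech_cnn(spectrogram)
            fused = torch.cat([bert_embedding, speech_feat], dim=1)  # modality fusion
            return self.classifier(fused)

    logits = FusionSER()(torch.randn(2, 768), torch.randn(2, 1, 64, 100))
    print(logits.shape)  # torch.Size([2, 7])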

1/f-LIKE FREQUENCY FLUCTUATION IN FRONTAL ALPHA WAVE AS AN INDICATOR OF EMOTION

  • Yoshida, Tomoyuki
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2000.04a, pp.99-103, 2000
  • There are two approaches in the study of emotion in physiological psychology. The first is to clarify the brain mechanism of emotion, and the second is to objectively evaluate emotions using physiological responses along with our felt experience. The method presented here belongs to the second. Our method is based on the "level-crossing point detection" method, which involves the analysis of frequency fluctuations of the EEG and is characterized by estimation of emotionality using the coefficients of slopes in the log-power spectra of frequency fluctuations in alpha waves over both the left and right frontal lobes. In this paper we introduce a new theory for estimating an individual's emotional state by using our non-invasive and easy-to-use measurement apparatus.
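
The slope estimation the abstract describes can be sketched as a line fit in log-log coordinates over the power spectrum of an alpha-frequency fluctuation series; the sampling rate and the white-noise example are assumptions, and the level-crossing preprocessing itself is not reproduced.

    import numpy as np

    def fluctuation_slope(instantaneous_freqs, fs=10.0):
        # Slope of the log-log power spectrum; a value near -1 indicates 1/f-like fluctuation.
        x = instantaneous_freqs - np.mean(instantaneous_freqs)
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        keep = freqs > 0  # drop the DC bin before taking logs
        slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(spectrum[keep]), 1)
        return slope

    # White-noise fluctuation should give a slope near 0 rather than -1.
    print(fluctuation_slope(np.random.default_rng(0).normal(10, 1, 512)))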

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye; Park, Byoung-Jun; Kim, Sang-Hyeob; Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea, v.31 no.2, pp.271-279, 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotion by using physiological signals; such recognition is important for applying emotion detection in human-computer interaction systems. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to participants, and physiological signals, i.e., EDA (Electrodermal Activity), SKT (Skin Temperature), PPG (Photoplethysmogram), and ECG (Electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds each from the baseline and the emotional state, and 27 features were extracted from these signals. Statistical analysis for emotion classification was done by DFA (discriminant function analysis) (SPSS 15.0) using the difference values obtained by subtracting the baseline values from the emotional-state values. Results: The results showed that physiological responses during emotional states differed significantly from baseline, and the accuracy rate of emotion classification was 84.7%. Conclusion: Our study identified that emotions can be classified from various physiological signals. However, future study is needed to obtain additional signals from other modalities, such as facial expression, face temperature, or voice, to improve the classification rate, and to examine the stability and reliability of this result compared with the accuracy of emotion classification using other algorithms. Application: This could give emotion recognition studies a better chance of recognizing various human emotions by using physiological signals, and it can be applied in human-computer interaction systems for emotion recognition. It can also be useful in developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis for emotion recognition systems in human-computer interaction.
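
The paper ran discriminant function analysis in SPSS; an equivalent open-source sketch uses scikit-learn's linear discriminant analysis on baseline-subtracted features. The data below are random placeholders with the paper's shapes (122 participants, 27 features, 3 emotions).

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    diffs = rng.normal(size=(122, 27))     # (emotional state - baseline) feature differences
    labels = rng.integers(0, 3, size=122)  # 0=boredom, 1=pain, 2=surprise

    lda = LinearDiscriminantAnalysis()
    print(cross_val_score(lda, diffs, labels, cv=5).mean())  # chance-level on random data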

Emotion Detection Model based on Sequential Neural Networks in Smart Exhibition Environment (스마트 전시환경에서 순차적 인공신경망에 기반한 감정인식 모델)

  • Jung, Min Kyu; Choi, Il Young; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.109-126, 2017
  • In various kinds of intelligent services, many studies on detecting emotion are in progress. In particular, studies on emotion recognition at a particular point in time have been conducted in order to provide personalized experiences to audiences in the exhibition field, since facial expressions change as time passes. The aim of this paper is therefore to build a model that predicts the audience's emotion from the changes of facial expressions while watching an exhibit. The proposed model is based on both a sequential neural network and the Valence-Arousal model. To validate the usefulness of the proposed model, we performed an experiment comparing its performance with that of a standard neural-network-based model. The results confirmed that the proposed model, which considers the time sequence, had better prediction accuracy.
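
The abstract specifies only "a sequential neural network" plus the Valence-Arousal model; one plausible reading, sketched below with assumed feature and hidden sizes, is a recurrent model over per-frame facial-expression features that outputs a (valence, arousal) pair.

    import torch
    import torch.nn as nn

    class SequentialEmotionModel(nn.Module):
        def __init__(self, feat_dim=32, hidden=64):
            super().__init__()
            self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)  # LSTM choice is an assumption
            self.head = nn.Linear(hidden, 2)       # valence and arousal

        def forward(self, frames):                 # frames: (batch, time, feat_dim)
            _, (h, _) = self.rnn(frames)
            return torch.tanh(self.head(h[-1]))    # both coordinates in [-1, 1]

    print(SequentialEmotionModel()(torch.randn(4, 30, 32)).shape)  # torch.Size([4, 2])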

LSTM Hyperparameter Optimization for an EEG-Based Efficient Emotion Classification in BCI (BCI에서 EEG 기반 효율적인 감정 분류를 위한 LSTM 하이퍼파라미터 최적화)

  • Aliyu, Ibrahim; Mahmood, Raja Majid; Lim, Chang-Gyoon
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.14 no.6, pp.1171-1180, 2019
  • Emotion is a psycho-physiological process that plays an important role in human interactions. Affective computing is centered on the development of human-aware artificial intelligence that can understand and regulate emotions. This field of study is also critical, as mental conditions such as depression, autism, attention deficit hyperactivity disorder, and game addiction are associated with emotion. Despite the efforts in emotion recognition and emotion detection from nonstationary signals, detecting emotions from abnormal EEG signals requires sophisticated learning algorithms because it demands a high level of abstraction. In this paper, we investigated LSTM hyperparameters for an optimal EEG emotion classification. Results of several experiments are presented, from which an optimal LSTM hyperparameter configuration was obtained.
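
A minimal version of such a hyperparameter sweep, assuming a tiny grid over hidden size and learning rate and using the final training loss as the selection score; the data, grid, and scoring are all placeholders for the paper's actual search.

    import itertools
    import torch
    import torch.nn as nn

    X = torch.randn(64, 128, 32)    # trials x time steps x EEG features (placeholder)
    y = torch.randint(0, 2, (64,))  # binary emotion labels (placeholder)

    def evaluate(hidden, lr):
        rnn, head = nn.LSTM(32, hidden, batch_first=True), nn.Linear(hidden, 2)
        opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=lr)
        for _ in range(20):  # a few training steps per configuration
            _, (h, _) = rnn(X)
            loss = nn.functional.cross_entropy(head(h[-1]), y)
            opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    best = min(itertools.product([16, 32], [1e-2, 1e-3]), key=lambda cfg: evaluate(*cfg))
    print("best (hidden_size, learning_rate):", best)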

Development of an Emotion Recognition Robot using a Vision Method (비전 방식을 이용한 감정인식 로봇 개발)

  • Shin, Young-Geun; Park, Sang-Sung; Kim, Jung-Nyun; Seo, Kwang-Kyu; Jang, Dong-Sik
    • IE interfaces, v.19 no.3, pp.174-180, 2006
  • This paper deals with a robot system that recognizes a human's expression from a detected human face and then shows the human's emotion. The face detection method is as follows. First, change the RGB color space to the CIELab color space. Second, extract skin candidate territory. Third, detect a face through facial geometrical interrelations using a face filter. Then, the positions of the eyes, nose, and mouth are extracted as the preliminary data of expression, from which the eyebrows, eyes, and mouth are used. The changes of the eyebrows, eyes, and mouth are sent to a robot through serial communication. The robot then operates an installed motor and shows the human's expression. Experimental results on 10 persons show 78.15% accuracy.
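
The skin-candidate step can be sketched as a chromaticity threshold in CIELab; the threshold values below are illustrative assumptions, not the paper's calibrated ranges.

    import cv2
    import numpy as np

    def skin_candidates(bgr_image):
        # Convert to CIELab and threshold the a/b chroma channels (8-bit, offset by 128).
        lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2Lab)
        _, a, b = cv2.split(lab)
        mask = (a > 135) & (a < 175) & (b > 130) & (b < 180)  # assumed skin range
        return mask.astype(np.uint8) * 255

    frame = np.zeros((240, 320, 3), dtype=np.uint8)  # placeholder camera frame
    print(skin_candidates(frame).shape)              # (240, 320) binary mask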

A Study on EEG-based RT Detection During a Yes/No Cognitive Decision Task (인지적 긍정/부정 선택과제 수행 시 뇌파를 이용한 반응시간의 감지)

  • 신승철; 남승훈; 류창수; 송윤선
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2002.05a, pp.278-285, 2002
  • This paper describes a method for detecting a subject's reaction time (RT) from EEG measured while the subject performs a cognitive yes/no decision task. In the experimental task, the subject carries out operations such as responding to a visual stimulus, interpreting the question, controlling hand movement, and executing the hand motion. We model the changes in the subject's mental state in this situation and predict the reaction time RT by detecting the selection time ST. To detect ST, alpha, beta, and gamma waves are separated from the measured EEG, and three kinds of features are extracted from four pairs of electrodes. By analyzing the extracted features, we establish detailed rules for each subject as well as general rules for characteristics common across subjects, and apply them. For four subjects, the average ST detection success rate was 81%, and RT appeared about 0.73 seconds after ST detection. When combined with existing methods for discriminating cognitive mental states or for distinguishing left/right hand movements, the proposed method is expected to serve as a base technology for BCI.
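
The alpha/beta/gamma separation step can be sketched with conventional band-pass filters; the band edges, sampling rate, and synthetic trace below are assumptions, and the paper's rule-based ST detection is not reproduced.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(signal, low, high, fs=256, order=4):
        # Zero-phase Butterworth band-pass over one electrode's trace.
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, signal)

    eeg = np.random.default_rng(0).normal(size=1024)  # placeholder EEG trace
    alpha, beta, gamma = bandpass(eeg, 8, 13), bandpass(eeg, 13, 30), bandpass(eeg, 30, 45)
    print(alpha.shape, beta.shape, gamma.shape)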
