• Title/Summary/Keyword: Emotion Classification

Search results: 299

CLASSIFICATION OF BINARY DECISION RESPONSES USING EEG (뇌파를 이용한 양분법적 판단반응의 분류)

  • 문성실;최상섭;류창수;손진훈
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1999.03a / pp.281-284 / 1999
  • This study was conducted as basic research toward implementing a brain-wave (EEG) interface by developing a technique for identifying simple expressions of intent from human EEG. Participants viewed a problem presented on a computer screen and, when shown an answer, had to make a binary judgment as to whether it was correct or incorrect while their EEG was recorded. Comparing the EEG of affirmative ("correct") and negative ("incorrect") responses showed that at the frontal sites Fp1, F3, and F4, the relative power of theta and fast alpha waves was statistically significantly greater for negative responses than for affirmative ones.

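The abstract above hinges on comparing the relative spectral power of theta and fast alpha waves between response conditions. As a rough illustration only (not the authors' code), the following Python sketch computes relative band power for a single-channel EEG segment with Welch's method; the sampling rate, band limits, and placeholder signals are assumptions.

```python
import numpy as np
from scipy.signal import welch

# Assumed band limits in Hz; the paper does not state exact boundaries.
BANDS = {"theta": (4.0, 8.0), "fast_alpha": (11.0, 13.0)}

def relative_band_power(segment, fs=256.0):
    """Band power divided by total 1-40 Hz power for one EEG channel segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=int(2 * fs))
    total = psd[(freqs >= 1.0) & (freqs <= 40.0)].sum()
    return {name: psd[(freqs >= lo) & (freqs <= hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# Placeholder "yes" and "no" trials for a frontal channel such as Fp1.
fs = 256.0
yes_trial = np.random.randn(int(4 * fs))
no_trial = np.random.randn(int(4 * fs))
print(relative_band_power(yes_trial, fs))
print(relative_band_power(no_trial, fs))
```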

The Effect of Personality on Psychological Responses Induced by Emotional Stimuli for Children

  • Jang, Eun Hye;Eum, Youngji;Kim, Suk-Hee;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.33 no.5 / pp.323-335 / 2014
  • Objective: The aim of this study is to identify the relationship between personality and the psychological responses induced by emotional stimuli (happiness, sadness, anger, boredom, and stress) in children. Background: Many studies have examined the claim that personality and emotion are closely correlated. Their relationship needs to be studied from an integrated perspective rather than as separate properties, because personality consists of deeply ingrained, relatively enduring patterns of thought, feeling, and behavior, while emotion reflects individual differences in sensitivity to situational cues and predispositions toward particular emotional states. Studies of personality and emotion in children are particularly necessary because childhood is an important period for the formation of personality and of emotion expression and regulation. Method: Prior to the experiment, the parents of 94 children rated their children's personalities using the Korean Personality Inventory for Children (K-PIC). Data from the 64 children with no missing answers were analyzed: these children were exposed to five emotional stimuli and asked to report the category and intensity of the emotion they experienced. Results: For each of the twenty K-PIC sub-scales, children were split into lower-25% and upper-25% score groups, and their psychological responses to the five stimuli were compared. The accuracy of the experienced emotion differed significantly between the two groups on the Hyperactivity and Adjustment scales, and the intensity of the experienced emotion differed significantly on the Intellectual Screening and Psychosis scales. Conclusion: Hyperactivity, adjustment, intellectual screening, and psychosis influence the accuracy and intensity of emotional responses. Application: This study offers a guideline for overcoming the methodological limitations of emotion studies with children and helps researchers understand and recognize human emotion in HCI.

A Study on the Dataset of the Korean Multi-class Emotion Analysis in Radio Listeners' Messages (라디오 청취자 문자 사연을 활용한 한국어 다중 감정 분석용 데이터셋연구)

  • Jaeah, Lee;Gooman, Park
    • Journal of Broadcast Engineering / v.27 no.6 / pp.940-943 / 2022
  • This study analyzes a Korean dataset for sentence-level emotion analysis built from radio listeners' text messages collected by the authors. Research on emotion analysis of Korean sentences is ongoing in Korea, but high accuracy is difficult to achieve because of the linguistic characteristics of Korean. In addition, much of the existing work addresses binary sentiment analysis, which allows only positive/negative classification, whereas multi-class emotion analysis, which classifies text into three or more emotions, requires further research. The Korean dataset itself therefore needs to be examined and analyzed to increase the accuracy of multi-class emotion analysis for Korean. In this paper, we analyze, through surveys and experiments conducted in the course of the emotion analysis, why Korean emotion analysis is difficult, and we propose a method for creating a dataset that can improve accuracy and serve as a basis for emotion analysis of Korean sentences.

Development of Sensibility Vocabulary Classification System for Sensibility Evaluation of Visitors According to Forest Environment

  • Lee, Jeong-Do;Joung, Dawou;Hong, Sung-Jun;Kim, Da-Young;Park, Bum-Jin
    • Journal of People, Plants, and Environment / v.22 no.2 / pp.209-217 / 2019
  • Human sensibility is generally expressed in language. To discover the sensibilities of visitors in relation to the forest environment, it is first necessary to determine the exact meanings of the words they use, and then to sort those words by meaning within an appropriate classification system. This study attempted to develop a classification system for forest sensibility vocabulary by extracting the Korean words that forest visitors use to express their sensibilities toward the forest environment and by establishing a structure for classifying the accumulated vocabulary. For this purpose, we extracted forest sensibility words from a literature review of previously reported experiences as well as interviews with forest visitors, and categorized the words by meaning using the Standard Korean Language Dictionary maintained by the National Institute of the Korean Language. The classification system was then established with reference to classification systems for Korean vocabulary examined in previous studies of Korean language and literature. As a result, 137 forest sensibility words were collected through the documentary survey, and we categorized them into four types: emotion, sense, evaluation, and existence. Categorizing the collected words within this Korean-language classification system yielded 40 representative sensibility words. This work allowed us to determine where the sensibilities we express in the forest originate, that is, in sight, hearing, smell, taste, or touch, along with other aspects of how they are expressed, such as whether the subject of a word is person-centered or object-centered. We believe the results of this study can serve as foundational data on forest sensibility.

Emotion Recognition Method of Competition-Cooperation Using Electrocardiogram (심전도를 이용한 경쟁-협력의 감성 인식 방법)

  • Park, Sangin;Lee, Don Won;Mun, Sungchul;Whang, Mincheol
    • Science of Emotion and Sensibility / v.21 no.3 / pp.73-82 / 2018
  • Attempts have been made to recognize social emotions, including competition and cooperation, when designing interaction in workplaces. This study aimed to determine the cardiac responses associated with classifying the social emotions of competition and cooperation. Sixty students from Sangmyung University participated and were asked to play a pattern game to experience the social emotions associated with competition and cooperation. Electrocardiograms were measured during the task and analyzed to obtain time-domain indicators, such as RRI, SDNN, and pNN50, and frequency-domain indicators, such as VLF, LF, HF, VLF/HF, LF/HF, lnVLF, lnLF, lnHF, and lnVLF/lnHF. The significance of these indicators for classifying the social emotions was assessed with an independent t-test. A rule base for classification was derived from the significant parameters of 30 participants and verified on data from another 30 participants, of whom 91.67% were correctly classified. This study proposes a new method of classifying the social emotions of competition and cooperation and provides objective data for designing social interaction.
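
The time-domain HRV indicators named in the abstract (RRI, SDNN, pNN50) have standard definitions, so a small sketch can show how they are typically computed from detected R-peak times. This is a generic illustration under assumed input data, not the study's analysis code.

```python
import numpy as np

def hrv_time_domain(r_peak_times_s):
    """Compute mean RRI (ms), SDNN (ms), and pNN50 (%) from R-peak times in seconds."""
    rri_ms = np.diff(np.asarray(r_peak_times_s)) * 1000.0   # successive R-R intervals
    successive_diff = np.abs(np.diff(rri_ms))
    return {
        "mean_RRI_ms": rri_ms.mean(),
        "SDNN_ms": rri_ms.std(ddof=1),                       # SD of all R-R intervals
        "pNN50_pct": 100.0 * np.mean(successive_diff > 50.0) # % of successive diffs > 50 ms
    }

# Hypothetical R-peak times (seconds) for a short recording.
r_peaks = np.cumsum(np.random.normal(0.85, 0.05, size=120))
print(hrv_time_domain(r_peaks))
```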

Study of Emotion Recognition based on Facial Image for Emotional Rehabilitation Biofeedback (정서재활 바이오피드백을 위한 얼굴 영상 기반 정서인식 연구)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.957-962 / 2010
  • To recognize human emotion from facial images, we first need to extract emotional features using a feature extraction algorithm and then classify the emotional state using a pattern classification method. The AAM (Active Appearance Model) is a well-known method for representing non-rigid objects such as faces and facial expressions, and the Bayesian network is a probability-based classifier that can represent the probabilistic relationships among a set of facial features. In this paper, our approach to facial feature extraction combines AAM with FACS (Facial Action Coding System) to automatically model and extract facial emotional features. To recognize facial emotion, we use DBNs (Dynamic Bayesian Networks) to model and understand the temporal phases of facial expressions in image sequences. The resulting emotion recognition can be used for biofeedback-based rehabilitation of people with emotional disabilities.
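
The paper pairs AAM/FACS feature extraction with dynamic Bayesian networks over image sequences. As a deliberately simplified stand-in for the DBN stage, the sketch below runs an HMM-style forward pass (the simplest dynamic Bayesian network) over per-frame emotion likelihoods; the three-emotion label set, prior, and transition matrix are all invented for illustration.

```python
import numpy as np

EMOTIONS = ["neutral", "happy", "sad"]           # illustrative label set
prior = np.array([0.6, 0.2, 0.2])                # P(emotion) at the first frame
transition = np.array([[0.8, 0.1, 0.1],          # P(emotion_t | emotion_{t-1})
                       [0.1, 0.8, 0.1],
                       [0.1, 0.1, 0.8]])

def forward_filter(frame_likelihoods):
    """HMM forward pass: frame_likelihoods[t, k] = P(features_t | emotion k)."""
    belief = prior * frame_likelihoods[0]
    belief /= belief.sum()
    for like in frame_likelihoods[1:]:
        belief = (transition.T @ belief) * like  # predict with the transition model, then update
        belief /= belief.sum()
    return belief

# Fake per-frame likelihoods, as if produced by an AAM/FACS feature classifier.
likes = np.array([[0.5, 0.4, 0.1],
                  [0.3, 0.6, 0.1],
                  [0.2, 0.7, 0.1]])
posterior = forward_filter(likes)
print(dict(zip(EMOTIONS, np.round(posterior, 3))))
```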

Emotion Classification Using EEG Spectrum Analysis and Bayesian Approach (뇌파 스펙트럼 분석과 베이지안 접근법을 이용한 정서 분류)

  • Chung, Seong Youb;Yoon, Hyun Joong
    • Journal of Korean Society of Industrial and Systems Engineering / v.37 no.1 / pp.1-8 / 2014
  • This paper proposes an emotion classifier for EEG signals based on Bayes' theorem and a machine learning method using the perceptron convergence algorithm. Emotions are represented on the valence and arousal dimensions, and fast Fourier transform spectrum analysis is used to extract features from the EEG signals. To verify the proposed method, we use an open database for emotion analysis using physiological signals (DEAP) and compare the classifier with C-SVC, one of the support vector machine variants. Emotions are defined as two-level and three-level classes in both the valence and arousal dimensions. For the two-level case, the accuracy of valence and arousal estimation is 67% and 66%, respectively; for the three-level case, it is 53% and 51%. Compared with the best case of the C-SVC, the proposed classifier gave 4% and 8% more accurate estimates of valence and arousal for the two-level classes, and showed similar performance to the best case of the C-SVC for the three-level classes.
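
To make the pipeline concrete, here is a hedged sketch of FFT band-power features fed to a Bayesian-style classifier and compared against C-SVC. It substitutes scikit-learn's GaussianNB for the paper's Bayes-plus-perceptron classifier and uses synthetic trials in place of DEAP data, so the band limits, trial shapes, and labels are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def band_power_features(trials, fs=128.0):
    """Per-trial log band power (theta/alpha/beta) from an FFT spectrum."""
    bands = [(4, 8), (8, 13), (13, 30)]                      # assumed band limits
    freqs = np.fft.rfftfreq(trials.shape[-1], d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(trials, axis=-1)) ** 2
    feats = [spectra[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in bands]
    return np.log(np.stack(feats, axis=1) + 1e-12)

# Placeholder EEG trials and binary valence labels (invented, DEAP-like in spirit only).
rng = np.random.default_rng(0)
trials = rng.standard_normal((80, 1008))                     # 80 trials of single-channel EEG
labels = rng.integers(0, 2, size=80)                         # low/high valence

X = band_power_features(trials)
for name, clf in [("Bayes-style (GaussianNB)", GaussianNB()), ("C-SVC", SVC(C=1.0))]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```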

Sentiment Analysis on 'HelloTalk' App Reviews Using NRC Emotion Lexicon and GoEmotions Dataset

  • Simay Akar;Yang Sok Kim;Mi Jin Noh
    • Smart Media Journal / v.13 no.6 / pp.35-43 / 2024
  • During the post-pandemic period, the interest in foreign language learning surged, leading to increased usage of language-learning apps. With the rising demand for these apps, analyzing app reviews becomes essential, as they provide valuable insights into user experiences and suggestions for improvement. This research focuses on extracting insights into users' opinions, sentiments, and overall satisfaction from reviews of HelloTalk, one of the most renowned language-learning apps. We employed topic modeling and emotion analysis approaches to analyze reviews collected from the Google Play Store. Several experiments were conducted to evaluate the performance of sentiment classification models with different settings. In addition, we identified dominant emotions and topics within the app reviews using feature importance analysis. The experimental results show that the Random Forest model with topics and emotions outperforms other approaches in accuracy, recall, and F1 score. The findings reveal that topics emphasizing language learning and community interactions, as well as the use of language learning tools and the learning experience, are prominent. Moreover, the emotions of 'admiration' and 'annoyance' emerge as significant factors across all models. This research highlights that incorporating emotion scores into the model and utilizing a broader range of emotion labels enhances model performance.
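
As an illustration of combining lexicon-based emotion scores with a Random Forest, the sketch below uses a tiny hand-made word list standing in for the NRC Emotion Lexicon / GoEmotions labels, plus invented reviews and satisfaction labels; it only mirrors the general approach, not the authors' feature set or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Tiny placeholder lexicon standing in for NRC/GoEmotions-style emotion labels.
LEXICON = {
    "love": "admiration", "great": "admiration", "helpful": "admiration",
    "ads": "annoyance", "crash": "annoyance", "spam": "annoyance",
}
EMOTIONS = ["admiration", "annoyance"]

def emotion_features(review):
    """Count lexicon hits per emotion category in a lower-cased review."""
    tokens = review.lower().split()
    return [sum(LEXICON.get(t) == emo for t in tokens) for emo in EMOTIONS]

# Invented reviews and satisfaction labels (1 = positive), for illustration only.
reviews = ["love this app great community", "too many ads and spam messages",
           "helpful partners great practice", "app crash after update annoying ads"]
labels = [1, 0, 1, 0]

X = np.array([emotion_features(r) for r in reviews])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(dict(zip(EMOTIONS, clf.feature_importances_)))   # importance of each emotion feature
```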

Detection of Character Emotional Type Based on Classification of Emotional Words at Story (스토리기반 저작물에서 감정어 분류에 기반한 등장인물의 감정 성향 판단)

  • Baek, Yeong Tae
    • Journal of the Korea Society of Computer and Information / v.18 no.9 / pp.131-138 / 2013
  • In this paper, I propose and evaluate a method that classifies characters' emotional types based on the emotional words they speak. Emotional types are classified into three categories: positive, negative, and neutral. I propose a method to extract emotional words based on WordNet and to represent them as emotional vectors. WordNet is a thesaurus with a network structure in which words are connected by hypernym, hyponym, synonym, and antonym relations, among others. An emotional word is extracted by calculating its emotional distance to each of 30 emotional categories, so each emotional vector has 30 dimensions. When all the emotional vectors of a character are accumulated, that character's emotion over a movie can be represented as a single emotional vector. The thirty emotional categories can in turn be grouped into the three elements of positive, negative, and neutral, so a character's emotion can be represented by the values of these three elements. The proposed method was evaluated on 12 characters from four movies and showed an accuracy of 75%.
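
A minimal sketch of the WordNet-distance idea, assuming NLTK's WordNet corpus is available (nltk.download('wordnet')): each word is scored against a few seed emotion categories via path similarity. The paper uses 30 emotional categories; the three seeds here are illustrative only.

```python
from nltk.corpus import wordnet as wn

CATEGORY_SEEDS = {"joy": "joy", "anger": "anger", "fear": "fear"}   # illustrative subset

def emotion_vector(word):
    """Similarity of `word` to each emotion category via WordNet path similarity."""
    word_synsets = wn.synsets(word)
    vec = {}
    for category, seed in CATEGORY_SEEDS.items():
        seed_synsets = wn.synsets(seed)
        sims = [s1.path_similarity(s2) or 0.0                 # None for cross-POS pairs
                for s1 in word_synsets for s2 in seed_synsets]
        vec[category] = max(sims, default=0.0)
    return vec

print(emotion_vector("delight"))   # expected to score closest to the "joy" category
```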

Implementation of the Speech Emotion Recognition System in the ARM Platform (ARM 플랫폼 기반의 음성 감성인식 시스템 구현)

  • Oh, Sang-Heon;Park, Kyu-Sik
    • Journal of Korea Multimedia Society / v.10 no.11 / pp.1530-1537 / 2007
  • In this paper, we implemented a speech emotion recognition system that distinguishes human emotional states from speech captured by a single microphone and classifies them into four categories: neutrality, happiness, sadness, and anger. In general, speech recorded with a microphone contains background noise due to the speaker's environment and the microphone's characteristics, which can seriously degrade system performance. To minimize the effect of this noise and improve performance, an MA (moving average) filter with a relatively simple structure and low computational complexity was adopted, and an SFS (sequential forward selection) feature optimization method was implemented to further improve and stabilize performance. An SVM pattern classifier is used for speech emotion classification. The experimental results indicate emotion classification performance of around 65% in computer simulation and 62% on the ARM platform.
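
Below is a hedged sketch of the two steps named in the abstract: a simple moving-average filter and sequential forward selection wrapped around an SVM (scikit-learn's SequentialFeatureSelector as a stand-in for the paper's SFS). The feature matrix, labels, and window length are invented placeholders, not the paper's data or parameters.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector

def moving_average(signal, window=5):
    """Simple MA filter for suppressing background noise in a raw waveform."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(1)

# Denoising demo on a placeholder noisy waveform.
noisy = np.sin(np.linspace(0, 6, 100)) + 0.3 * rng.standard_normal(100)
smoothed = moving_average(noisy)

# Invented feature matrix standing in for speech features, with four emotion labels
# (0 = neutral, 1 = happy, 2 = sad, 3 = angry).
X = rng.standard_normal((200, 20))
y = rng.integers(0, 4, size=200)

# Sequential forward selection wrapped around an SVM classifier.
svm = SVC(kernel="rbf", C=1.0)
sfs = SequentialFeatureSelector(svm, n_features_to_select=8, direction="forward")
sfs.fit(X, y)
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
```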
