• Title/Summary/Keyword: emotion engineering

SYMMER: A Systematic Approach to Multiple Musical Emotion Recognition

  • Lee, Jae-Sung;Jo, Jin-Hyuk;Lee, Jae-Joon;Kim, Dae-Won
    • International Journal of Fuzzy Logic and Intelligent Systems / v.11 no.2 / pp.124-128 / 2011
  • Music emotion recognition is currently one of the most attractive research areas in music information retrieval. To let emotion serve as a clue when searching for a particular piece of music, several music-based emotion recognition systems have been built, and their recognition accuracy is critical to maximizing user satisfaction. In this paper, we develop a new music emotion recognition system that employs a multi-label feature selector and a multi-label classifier. The performance of the proposed system is demonstrated on a novel musical emotion dataset.
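
The abstract gives no implementation details, so below is a minimal binary-relevance sketch of a multi-label pipeline in Python, with per-label univariate feature selection standing in for the paper's multi-label feature selector. The data, label count, and feature dimensions are all hypothetical.

```python
# Binary-relevance sketch of multi-label music emotion recognition.
# X and Y are simulated stand-ins for audio features and emotion labels.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))                 # e.g., timbre/rhythm features per clip
Y = (rng.random((200, 4)) < 0.3).astype(int)   # 4 binary emotion labels per clip

# One selector + classifier per emotion label (binary relevance).
models = []
for j in range(Y.shape[1]):
    model = Pipeline([
        ("select", SelectKBest(f_classif, k=20)),  # univariate selection per label
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X, Y[:, j])
    models.append(model)

# Predict the full emotion label set for a new clip.
x_new = rng.normal(size=(1, 60))
pred = np.array([m.predict(x_new)[0] for m in models])
print("predicted emotion labels:", pred)
```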

Emotion Recognition Implementation with Multimodalities of Face, Voice and EEG

  • Udurume, Miracle;Caliwag, Angela;Lim, Wansu;Kim, Gwigon
    • Journal of Information and Communication Convergence Engineering / v.20 no.3 / pp.174-180 / 2022
  • Emotion recognition is an essential component of complete interaction between humans and machines. The difficulty of emotion recognition stems from the different forms in which emotion is expressed, such as visual, sound, and physiological signals. Recent advances in the field show that combined modalities, such as visual, voice, and electroencephalography signals, lead to better results than single modalities used separately. Previous studies have explored the use of multiple modalities for accurate prediction of emotion; however, studies on real-time implementation are limited because of the difficulty of implementing multiple modalities of emotion recognition simultaneously. In this study, we propose an emotion recognition system for real-time implementation. Our model is built on a multithreading block that runs each modality in a separate thread for continuous synchronization. First, we achieved emotion recognition for each modality separately before enabling the multithreaded system. To verify correctness, we compared the accuracy of unimodal and multimodal emotion recognition in real time. The experimental results demonstrated real-time user emotion recognition with the proposed model and confirmed the effectiveness of the multiple modalities: the multimodal model obtained an accuracy of 80.1%, compared with unimodal accuracies of 70.9%, 54.3%, and 63.1%.
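
As a rough illustration of the multithreading block described above, the following Python sketch runs a stand-in recognizer per modality in its own thread and fuses the latest predictions from a shared queue. The modality workers and the majority-vote fusion rule are placeholders, not the authors' models.

```python
# Each modality runs in its own thread and pushes predictions into a
# shared queue; a fusion loop combines the latest result per modality.
import queue
import threading
import time
import random

results = queue.Queue()

def modality_worker(name):
    # Stand-in for a per-modality recognizer (face, voice, or EEG).
    while True:
        emotion = random.choice(["happy", "sad", "angry", "neutral"])
        results.put((name, emotion, time.time()))
        time.sleep(0.5)   # simulated inference period

for name in ("face", "voice", "eeg"):
    threading.Thread(target=modality_worker, args=(name,), daemon=True).start()

# Fusion loop: keep the latest prediction from each modality and combine.
latest = {}
t_end = time.time() + 3
while time.time() < t_end:
    name, emotion, ts = results.get()
    latest[name] = emotion
    if len(latest) == 3:
        # Naive majority vote as a placeholder for the paper's fusion rule.
        fused = max(set(latest.values()), key=list(latest.values()).count)
        print("fused emotion:", fused, latest)
```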

Use of Word Clustering to Improve Emotion Recognition from Short Text

  • Yuan, Shuai;Huang, Huan;Wu, Linjing
    • Journal of Computing Science and Engineering / v.10 no.4 / pp.103-110 / 2016
  • Emotion recognition is an important component of affective computing and is significant for implementing natural and friendly human-computer interaction. An effective approach to recognizing emotion from text is based on machine learning, which treats emotion recognition as a classification problem. However, the texts involved in emotion recognition are usually very short, producing a very large, sparse feature space that degrades classification performance. This paper proposes to resolve the feature-sparseness problem and substantially improve emotion recognition performance on short texts by representing short texts with word-cluster features, offering a novel word clustering algorithm, and using a new feature weighting scheme. Emotion classification experiments were performed with different features and weighting schemes on a publicly available dataset. The experimental results suggest that the word-cluster features and the proposed weighting scheme can partly resolve the feature-sparseness problem and improve emotion recognition performance.
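
To make the word-cluster idea concrete, here is a rough Python sketch that clusters placeholder word embeddings with plain k-means and represents a short text as counts over clusters; the paper's own clustering algorithm and weighting scheme are custom and are not reproduced here.

```python
# Word-cluster features for short texts: map each word to a cluster of
# similar words, then count clusters instead of individual words.
import numpy as np
from sklearn.cluster import KMeans

vocab = ["happy", "joy", "smile", "sad", "cry", "tears", "angry", "rage"]
rng = np.random.default_rng(1)
vectors = rng.normal(size=(len(vocab), 50))   # placeholder word embeddings

k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)
word2cluster = dict(zip(vocab, km.labels_))

def text_to_cluster_features(text):
    # Represent a short text as counts over word clusters, shrinking the
    # sparse word-level feature space to k dimensions.
    feats = np.zeros(k)
    for w in text.lower().split():
        if w in word2cluster:
            feats[word2cluster[w]] += 1
    return feats

print(text_to_cluster_features("happy tears and a smile"))
```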

Design of Hybrid Unsupervised-Supervised Classifier for Automatic Emotion Recognition (자동 감성 인식을 위한 비교사-교사 분류기의 복합 설계)

  • Lee, JeeEun;Yoo, Sun K.
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.9 / pp.1294-1299 / 2014
  • Emotion deeply affects human behavior and cognitive processes, so research on emotion is important. However, emotion is hard to characterize because life patterns differ with individual characteristics. To address this, we use physiological signals for objective analysis together with a hybrid unsupervised-supervised learning classifier for automatic emotion detection. The hybrid emotion classifier is composed of K-means clustering, a genetic algorithm, and a support vector machine (SVM). We acquired four kinds of physiological signals, namely electroencephalography (EEG), electrocardiography (ECG), galvanic skin response (GSR), and skin temperature (SKT), and extracted 15 features for the hybrid classifier. As a result, the hybrid emotion classifier (80.6%) shows better performance than the SVM alone (31.3%).
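
A minimal sketch of the hybrid idea follows, with K-means cluster membership appended to the feature vector before SVM training; the genetic-algorithm stage is omitted, and the signals and labels are simulated rather than taken from the paper.

```python
# Hybrid unsupervised-supervised sketch: unsupervised K-means structure
# is fed to a supervised SVM as an extra feature.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 15))            # 15 features from EEG/ECG/GSR/SKT
y = rng.integers(0, 2, size=120)          # hypothetical binary emotion labels

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
X_aug = np.hstack([X, km.labels_.reshape(-1, 1)])  # cluster id as extra feature

svm = SVC(kernel="rbf").fit(X_aug, y)
print("training accuracy:", svm.score(X_aug, y))
```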

Neural-network based Computerized Emotion Analysis using Multiple Biological Signals (다중 생체신호를 이용한 신경망 기반 전산화 감정해석)

  • Lee, Jee-Eun;Kim, Byeong-Nam;Yoo, Sun-Kook
    • Science of Emotion and Sensibility / v.20 no.2 / pp.161-170 / 2017
  • Emotion affects many parts of human life, such as learning ability, behavior, and judgment, so understanding it is important to understanding human nature. Emotion cannot be observed directly; it can only be inferred from facial expressions or gestures. In particular, emotion is difficult to classify, not only because individuals feel emotions differently but also because visually induced emotion does not persist throughout the whole testing period. To address this, we acquired bio-signals and extracted features from them, which offer objective information about the emotion stimulus. The emotion pattern classifier was composed of an unsupervised learning algorithm with hidden nodes and feature vectors. A restricted Boltzmann machine (RBM), based on probability estimation, was used for the unsupervised learning to map emotion features into a transformed dimension. The emotion was then characterized by a non-linear classifier with hidden nodes, a multi-layer neural network known as a deep belief network (DBN). The accuracy of the DBN (about 94%) was better than that of a back-propagation neural network (about 40%); the DBN thus showed good performance as an emotion pattern classifier.
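
As a loose analogue of the RBM-to-classifier pipeline (not the authors' full DBN), the following sketch stacks scikit-learn's BernoulliRBM in front of a logistic-regression layer on simulated bio-signal features; a real DBN would stack several RBM layers and fine-tune them jointly.

```python
# RBM learns a hidden representation; a supervised layer classifies it.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(3)
X = rng.random((150, 32))                 # bio-signal features scaled to [0, 1]
y = rng.integers(0, 4, size=150)          # hypothetical emotion classes

dbn_like = Pipeline([
    ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20,
                         random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn_like.fit(X, y)
print("training accuracy:", dbn_like.score(X, y))
```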

The Influence of Engineering Students' Emotional Regulation Strategies on Interpersonal Conflict Coping Strategies (공과대학생의 정서조절전략이 대인관계 갈등대처전략에 미치는 영향)

  • Choi, Jung Ah
    • Journal of Engineering Education Research / v.27 no.1 / pp.50-62 / 2024
  • This study examined how specific emotion regulation strategies function within the interpersonal conflict coping strategies of engineering students. For this purpose, scales measuring interpersonal conflict coping strategies and emotion regulation strategies were administered to 548 engineering students, and multiple regression analysis was conducted. Among the emotion regulation strategies, the "return to body" strategy was related to understanding, validation, focusing, and the "stop action" strategy; in particular, the "stop action" strategy was closely related only to the "return to body" strategy. Among the interpersonal conflict coping strategies, the dominating strategy drew on both positive emotion regulation strategies, such as high refocus on planning, and negative ones, such as other-blame. In addition, among the negative conflict coping strategies, aggression and negative emotional expression, which appear to have similar attributes, were both marked by high difficulty with emotional clarity. Negative emotional expression, however, was characterized by a lack of putting into perspective and high other-blame, whereas the aggression strategy showed different characteristics, such as high self-blame and low return to body. By investigating the relationship between interpersonal conflict coping strategies and specific emotion regulation strategies, this study provides implications for education and intervention regarding which emotion regulation strategies should be cultivated so that engineering students can improve their interpersonal conflict resolution capabilities.

Emotion sharing system of RESTful-based using emotion information and location information of the users (사용자의 위치정보와 감성정보를 이용한 RESTful방식의 감성공유 시스템)

  • Jung, Junho;Kim, Dong Keun
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.1 / pp.162-168 / 2014
  • In this study, we propose an emotion sharing system that shares changes in users' emotions according to the locations where those emotions were recorded. The system consists of an emotion sharing server and a mobile smartphone app. The smartphone app displays the emotional state and location of users who want to share emotions on a map service based on the Google Maps API. The emotion sharing server was implemented in a RESTful style so that emotions can be shared across a variety of platforms besides mobile. Emotion information exchanged with the server is stored in an XML format. We confirmed that the system can share changing emotions according to the user's location through the map service.
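
A minimal sketch of such a RESTful endpoint is shown below, using Flask; the route names, JSON fields, and XML schema are hypothetical stand-ins for the paper's actual interface.

```python
# Minimal RESTful emotion-sharing server: clients POST emotion + location,
# and any platform can GET the shared list as XML.
from flask import Flask, request, Response

app = Flask(__name__)
store = []   # in-memory stand-in for the server's database

@app.route("/emotions", methods=["POST"])
def post_emotion():
    # Client sends its current emotion and GPS coordinates as JSON.
    data = request.get_json()
    store.append(data)
    return Response(status=201)

@app.route("/emotions", methods=["GET"])
def get_emotions():
    # Return shared emotions as XML so non-mobile platforms can consume them.
    items = "".join(
        f"<entry><emotion>{d['emotion']}</emotion>"
        f"<lat>{d['lat']}</lat><lng>{d['lng']}</lng></entry>"
        for d in store
    )
    return Response(f"<emotions>{items}</emotions>", mimetype="application/xml")

if __name__ == "__main__":
    app.run(port=5000)
```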

A Study on the Relationship between Color and Cardiovascular Parameters (색채 감성에 대한 심혈관 변수 관계성에 대한 연구)

  • Cho, Ayoung;Woo, Jincheol;Lee, Hyunwoo;Jo, Youngho;Whang, Mincheol
    • Science of Emotion and Sensibility / v.20 no.4 / pp.127-134 / 2017
  • Color is a significant factor in evoking human emotion, so the effects of color have been analyzed to predict and evaluate emotional responses. The purpose of this study was to measure cardiovascular responses to color stimuli in order to observe differences in color emotion. Images of six colors (red, green, blue, cyan, magenta, yellow) were used as visual stimuli. Twenty-six college or graduate students (13 male) watched the color stimuli on a monitor and rated their subjective emotion while electrocardiogram (ECG) was measured. The effects of color on emotion were tested with the Kruskal-Wallis test and the Mann-Whitney U test. The coherence ratio showed significant differences between green and magenta (p = .004), green and red (p = .006), and green and yellow (p = .004). These significant cardiovascular differences were related to emotional valence. The study is significant as empirical evidence that green induces pleasant emotion and red induces unpleasant emotion.
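
For readers who want to reproduce the statistical comparison, the following sketch applies the same tests with SciPy on simulated per-color coherence-ratio samples; the numbers are made up, not the study's data.

```python
# Kruskal-Wallis omnibus test across color conditions, then a pairwise
# Mann-Whitney U follow-up, on simulated coherence-ratio samples.
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(4)
green = rng.normal(0.60, 0.1, 26)
red = rng.normal(0.45, 0.1, 26)
magenta = rng.normal(0.44, 0.1, 26)

h, p = kruskal(green, red, magenta)
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

u, p_gr = mannwhitneyu(green, red)
print(f"Mann-Whitney U (green vs red): U={u:.1f}, p={p_gr:.4f}")
```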

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.2 / pp.105-110 / 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need technologies that recognize emotion as humans do, using combined information. In this paper, we recognize five emotions (neutral, happiness, anger, surprise, sadness) from speech signals and facial images, and we propose a multimodal method that fuses the individual results into a single emotion recognition result. Both the speech-signal and facial-image recognizers use Principal Component Analysis (PCA), and the multimodal stage fuses their results with a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, better than either the facial image or the speech signal alone.
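
The following sketch illustrates decision fusion with an S-type (smoothstep) membership function on hypothetical per-modality class scores; the membership parameters and fusion weights are guesses, not the paper's values.

```python
# Decision fusion: map each modality's class scores through an S-shaped
# membership function, then combine them with a weighted sum.
import numpy as np

def s_membership(x, a=0.3, b=0.7):
    # Smooth S-shaped curve rising from 0 (below a) to 1 (above b).
    y = np.clip((x - a) / (b - a), 0, 1)
    return 3 * y**2 - 2 * y**3

emotions = ["neutral", "happiness", "anger", "surprise", "sadness"]
speech_scores = np.array([0.2, 0.7, 0.4, 0.3, 0.5])   # hypothetical PCA scores
face_scores = np.array([0.3, 0.5, 0.6, 0.2, 0.4])

# The weighted sum favors speech, which performed better in the paper.
fused = 0.6 * s_membership(speech_scores) + 0.4 * s_membership(face_scores)
print("fused emotion:", emotions[int(np.argmax(fused))])
```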

Emotion Adjustment Method for Diverse Expressions of Same Emotion Depending on Each Character's Characteristics (캐릭터 성격에 따른 동일 감정 표현의 다양화를 위한 감정 조정 방안)

  • Lee, Chang-Sook;Um, Ky-Hyun;Cho, Kyung-Eun
    • Journal of Korea Game Society / v.10 no.2 / pp.37-47 / 2010
  • Along with language, emotion is an effective means of expression: by expressing emotion as well as speaking, we deliver our message better. Because each person expresses the same emotion differently, emotional expression is a useful gauge of individual personality. To avoid monotonous emotional expression in virtual characters, it is therefore necessary to adjust the creation and deletion of the same emotion according to each character's personality. This paper defines the personality characteristics that affect each emotion and proposes a method to adjust the emotions accordingly. The relationship between a particular emotion and personality characteristics is defined by matching the significance of specified personality characteristics with their lexical meaning. In addition, using the Raw Score, the weight needed for the adjustment, continuation, and deletion of each emotion is defined, and the emotion is adjusted accordingly. When the same emotion was adjusted using actual personality test data, different results were observed for different personalities. The work uses the NEO Personality Inventory (NEO-PI), which consists of 5 broad domains and 30 sub-domains.
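
As a toy illustration of personality-dependent adjustment of the same emotion (not the paper's actual Raw Score formula), the following sketch scales an emotion's intensity and decay rate from two NEO-PI-style domain scores; the weighting rule and score ranges are invented for the example.

```python
# Adjust the same emotion differently per character personality.
def adjust_emotion(intensity, personality):
    # Illustrative rule: neurotic characters amplify negative emotion;
    # extraverted characters let it decay faster.
    gain = 1.0 + 0.5 * personality["neuroticism"]
    decay = 0.5 + 0.5 * personality["extraversion"]
    adjusted = min(1.0, intensity * gain)
    return adjusted, decay

calm = {"neuroticism": 0.2, "extraversion": 0.8}
anxious = {"neuroticism": 0.9, "extraversion": 0.3}

# The same base anger (0.6) yields different expression per character.
for name, p in (("calm", calm), ("anxious", anxious)):
    level, decay = adjust_emotion(0.6, p)
    print(f"{name}: anger level {level:.2f}, decay rate {decay:.2f}")
```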