• Title/Summary/Keyword: Emotion Model

Analyzing and classifying emotional flow of story in emotion dimension space (정서 차원 공간에서 소설의 지배 정서 분석 및 분류)

  • Rhee, Shin-Young;Ham, Jun-Seok;Ko, Il-Ju
    • Korean Journal of Cognitive Science
    • /
    • v.22 no.3
    • /
    • pp.299-326
    • /
    • 2011
  • Texts such as stories, blog posts, chat messages, and reviews have an overall emotional flow. By comparing the similarity between texts, a text can be grouped with texts that have a similar emotional flow, which is useful for applications such as recommendation and opinion collection. In this paper, we sequentially extract emotion terms from a text and analyze them in the pleasantness-unpleasantness and activation dimensions in order to identify the text's emotional flow. To analyze the 'dominant emotion', the overall emotional flow of the text, we add time as a dimension representing the sequential flow of the text and analyze the emotional flow in a three-dimensional space: pleasantness-unpleasantness, activation, and time. We also propose a classification method that computes the similarity of emotional flows using the Euclidean distance in this three-dimensional space. With the proposed method, we analyze the dominant emotion in modern Korean short stories and group stories with similar dominant emotions.

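The Euclidean-distance similarity described above can be sketched in a few lines; the trajectory representation and the resampling onto a common time grid are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def trajectory_distance(a, b, n_steps=20):
    """Compare two emotional trajectories in (pleasantness, activation, time) space.

    a, b: sequences of (pleasantness, activation) points sampled along a text.
    Each trajectory is resampled to n_steps so the time axes line up, then the
    summed per-step Euclidean distance is returned (smaller = more similar flow).
    """
    def resample(traj):
        traj = np.asarray(traj, dtype=float)
        t_old = np.linspace(0.0, 1.0, len(traj))
        t_new = np.linspace(0.0, 1.0, n_steps)
        # Interpolate each emotion dimension onto the common time grid.
        return np.column_stack([np.interp(t_new, t_old, traj[:, d]) for d in range(2)])

    ra, rb = resample(a), resample(b)
    return float(np.linalg.norm(ra - rb, axis=1).sum())

# Two stories with similar flows score closer than dissimilar ones.
calm  = [(0.8, 0.2), (0.7, 0.3), (0.6, 0.2)]
calm2 = [(0.7, 0.2), (0.7, 0.25), (0.65, 0.2)]
tense = [(-0.6, 0.9), (-0.8, 0.95), (-0.7, 0.8)]
assert trajectory_distance(calm, calm2) < trajectory_distance(calm, tense)
```

Grouping stories then reduces to clustering or nearest-neighbor search under this distance.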

Emotion Recognition using Robust Speech Recognition System (강인한 음성 인식 시스템을 사용한 감정 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.5
    • /
    • pp.586-591
    • /
    • 2008
  • This paper studies an emotion recognition system combined with a robust speech recognition system in order to improve emotion recognition performance. For this purpose, the effect of emotional variation on the speech recognition system and robust feature parameters for speech recognition were studied using a speech database containing various emotions. Final emotion recognition is performed using the input utterance and the emotional model corresponding to the speech recognition result. In the experiments, the robust speech recognition system is an HMM-based speaker-independent word recognizer using RASTA mel-cepstral coefficients and their derivatives, with cepstral mean subtraction (CMS) for signal bias removal. Experimental results show that the emotion recognizer combined with the speech recognition system performs better than the emotion recognizer alone.
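Cepstral mean subtraction, used above for signal bias removal, is straightforward to sketch; the feature shapes below are illustrative assumptions.

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Remove a stationary channel bias from cepstral features.

    cepstra: array of shape (n_frames, n_coeffs), e.g. RASTA mel-cepstral
    coefficients. Subtracting the per-utterance mean of each coefficient
    cancels a constant convolutional distortion introduced by the channel.
    """
    cepstra = np.asarray(cepstra, dtype=float)
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# A constant bias added to every frame disappears after CMS.
frames = np.random.randn(100, 13)
biased = frames + 0.7                       # simulated channel offset
assert np.allclose(cepstral_mean_subtraction(frames),
                   cepstral_mean_subtraction(biased))
```

Because the bias is estimated per utterance, CMS also makes features from different recording conditions more comparable, which is what makes the recognizer "robust" to channel variation.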

Effects of Multisensory Cues, Self-Enhancing Imagery and Self Goal-Achievement Emotion on Purchase Intention

  • CHOI, Nak-Hwan;QIAO, Xinxin;WANG, Li
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.7 no.1
    • /
    • pp.141-151
    • /
    • 2020
  • This research studied the role of self-enhancing imagery and self goal-achievement emotion in the effect on purchase intention of characteristics perceived in advertisements using multisensory cues. A sports shoes advertisement was selected as the empirical research object, and a questionnaire survey was used to collect data. The 'WenJuanXing' site was used to build the questionnaire in Chinese, which was distributed on WeChat and QQ; 260 participants from different regions of China took part in the online survey. The results of testing the hypotheses with a structural equation model in Amos 21.0 are summarized as follows. The congruency between multisensory cues and self-discrepancy awareness positively evoked both self-enhancing imagery and self goal-achievement emotion. The object relevance between the consumer and the advertised product did not induce the emotion, but did evoke self-enhancing imagery. Both self-enhancing imagery and self goal-achievement emotion had positive effects on product purchase intention. When developing advertisements, marketers should focus on the characteristics of multisensory cues to enhance self-enhancing imagery and to help consumers feel goal-achievement emotion, and should pay attention to ways in which the multisensory cues used in an advertisement can be perceived by consumers as congruent with each other.

Multimodal Emotion Recognition using Face Image and Speech (얼굴영상과 음성을 이용한 멀티모달 감정인식)

  • Lee, Hyeon Gu;Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.1
    • /
    • pp.29-40
    • /
    • 2012
  • A challenging research issue of growing importance in human-computer interaction is endowing machines with emotional intelligence. Emotion recognition technology therefore plays an important role in human-computer interaction research, enabling more natural, human-like communication between humans and computers. In this paper, we propose a multimodal emotion recognition system that uses face and speech to improve recognition performance. For face-based emotion recognition, a distance measure is calculated by applying 2D-PCA to MCS-LBP images with a nearest-neighbor classifier; for speech-based emotion recognition, a likelihood measure is obtained from a Gaussian mixture model based on pitch and mel-frequency cepstral coefficient features. The individual matching scores obtained from face and speech are combined using a weighted summation, and the fused score is used to classify the emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% compared to the uni-modal approaches, confirming that it achieves a significant performance improvement.
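The weighted-summation fusion of the two matchers' scores can be sketched as follows; the min-max normalization and the weight value are illustrative assumptions (the face matcher yields a distance, so its score is inverted before combining).

```python
import numpy as np

def fuse_scores(face_dist, speech_loglik, w_face=0.5):
    """Fuse per-emotion face and speech scores by weighted summation.

    face_dist: per-emotion distances from the face matcher (smaller = better).
    speech_loglik: per-emotion log-likelihoods from the GMMs (larger = better).
    Both are min-max normalized to [0, 1], with the face distance inverted,
    then combined as w_face * face + (1 - w_face) * speech.
    """
    def minmax(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())

    face_score = 1.0 - minmax(face_dist)     # small distance -> high score
    speech_score = minmax(speech_loglik)
    return w_face * face_score + (1.0 - w_face) * speech_score

emotions = ["anger", "happiness", "sadness", "neutral"]
fused = fuse_scores([4.1, 1.2, 3.8, 2.9], [-210.0, -180.0, -240.0, -205.0])
print(emotions[int(np.argmax(fused))])       # -> happiness
```

The classified emotion is simply the argmax of the fused score vector; the weight can be tuned on a validation set to favor the more reliable modality.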

Effects of Performance, Imagery and Regulatory Focus on Customer Engagement

  • Choi, Nak-Hwan;Nguyen, Quynh Mai;Teng, Zhuoqi
    • Journal of Distribution Science
    • /
    • v.17 no.1
    • /
    • pp.57-72
    • /
    • 2019
  • Purpose - This study investigated customer experience types (gain vs. loss-avoidance performance experience and hedonic vs. reliability imagery experience) and their influence on satisfaction and positive emotion as antecedents of customer engagement. It also explored the moderating role of regulatory focus in the influence of each experience type on satisfaction and positive emotion. Research design, data, and methodology - 416 Vietnamese local tourists were selected to test the hypotheses with a structural equation model in AMOS 21.0. Results - First, customers who actually achieve a gain or avoid a loss are more satisfied. Second, customers with hedonic and reliability imagery experiences feel more positive emotion. Third, both positive emotion and satisfaction positively influence customer engagement. Last, regulatory focus moderates the positive effects of gain or loss-avoidance performance experience on satisfaction, and also moderates the positive effects of hedonic or reliability imagery experience on positive emotion. Conclusions - By focusing on both cognitive satisfaction and affective emotion resulting from experience, this study advances customer engagement theory. Managerially, brand managers should induce gain performance and hedonic imagery experiences (loss-avoidance performance and reliability imagery experiences) in promotion-focused (prevention-focused) customers to enhance their engagement.

The Relationship between Calling and Posttraumatic Growth of the Air Force Pilot - Mediating Effect of Cognitive Emotion Regulation and Moderating Effect of Transformational Leadership - (공군 조종사의 소명의식과 외상 후 성장의 관계 - 인지적 정서조절의 매개효과와 변혁적 리더십의 조절효과 -)

  • Lee, A Ram;Sohn, Young Woo;Seol, Jeong Hoon
    • Journal of the Korean Society for Aviation and Aeronautics
    • /
    • v.29 no.3
    • /
    • pp.1-14
    • /
    • 2021
  • This study examined the mediating role of cognitive emotion regulation in the relationship between calling and posttraumatic growth (PTG) and the moderating role of transformational leadership among Air Force pilots. A total of 215 ROK Air Force pilots participated in this study twice, with an interval of four weeks. The results were as follows. First, calling, transformational leadership, adaptive emotion regulation, and PTG showed statistically significant correlations. Second, a mediation model showed that the relationship between calling and PTG was mediated by adaptive emotion regulation. Third, transformational leadership moderated the effect of calling on adaptive emotion regulation. Finally, transformational leadership also moderated the mediating effect of adaptive emotion regulation in the relationship between calling and PTG. Implications, limitations, and future research suggestions are discussed.

An Artificial Emotion Model for Expression of Game Character (감정요소가 적용된 게임 캐릭터의 표현을 위한 인공감정 모델)

  • Kim, Ki-Il;Yoon, Jin-Hong;Park, Pyoung-Sun;Kim, Mi-Jin
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02b
    • /
    • pp.411-416
    • /
    • 2008
  • The development of games has produced game characters that are visually very realistic, and there is now much enthusiasm for giving characters emotions through devices such as avatars and emoticons. However, in the freely changing environment of a game, these devices merely express a value derived from an initial input rather than creating emotional expressions that actively respond to their surroundings; as a result, game characters do not yet display deep emotions. In light of this, the present article proposes the 'CROSS (Character Reaction on Specific Situation) Model AE Engine' for developing game characters that actively express action and emotion within the changing environment of games. This is accomplished by classifying the emotional components applicable to game characters based on the OCC model, one of the best-known cognitive psychological models, and then systematizing a game-play situation analysis of a commercialized RPG using an ontology.


Multi-Emotion Recognition Model with Text and Speech Ensemble (텍스트와 음성의 앙상블을 통한 다중 감정인식 모델)

  • Yi, Moung Ho;Lim, Myoung Jin;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.11 no.8
    • /
    • pp.65-72
    • /
    • 2022
  • Due to COVID-19, face-to-face counseling has shifted to non-face-to-face counseling, and the importance of the latter is increasing. The advantage of non-face-to-face counseling is that it can be conducted online anytime, anywhere, and is safe from COVID-19. However, it is difficult to understand the client's state of mind because non-verbal expressions are hard to convey. It is therefore important, during non-face-to-face counseling, to recognize emotions by accurately analyzing text and voice. In this paper, text data is vectorized using FastText after separating syllables into their consonants and vowels, and voice data is vectorized by extracting Log Mel Spectrogram and MFCC features. We propose a multi-emotion recognition model that recognizes five emotions from the vectorized data using an LSTM model, with performance measured by RMSE. In the experiments, the RMSE of the proposed model was 0.2174, the lowest error compared to models using text or voice data alone.
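The RMSE criterion used to evaluate the five-emotion output can be sketched as follows; the emotion labels and intensity values are illustrative, not taken from the paper's data.

```python
import numpy as np

def multi_emotion_rmse(y_true, y_pred):
    """RMSE between predicted and annotated intensities of five emotions.

    y_true, y_pred: arrays of shape (n_samples, 5), one column per emotion,
    with intensities in [0, 1]. A single scalar error over all emotions is
    returned, so lower values mean the whole emotion vector was matched well.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

truth = [[0.9, 0.0, 0.1, 0.0, 0.0]]
pred  = [[0.7, 0.1, 0.2, 0.0, 0.0]]
rmse = multi_emotion_rmse(truth, pred)
assert round(rmse, 4) == 0.1095
```

Evaluating all five intensities jointly (rather than a single predicted label) is what makes this a multi-emotion criterion: a model is rewarded for getting the whole distribution of emotions right.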

Sex differences of children's facial expression discrimination based on two-dimensional model of emotion (정서의 이차원모델에서 아동의 얼굴표정 변별에서 성 차이)

  • Shin, Young-Suk
    • Korean Journal of Cognitive Science
    • /
    • v.21 no.1
    • /
    • pp.127-143
    • /
    • 2010
  • This study explores sex differences in children's discrimination of emotion from facial expressions based on a two-dimensional model of emotion. The study group consisted of 92 children of 40, 52, and 64 months of age, half male (50%) and half female (50%). The 92 children were asked to choose facial expressions related to twelve emotion terms. The facial expressions used in the experiment were photographs whose degree of expression in each of the two dimensions (pleasure-displeasure and arousal-sleep) had been rated on a nine-point scale by 54 university students. The findings show that sex differences appeared distinctly in the arousal-sleep dimension rather than the pleasure-displeasure dimension. In the arousal-sleep dimension, the emotions 'sleepiness, anger, comfort, and loneliness' showed large sex differences, exceeding a value of 1. In particular, while male children showed higher arousal than female children for emotions such as 'sleepiness, anger, and loneliness', female children showed higher arousal than male children for the 'comfort' emotion.


Development of a driver's emotion detection model using auto-encoder on driving behavior and psychological data

  • Jung, Eun-Seo;Kim, Seo-Hee;Hong, Yun-Jung;Yang, In-Beom;Woo, Jiyoung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.3
    • /
    • pp.35-43
    • /
    • 2023
  • Emotion recognition while driving is essential for preventing accidents. Furthermore, in the era of autonomous driving, automobiles become the subject of mobility and require more emotional communication with drivers, and the emotion recognition market is gradually expanding. Accordingly, in this research, the driver's emotions are classified into seven categories using psychological and behavioral data, which are relatively easy to collect. Latent vectors extracted by an auto-encoder model were also used as features in the classification model, and were confirmed to improve performance. Performance was also better with the framework presented in this paper than when the existing EEG data were included. Finally, a driver-emotion classification accuracy of 81% and an F1-score of 80% were achieved using only psychological, personal-information, and behavioral data.
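The idea of augmenting the classifier's input with auto-encoder latent vectors can be sketched with a minimal linear auto-encoder; the layer sizes, training loop, and toy data below are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for psychological and driving-behavior features.
X = rng.normal(size=(200, 10))

# Minimal linear auto-encoder: 10 inputs -> 3 latent dims -> 10 outputs.
W_enc = rng.normal(scale=0.1, size=(10, 3))
W_dec = rng.normal(scale=0.1, size=(3, 10))

def mse():
    return float(np.mean(((X @ W_enc) @ W_dec - X) ** 2))

mse_before = mse()
lr = 0.01
for _ in range(500):
    Z = X @ W_enc                   # latent vectors
    err = Z @ W_dec - X             # reconstruction error
    # Gradient descent on the mean squared reconstruction error.
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

assert mse() < mse_before           # the auto-encoder learned to reconstruct

# Concatenate latent vectors with the raw features for the emotion classifier.
X_aug = np.hstack([X, X @ W_enc])
assert X_aug.shape == (200, 13)
```

The augmented matrix `X_aug` would then feed any seven-class classifier; the latent columns give it a compressed summary of the input that can carry signal the raw features expose less directly.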