Title/Summary/Keyword: Emotion Model

Behavior Decision Model Based on Emotion and Dynamic Personality

  • Yu, Chan-Woo;Choi, Jin-Young
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.101-106 / 2005
  • In this paper, we propose a behavior decision model for a robot based on artificial emotion, various motivations, and dynamic personality. Our goal is to make a robot that can express its emotions in a human-like way. To achieve this goal, we applied several emotion and personality theories from psychology. In particular, we introduced the concept of a dynamic personality model for a robot. Drawing on this concept, we built a behavior decision model in which the robot's emotion expression adapts to various environments through interaction between the human and the robot.

An Emotion Processing Model using Multiple Valued Logic Functions (다치 논리함수를 이용한 감성처리 모델)

  • Chung, Hwan-Mook
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.13-18 / 2009
  • Human emotions are usually vague and change in diverse ways depending on external stimuli. Plutchik classified the fundamental behavioral patterns into eight patterns, named each of them a genuine emotion, and further suggested mixed emotions formed by combining genuine emotions. In this paper, we propose a method for processing Plutchik's emotion model using a Multiple Valued Logic (MVL) automata model that exploits the difference properties of multiple-valued logic functions. The proposed emotion processing model can be widely applied to the analysis and processing of emotion data.
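
Plutchik's scheme of genuine and mixed emotions can be made concrete with a small sketch. The Python below is not the paper's MVL automata model; it only lists the eight genuine emotions in wheel order and looks up the commonly cited names for adjacent-pair mixtures ("primary dyads").

```python
# The eight genuine emotions on Plutchik's wheel, in adjacency order.
PLUTCHIK_WHEEL = ["joy", "trust", "fear", "surprise",
                  "sadness", "disgust", "anger", "anticipation"]

# Commonly cited names for the adjacent-pair mixtures ("primary dyads").
PRIMARY_DYADS = {
    ("joy", "trust"): "love",
    ("trust", "fear"): "submission",
    ("fear", "surprise"): "awe",
    ("surprise", "sadness"): "disapproval",
    ("sadness", "disgust"): "remorse",
    ("disgust", "anger"): "contempt",
    ("anger", "anticipation"): "aggressiveness",
    ("anticipation", "joy"): "optimism",
}

def mix(e1, e2):
    """Return the mixed emotion for two adjacent genuine emotions, else None."""
    return PRIMARY_DYADS.get((e1, e2)) or PRIMARY_DYADS.get((e2, e1))
```

For example, `mix("joy", "trust")` yields the dyad "love" in either argument order, while non-adjacent pairs fall through to `None`.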

Robot behavior decision based on Motivation and Hierarchicalized Emotions

  • Ahn, Hyoung-Chul;Park, Myoung-Soo;Choi, Jin-Young
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2004.08a / pp.1776-1780 / 2004
  • In this paper, we propose a new emotion model and a robot behavior decision model based on it. As in humans, emotions are organized hierarchically in four levels (momentary emotions, mood, attitude, and personality) and are determined from the robot's behavior and human responses. They are combined with motivation, which is determined from external stimuli, to decide the robot's behavior.
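
The four-level hierarchy and its combination with motivation can be sketched roughly as follows. The update rates, the averaging rule, and the two behavior labels are illustrative assumptions, not values from the paper; the idea is only that faster levels (momentary emotion) react more strongly to a stimulus than slower ones (personality).

```python
LEVELS = ["momentary", "mood", "attitude", "personality"]
# Faster-changing levels react more strongly to each stimulus (illustrative).
RATES = {"momentary": 0.9, "mood": 0.3, "attitude": 0.05, "personality": 0.01}

def update(state, stimulus):
    """Blend a stimulus value in [-1, 1] into each level at its own rate."""
    return {lvl: (1 - RATES[lvl]) * state[lvl] + RATES[lvl] * stimulus
            for lvl in LEVELS}

def decide_behavior(state, motivation):
    """Combine the aggregated emotion with motivation into a behavior choice."""
    emotion = sum(state.values()) / len(LEVELS)
    return "approach" if emotion + motivation > 0 else "avoid"
```

After one positive stimulus from a neutral start, the momentary level has moved most and the personality level least, which mirrors the time scales the abstract describes.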

A study of customer's emotional change by the ways of presenting pictures of clothing at online shops (온라인 쇼핑몰에서 상품 표현방식에 따른 감성변화에 관한 연구)

  • Park, Seong-Jong;Seok, Hyeon-Jeong
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2008.10a / pp.74-77 / 2008
  • Online shoppers cannot try clothing on, so the pictures of clothing on a website play a significant role in purchase decisions. Online shops generally present clothing in three ways: images of the clothing alone; pictures of fitting models wearing the clothing, with the model's face mostly hidden (anonymous models); and pictures of professional fitting models wearing the goods. Shops adopt each of these ways, but little is known about how the three affect purchasing. The aim of this study is to determine how these three ways of presenting clothing affect the online shopper's emotional state. First, we developed a set of emotion adjectives, drawn from prior research on human emotion, as evaluation criteria for the user's emotional state. To cut the 99 adjectives down to a manageable number, we conducted a preliminary survey; five adjectives ('possess', 'sensual', 'unique', 'tasteful', and 'stylish') were finally selected as criteria for evaluating shoppers' emotional state. We then conducted the main survey with 10 kinds of clothing, each presented in the three ways, and on the pages with model images we also measured preference for the model to understand its relation to the emotion criteria for the product. The results showed a statistically significant difference between product-only images and anonymous-model images, but no significant difference between anonymous-model and professional-model images. Model preference correlated strongly with the emotion criteria values, except for the 'unique' criterion. We expect these results to help build new marketing strategies that satisfy customers' emotions.

A Study on Sentiment Pattern Analysis of Video Viewers and Predicting Interest in Video using Facial Emotion Recognition (얼굴 감정을 이용한 시청자 감정 패턴 분석 및 흥미도 예측 연구)

  • Jo, In Gu;Kong, Younwoo;Jeon, Soyi;Cho, Seoyeong;Lee, DoHoon
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.215-220 / 2022
  • Emotion recognition is one of the most important and challenging areas of computer vision. Many studies on emotion recognition have been conducted and model performance keeps improving, but more research is needed on emotion recognition and sentiment analysis of video viewers. In this paper, we propose an emotion analysis system that includes a sentiment analysis model and an interest prediction model. We analyzed the emotional patterns of people watching popular and unpopular videos and predicted their level of interest using the system. Experimental results showed that certain emotions were strongly related to the popularity of videos and that the interest prediction model predicted the level of interest with high accuracy.

Validity analysis of the social emotion model based on relation types in SNS (SNS 사용자의 관계유형에 따른 사회감성 모델의 타당화 분석)

  • Cha, Ye-Sool;Kim, Ji-Hye;Kim, Jong-Hwa;Kim, Song-Yi;Kim, Dong-Keun;Whang, Min-Cheol
    • Science of Emotion and Sensibility / v.15 no.2 / pp.283-296 / 2012
  • The goal of this study is to determine social emotion models for emotion-sharing and information-sharing relationships, based on users' relations in social networking services. Twenty-six social emotions were extracted by a compliance check among 92 emotion words collected from a literature survey. A survey on the 26 emotion words then measured their similarity for the two social relation types on a 7-point Likert scale. Principal component analysis of the survey data identified 12 representative social emotions for the emotion-sharing relationship and 13 for the information-sharing relationship, and multidimensional scaling produced a two-dimensional social emotion model for each relationship type in the online communication environment. Factors found statistically insignificant in the suggested models were removed by structural equation modeling analysis. The validity analysis demonstrated the fitness of the social emotion model for emotion-sharing relationships (CFI: .887, TLI: .885, RMSEA: .094) and for information-sharing relationships (CFI: .917, TLI: .900, RMSEA: .050). In conclusion, this study presents two social emotion models based on the two relation types. The findings will provide both a reference for evaluating social emotions when designing social networking services and a direction for improving them.

Speech emotion recognition based on genetic algorithm-decision tree fusion of deep and acoustic features

  • Sun, Linhui;Li, Qiu;Fu, Sheng;Li, Pingan
    • ETRI Journal / v.44 no.3 / pp.462-475 / 2022
  • Although researchers have proposed numerous techniques for speech emotion recognition, its performance remains unsatisfactory in many application scenarios. In this study, we propose a speech emotion recognition model based on a genetic algorithm (GA)-decision tree (DT) fusion of deep and acoustic features. To express speech emotional information more comprehensively, frame-level deep and acoustic features are first extracted from the speech signal. Next, five kinds of statistics of these features are calculated to obtain utterance-level features. The Fisher feature selection criterion is employed to select high-performance features and remove redundant information. In the feature fusion stage, the GA adaptively searches for the best feature fusion weight. Finally, using the fused features, the proposed speech emotion recognition model based on a DT support vector machine is realized. Experimental results on the Berlin speech emotion database and the Chinese emotion speech database indicate that the proposed model outperforms an average-weight fusion method.
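
The Fisher feature selection criterion mentioned above can be sketched for the two-class case: a feature scores highly when its class means are far apart relative to the within-class variances. The data layout (one sample list per feature per class) and the small stabilizing constant are assumptions for illustration; the paper's exact multi-class variant may differ.

```python
def fisher_score(class_a, class_b):
    """(gap between class means)^2 / (sum of within-class variances)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # Small constant guards against division by zero for constant features.
    return (mean(class_a) - mean(class_b)) ** 2 / (var(class_a) + var(class_b) + 1e-12)

def select_top_k(features_a, features_b, k):
    """features_a[i] / features_b[i]: samples of feature i for each class."""
    scores = [fisher_score(a, b) for a, b in zip(features_a, features_b)]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```

A feature whose per-class samples barely overlap ranks ahead of one whose class distributions coincide, which is exactly the redundancy-removal behavior the abstract describes.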

Attention-based CNN-BiGRU for Bengali Music Emotion Classification

  • Subhasish Ghosh;Omar Faruk Riad
    • International Journal of Computer Science & Network Security / v.23 no.9 / pp.47-54 / 2023
  • For Bengali music emotion classification, deep learning models, particularly CNNs and RNNs, are frequently used, but previous research suffered from low accuracy and overfitting. In this research, an attention-based Conv1D and BiGRU model is designed for music emotion classification, and comparative experiments show that the proposed model classifies emotions more accurately. Preprocessing of the .wav files uses MFCCs. Contextual features are extracted by two Conv1D layers, which also reduce the dimensionality of the feature space, and dropout is used to counter overfitting. Two bidirectional GRU networks update the past and future emotion representations of the Conv1D output, and the two BiGRU layers are connected to an attention mechanism that gives more weight to informative MFCC feature vectors; this attention mechanism increases the accuracy of the proposed classification model. The resulting vector is finally classified into four emotion classes (Angry, Happy, Relax, Sad) by a dense, fully connected layer with softmax activation. The proposed Conv1D+BiGRU+Attention model classifies emotions in our Bengali music dataset more efficiently than baseline methods, achieving 95% accuracy.
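
The attention step described above, which weights some time-step feature vectors more than others before classification, can be sketched as softmax attention pooling. This is a generic sketch, not the paper's network: the fixed scoring vector stands in for parameters that would be learned, and the BiGRU outputs are stand-in lists of floats.

```python
import math

def attention_pool(hidden, score_w):
    """Pool T time-step vectors into one context vector via softmax attention."""
    # One scalar score per time step (dot product with the scoring vector).
    scores = [sum(w * h for w, h in zip(score_w, vec)) for vec in hidden]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]   # numerically stable softmax
    total = sum(exps)
    alphas = [e / total for e in exps]            # attention weights, sum to 1
    width = len(hidden[0])
    # Context vector: attention-weighted sum of the time-step vectors.
    context = [sum(a * vec[d] for a, vec in zip(alphas, hidden))
               for d in range(width)]
    return context, alphas
```

A time step whose features align with the scoring vector receives nearly all of the weight, so the pooled context vector is dominated by the informative frames, which is the effect the abstract attributes to the attention mechanism.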

Design of an Artificial Emotion Model (인공 감정 모델의 설계)

  • Lee, In-K.;Seo, Suk-T.;Jeong, Hye-C.;Kwon, Soon-H.
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.5 / pp.648-653 / 2007
  • Research on artificial emotion, which generates emotion artificially from various external excitations by imitating human emotion, has begun in recent years. However, conventional studies, in which the emotion state changes exponentially or linearly with external emotional excitation, have the drawback that the emotion state varies rapidly and abruptly. In this paper, we propose an artificial emotion generation model that reflects not only the strength and frequency of external emotional excitations but also their period, and that represents the emotion state as a sigmoid curve over time. We also propose an artificial emotion system that generates emotion in the absence of external emotional excitations by recalling past excitations, and we show its effectiveness through computer simulation results.
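
The sigmoid-shaped emotion trajectory described above can be written down directly: the state rises smoothly toward a saturation level set by the excitation's strength, instead of jumping linearly or exponentially. The parameter names `midpoint` and `steepness` are illustrative, not the paper's.

```python
import math

def emotion_state(t, strength, midpoint=5.0, steepness=1.0):
    """Emotion intensity at time t: rises smoothly toward `strength`."""
    return strength / (1.0 + math.exp(-steepness * (t - midpoint)))
```

The curve is monotone in time and bounded by `strength`, so even a strong excitation produces a gradual rather than abrupt change of state.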

ME-based Emotion Recognition Model (ME 기반 감성 인식 모델)

  • Park, So-Young;Kim, Dong-Geun;Whang, Min-Cheol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.05a / pp.985-987 / 2010
  • In this paper, we propose a maximum entropy-based emotion recognition model using individual average differences. To recognize a user's emotion accurately, the proposed model uses the difference between the average of the given input physiological signals and the average of each emotion state's signals, rather than only the input signal. To alleviate data sparseness, the model substitutes the two symbols '+' (positive number) and '-' (negative number) for every average-difference value, and it calculates the averages of the physiological signals per second rather than over the longer total emotion response time. To keep the model easy to construct, it uses a simple average-difference calculation together with a maximum entropy model, a well-known machine learning technique.
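
The sign-substitution step described above can be sketched as follows: each channel's per-second average is compared with that emotion state's average, and only the sign of the difference is kept as a feature for the maximum entropy model. The channel names here are hypothetical.

```python
def sign_features(input_avgs, emotion_avgs):
    """Replace each channel's average difference with its sign symbol."""
    # Keeping only '+' / '-' collapses the continuous values into two
    # symbols per channel, which is what reduces data sparseness.
    return {ch: "+" if input_avgs[ch] - emotion_avgs[ch] >= 0 else "-"
            for ch in input_avgs}
```

For example, an input heart-rate average above the stored emotion-state average maps to '+' while a lower skin-conductance average maps to '-'.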
