• Title/Summary/Keyword: Emotion Model

Engine of computational Emotion model for emotional interaction with human (인간과 감정적 상호작용을 위한 '감정 엔진')

  • Lee, Yeon Gon
    • Science of Emotion and Sensibility, v.15 no.4, pp.503-516, 2012
  • In existing research on robots and software agents, computational emotion models are system-dependent, so it is difficult to separate an emotion model from an existing system and reuse it in a new one. We therefore introduce the Engine of computational Emotion model (hereafter EE), which can be integrated with any robot or agent. The EE is a software engine independent of inputs and outputs: it handles only the generation and processing of emotions, without the Input (Perception) and Output (Expression) phases. It can be interfaced with any inputs and outputs, and produces emotions not only from emotional stimuli themselves but also from the personality and current emotions of the person. The EE can be embedded in any robot or agent as a software library, or run as a separate system that communicates with it. The emotions in the EE are the six primary emotions: Joy, Surprise, Disgust, Fear, Sadness, and Anger. Each emotion is a vector consisting of a string and a coefficient; the EE receives these vectors from the input interface and sends them to the output interface. Each emotion is connected to a list of emotional experiences, and these lists, likewise composed of strings and coefficients, are used to generate and process emotional states. The emotional experiences consist of emotion vocabulary covering the variety of human emotional experience. The EE can be used to build interactive products that respond appropriately to human emotions. The significance of this study lies in developing a system that makes a person feel that a product sympathizes with them; the EE can thus help products in the HRI and HCI areas deliver emotional sympathy efficiently.

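
The abstract's emotion representation, a vector pairing an emotion label (string) with a coefficient, routed from an input interface to an output interface, can be sketched roughly as follows. The class name, the blending rule, and the interface methods are illustrative assumptions, not the paper's actual API.

```python
# Minimal sketch of the emotion-vector idea: each of the six primary
# emotions is a (label, coefficient) pair, and the engine only generates
# and processes emotion state, leaving perception and expression to
# pluggable interfaces. All names here are illustrative assumptions.

PRIMARY_EMOTIONS = ["Joy", "Surprise", "Disgust", "Fear", "Sadness", "Anger"]

class EmotionEngine:
    def __init__(self):
        # Internal emotional state: one coefficient per primary emotion.
        self.state = {e: 0.0 for e in PRIMARY_EMOTIONS}

    def receive(self, vectors):
        """Accept (label, coefficient) vectors from any input interface."""
        for label, coeff in vectors:
            if label in self.state:
                # Blend new evidence into the current state, capped at 1.0.
                self.state[label] = min(1.0, self.state[label] + coeff)

    def emit(self):
        """Send nonzero (label, coefficient) vectors to any output interface."""
        return [(e, c) for e, c in self.state.items() if c > 0.0]

engine = EmotionEngine()
engine.receive([("Joy", 0.6), ("Fear", 0.2)])
print(engine.emit())  # → [('Joy', 0.6), ('Fear', 0.2)]
```

The point of the separation is that `receive` and `emit` are the only coupling surfaces: a robot and a chat agent could share the same engine with different perception and expression code.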

Toward an integrated model of emotion recognition methods based on reviews of previous work (정서 재인 방법 고찰을 통한 통합적 모델 모색에 관한 연구)

  • Park, Mi-Sook; Park, Ji-Eun; Sohn, Jin-Hun
    • Science of Emotion and Sensibility, v.14 no.1, pp.101-116, 2011
  • Current research on emotion detection classifies emotions using information from facial, vocal, and bodily expressions, or from physiological responses. This study reviews three representative emotion recognition methods, each grounded in a psychological theory of emotion. First, we reviewed the literature on emotion recognition based on facial expressions; these studies are supported by Darwin's theory. Second, we reviewed emotion recognition based on physiological changes; these studies rely on James' theory. Last, we reviewed emotion recognition based on multimodality (i.e., combinations of signals from the face, dialogue, posture, or the peripheral nervous system); these studies draw on both Darwin's and James' theories. In each part, research findings were examined along with the theoretical background on which each method relies. The review identifies the need for an integrated model of emotion recognition to advance the field. The integrated model suggests that recognition methods should include additional physiological signals, such as brain responses or facial temperature, should be based on a multidimensional model, and should take into account cognitive appraisal factors during emotional experience.


User's Emotion Modeling on Dynamic Narrative Structure : towards of Film and Game (동적 내러티브 구조에 대한 사용자 감정모델링 : 영화와 게임을 중심으로)

  • Kim, Mi-Jin; Kim, Jae-Ho
    • The Journal of the Korea Contents Association, v.12 no.1, pp.103-111, 2012
  • This paper is a basic study toward a system that can predict the success or failure of entertainment content at the initial stage of production. It proposes a model of the user's emotions over the dynamic narrative of entertainment content. To this end, 1) a dynamic-narrative emotion model is proposed based on theoretical research on narrative structure and cognitive emotion models; 2) by configuring emotion types and emotion values, three emotion parameters (desire, expectation, emotion type) are derived for the proposed model; and 3) to measure the user's emotion at each story event of the dynamic narrative, the cognitive behavior and description of the user (film, game) is established. Earlier conceptual and analytic user studies aimed at predicting media reviews and user attitudes, and their results were purely descriptive. In contrast, this paper proposes a method for modeling the user's emotions over a dynamic narrative, which could contribute to the emotional evaluation of entertainment content using specific information.
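
The three emotion parameters the abstract names (desire, expectation, emotion type) could be tracked per story event along the following lines. The data layout and the update rule below are our own illustrative assumptions, not the paper's formulation.

```python
# Illustrative sketch: score a user's emotional response per story event
# using the paper's three parameters (desire, expectation, emotion type).
# The weighting rule below is an assumption for demonstration only.
from dataclasses import dataclass

@dataclass
class StoryEvent:
    name: str
    desire: float        # how strongly the user wants the outcome, 0..1
    expectation: float   # how expected the outcome was, 0..1
    emotion_type: str    # e.g. "hope", "fear", "relief"

def emotion_value(event: StoryEvent) -> float:
    # Stronger desire and lower expectation -> stronger felt emotion
    # (a surprise the user cared about hits harder).
    return event.desire * (1.0 - event.expectation)

narrative = [
    StoryEvent("hero_escapes", desire=0.9, expectation=0.3, emotion_type="relief"),
    StoryEvent("ally_betrays", desire=0.8, expectation=0.1, emotion_type="fear"),
]
for ev in narrative:
    print(ev.name, round(emotion_value(ev), 2))
```

Summing such per-event values over a whole narrative would give the kind of aggregate emotional profile the paper wants to compare across films and games.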

Emotion Recognition Implementation with Multimodalities of Face, Voice and EEG

  • Udurume, Miracle; Caliwag, Angela; Lim, Wansu; Kim, Gwigon
    • Journal of Information and Communication Convergence Engineering, v.20 no.3, pp.174-180, 2022
  • Emotion recognition is an essential component of complete human-machine interaction. Its difficulty stems from the different forms in which emotions are expressed, such as visual, sound, and physiological signals. Recent advances in the field show that combined modalities, such as visual, voice, and electroencephalography signals, lead to better results than single modalities used separately. Previous studies have explored multiple modalities for accurate emotion prediction; however, studies of real-time implementation are limited because of the difficulty of implementing multiple modalities of emotion recognition simultaneously. In this study, we propose an emotion recognition system for real-time implementation. Our model is built on a multithreading block that runs each modality in a separate thread for continuous synchronization. First, we achieved emotion recognition for each modality separately before enabling the multithreaded system. To verify the results, we compared the accuracy of unimodal and multimodal emotion recognition in real time. The experimental results showed that the proposed model recognizes user emotion in real time and that multimodality is effective: the multimodal model obtained an accuracy of 80.1%, compared with unimodal accuracies of 70.9%, 54.3%, and 63.1%.
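
The multithreading block described, one thread per modality synchronized into a combined prediction, might look schematically like this. The per-modality "classifiers" are stand-in stubs, and the late-fusion rule is our assumption; the paper's actual face, voice, and EEG models are not specified in the abstract.

```python
# Schematic of the multithreaded design: each modality runs in its own
# thread and posts its prediction to a shared queue, which a fusion step
# drains after joining all threads. Classifiers below are stubs.
import threading
import queue

def run_modality(name, classify, sample, out_q):
    out_q.put((name, classify(sample)))

# Stand-in classifiers returning (label, confidence); real models would
# process face frames, voice windows, and EEG segments respectively.
stubs = {
    "face":  lambda s: ("happy", 0.71),
    "voice": lambda s: ("happy", 0.54),
    "eeg":   lambda s: ("neutral", 0.63),
}

def recognize(sample):
    out_q = queue.Queue()
    threads = [threading.Thread(target=run_modality, args=(n, f, sample, out_q))
               for n, f in stubs.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # synchronize: wait for every modality to finish
    results = dict(out_q.get() for _ in stubs)
    # Simple late fusion: pick the label with the highest total confidence.
    totals = {}
    for label, conf in results.values():
        totals[label] = totals.get(label, 0.0) + conf
    return max(totals, key=totals.get), results

label, per_modality = recognize(sample=None)
print(label)  # → happy  (0.71 + 0.54 > 0.63)
```

In a real-time loop the threads would run continuously on streaming input rather than being joined per sample, but the queue-based synchronization point is the same.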

Maximum Entropy-based Emotion Recognition Model using Individual Average Difference (개인별 평균차를 이용한 최대 엔트로피 기반 감성 인식 모델)

  • Park, So-Young; Kim, Dong-Keun; Whang, Min-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering, v.14 no.7, pp.1557-1564, 2010
  • In this paper, we propose a maximum entropy-based emotion recognition model that uses the individual average difference of emotional signals, because emotional signal patterns vary from person to person. To recognize a user's emotion accurately, the proposed model uses the difference between the average of the input emotional signals and the average of the signals for each emotional state (such as positive and negative emotional signals), rather than the raw input signal alone. So that the recognition model can be built without expert knowledge of emotion recognition, it uses a maximum entropy model, one of the best-performing and best-known machine learning techniques. Because it is difficult to obtain enough numerical-valued training data for machine learning, the proposed model substitutes the simple symbols + (positive number) and - (negative number) for each average difference value, and computes the average of the emotional signals per second rather than over the total emotion response time (10 seconds).
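
The symbolization step in the abstract, replacing each average-difference value with a + or - symbol before feeding a maximum entropy classifier, can be illustrated as follows. The signal values and reference averages are invented for the example, and the MaxEnt classifier itself is omitted.

```python
# Sketch of the feature preparation described: compute per-second
# averages of an emotional signal, take the difference from each
# emotional state's reference average, and keep only the sign as a
# '+'/'-' symbol. All numbers below are invented for illustration.

def per_second_averages(samples, rate):
    """Average the signal over each one-second window (rate samples/sec)."""
    return [sum(samples[i:i + rate]) / rate
            for i in range(0, len(samples) - rate + 1, rate)]

def symbolize(avg, reference_avg):
    """Replace the numeric average difference with a '+'/'-' symbol."""
    return "+" if avg - reference_avg >= 0 else "-"

signal = [0.2, 0.4, 0.3, 0.1, 0.9, 0.7, 0.8, 0.6]  # 2 seconds at 4 Hz
positive_ref, negative_ref = 0.5, 0.3               # per-state reference means

for avg in per_second_averages(signal, rate=4):
    features = (symbolize(avg, positive_ref), symbolize(avg, negative_ref))
    print(round(avg, 2), features)
```

The resulting symbol tuples, one per second, are the discrete features a maximum entropy model can be trained on even when numeric training data is scarce.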

Designing emotional model and Ontology based on Korean to support extended search of digital music content (디지털 음악 콘텐츠의 확장된 검색을 지원하는 한국어 기반 감성 모델과 온톨로지 설계)

  • Kim, SunKyung; Shin, PanSeop; Lim, HaeChull
    • Journal of the Korea Society of Computer and Information, v.18 no.5, pp.43-52, 2013
  • In recent years, a large amount of music content has been distributed over the Internet, and various studies have been carried out on retrieving the music content a user wants. In particular, music recommendation systems that combine an emotion model with MIR (Music Information Retrieval) are under active development. These studies, however, have several drawbacks. First, the structure of the emotion model used is simple. Second, because the emotion model was not designed for the Korean language, there are limits to processing the semantics of emotional words expressed in Korean. In this paper, by extending the existing emotion model, we propose KOREM (KORean Emotional Model), a new emotion model based on Korean, and we design and implement an ontology using the proposed model. These make it possible to sort, store, and retrieve music content described with a variety of emotional expressions.
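
A minimal illustration of the kind of Korean-emotion-word-to-category mapping such an ontology might encode, and of the "extended search" it enables. The vocabulary, categories, and structure here are our assumptions, not KOREM's actual design.

```python
# Toy illustration of mapping Korean emotional vocabulary to emotion
# categories for music retrieval. The words, categories, and links are
# invented examples; KOREM's real ontology is richer than this.
emotion_ontology = {
    "기쁨": {"category": "joy", "valence": "positive"},
    "슬픔": {"category": "sadness", "valence": "negative"},
    "설렘": {"category": "excitement", "valence": "positive"},
}

tracks = [
    {"title": "Track A", "tags": ["기쁨", "설렘"]},
    {"title": "Track B", "tags": ["슬픔"]},
]

def search_by_valence(valence):
    """Extended search: match tracks whose tagged words share a valence."""
    return [t["title"] for t in tracks
            if any(emotion_ontology.get(tag, {}).get("valence") == valence
                   for tag in t["tags"])]

print(search_by_valence("positive"))  # → ['Track A']
```

The gain over plain keyword matching is that a query need not use the same word as the tag; any word linked to the same ontology node (or a related one) can match.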

Development of a Negative Emotion Prediction Model by Cortisol-Hormonal Change During the Biological Classification (생물분류탐구과정에서 호르몬 변화를 이용한 부정감성예측모델 개발)

  • Park, Jin-Sun; Lee, Il-Sun; Lee, Jun-Ki; Kwon, Yongju
    • Journal of Science Education, v.34 no.2, pp.185-192, 2010
  • The purpose of this study was to develop a model that predicts negative emotion from hormonal changes during scientific inquiry. A biological classification task suitable for comprehensive scientific inquiry was developed for the study. Forty-seven healthy 2nd-grade secondary school students (18 boys, 29 girls) participated and performed a feather classification task individually. Before and after the task, the strength of negative emotion was measured using adjective emotion checklists, and saliva samples were collected for salivary hormone analysis. Students' change in negative emotion during the feather classification process showed a significant positive correlation (R=0.39, p<0.001) with their salivary cortisol concentration. Based on these results, we developed a model that predicts negative emotion from changes in salivary cortisol.

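
Given the reported correlation (R=0.39) between cortisol change and negative-emotion change, a prediction model of the kind described amounts to fitting a regression line. The sketch below fits one by least squares on invented data; the paper's actual coefficients are not given in the abstract.

```python
# Least-squares line predicting negative-emotion change from salivary
# cortisol change. The (cortisol, emotion) pairs are invented; only the
# modeling approach mirrors the study.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # slope and intercept

cortisol_change = [0.1, 0.3, 0.5, 0.7]   # invented hormone deltas
emotion_change  = [1.0, 2.0, 2.8, 4.2]   # invented checklist score deltas

slope, intercept = fit_line(cortisol_change, emotion_change)
predict = lambda x: slope * x + intercept
print(round(predict(0.4), 2))
```

With R=0.39 the real relationship is noisy, so such a line predicts a tendency rather than an individual's exact emotional response.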

Design of Intelligent Emotion Recognition Model (지능형 감정인식 모델설계)

  • 김이곤; 김서영; 하종필
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2001.12a, pp.46-50, 2001
  • Voice is one of the most efficient communication media and carries several kinds of information about the speaker, the context, emotion, and so on. Human emotion is expressed in speech, gesture, and physiological phenomena (breathing, pulse rate, etc.). In this paper, a method for recognizing emotion from a speaker's voice signals is presented and simulated using a neuro-fuzzy model.

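
The neuro-fuzzy approach mentioned assigns graded memberships to acoustic features rather than crisp classes. A bare-bones fuzzy-membership step might look like this; the choice of triangular membership functions, the normalized pitch feature, and the set boundaries are illustrative assumptions.

```python
# Toy fuzzy step of the kind a neuro-fuzzy emotion recognizer builds on:
# map an acoustic feature (here, normalized pitch) to graded memberships
# in fuzzy sets. Set boundaries are invented for illustration.
def triangular(x, left, peak, right):
    """Triangular membership function on [left, right] peaking at `peak`."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def pitch_memberships(pitch):
    """Fuzzy memberships of a normalized pitch value in three sets."""
    return {
        "low":  triangular(pitch, -0.5, 0.0, 0.5),
        "mid":  triangular(pitch,  0.0, 0.5, 1.0),
        "high": triangular(pitch,  0.5, 1.0, 1.5),
    }

print(pitch_memberships(0.25))  # partly "low", partly "mid"
```

In a neuro-fuzzy model these graded values feed fuzzy rules (e.g. high pitch and high energy suggesting excitement) whose parameters are tuned by a neural network rather than set by hand.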

Design of Emotion Recognition Model Using fuzzy Logic (퍼지 로직을 이용한 감정인식 모델설계)

  • 김이곤; 배영철
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2000.05a, pp.268-282, 2000
  • Speech is one of the most efficient communication media and carries several kinds of information about the speaker, the context, emotion, and so on. Human emotion is expressed in speech, gesture, and physiological phenomena (breathing, pulse rate, etc.). In this paper, a method for recognizing emotion from a speaker's voice signals is presented and simulated using a neuro-fuzzy model.


Emotion-Based Control

  • Ko, Sung-Bum; Lim, Gi-Young
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2000.04a, pp.306-311, 2000
  • We human beings use the powers of reason and emotion simultaneously, which helps us adapt flexibly to a dynamic environment. We assert that this principle can be applied to systems in general: it should be possible to improve adaptability by covering a digital-oriented information processing system with an analog-oriented emotion layer. In this paper, we propose a vertical slicing model containing an emotion layer, and we show that emotion-based control improves the adaptability of a system, at least under some conditions.

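
The idea of wrapping a digital controller with an analog-oriented emotion layer can be caricatured as an outer layer that continuously modulates the inner controller's gain. Everything below (the arousal dynamics, the decay constant, the gain rule) is an illustrative assumption, not the paper's model.

```python
# Caricature of emotion-based control: a crisp inner controller whose
# gain is modulated by an "emotion" level that rises with recent error
# and decays over time. All dynamics here are invented for illustration.
class EmotionLayer:
    def __init__(self, decay=0.8):
        self.arousal = 0.0  # analog-style internal state in [0, 1]
        self.decay = decay

    def update(self, error):
        # Arousal rises with error magnitude and decays between steps.
        self.arousal = min(1.0, self.decay * self.arousal + abs(error) * 0.5)
        return 1.0 + self.arousal  # gain multiplier for the controller

def control(target, state, base_gain, emotion):
    error = target - state
    gain = base_gain * emotion.update(error)  # emotion layer modulates gain
    return state + gain * error

emotion = EmotionLayer()
state = 0.0
for _ in range(5):
    state = control(target=1.0, state=state, base_gain=0.3, emotion=emotion)
print(round(state, 3))
```

The system reacts more aggressively while error is large and relaxes as it settles, which is one simple reading of how an emotion layer could add adaptability to a fixed digital controller.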