Title/Summary/Keyword: Emotion Model

Emotion - Based Intelligent Model

  • Ko, Sung-Bum;Lim, Gi-Young
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.178.5-178
    • /
    • 2001
  • We human beings use the powers of reason and emotion simultaneously, which helps us adapt flexibly to a dynamic environment. We assert that this principle can be applied to systems in general; that is, it should be possible to improve adaptability by covering a digital-oriented information processing system with an analog-oriented emotion layer. In this paper, we propose a vertical slicing model with an emotion layer in it, and we show that emotion-based control improves the adaptability of a system, at least under some conditions.
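
A speculative sketch of the idea above, assuming nothing more than the abstract's phrasing: a discrete rule-based core whose decision threshold is modulated by a continuously varying "emotion" (arousal) value. The class names, dynamics, and constants are illustrative assumptions, not the authors' vertical slicing model.

```python
# Toy illustration (assumed, not from the paper): an analog emotion layer
# modulating a digital rule-based core.
class DigitalCore:
    def decide(self, sensor_value: float, threshold: float) -> str:
        # Purely discrete decision rule.
        return "act" if sensor_value > threshold else "wait"

class EmotionLayer:
    def __init__(self):
        self.arousal = 0.0                      # analog internal state in [0, 1]

    def update(self, surprise: float) -> None:
        # Unexpected input raises arousal; arousal slowly decays otherwise.
        self.arousal = 0.8 * self.arousal + 0.2 * surprise

    def modulate(self, base_threshold: float) -> float:
        # Higher arousal lowers the action threshold (quicker reactions).
        return base_threshold * (1.0 - 0.5 * self.arousal)

core, emotion = DigitalCore(), EmotionLayer()
for sensor, surprise in [(0.4, 0.1), (0.4, 0.9), (0.4, 0.9)]:
    emotion.update(surprise)
    print(core.decide(sensor, emotion.modulate(base_threshold=0.5)))
```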

The Effect of Brand Evidence on Positive Emotion, Negative Emotion, and Attitude in Restaurant Industry

  • KIM, Eun-Jung
    • The Korean Journal of Franchise Management
    • /
    • v.12 no.1
    • /
    • pp.45-55
    • /
    • 2021
  • Purpose: Building positive customer emotion is very important because it affects positive attitude. Brand evidence has a significant impact on consumer behavior by reinforcing consumers' perception of food service companies and differentiating them from competing brands. Thus, this study examines the effect of brand evidence on emotion (positive and negative emotion) and attitude in the restaurant industry. Research design, data, and methodology: This study examines the structural relationships among brand evidence, emotion, and attitude. Brand evidence is divided into three sub-dimensions: physical evidence, core service, and employee service. To test the purposes of this study, a research model and hypotheses were developed. The questionnaire items were adapted from previous studies to fit the content of this study, and all constructs were measured with multiple items tested and developed in prior research. A total of 460 questionnaires were distributed over a four-week survey period; excluding non-responses and 21 unusable responses, 439 responses from restaurant users in the Seoul area were analyzed using SPSS 22.0 and SmartPLS 3.0. Frequency analysis was conducted to identify the general characteristics of the respondents, confirmatory factor analysis was conducted to assess the reliability and validity of the measurement tools, and structural model analysis was conducted to verify the research model. Result: The findings demonstrate that physical evidence, core service, and employee service had positive effects on positive emotion. Core service and employee service had negative effects on negative emotion, whereas physical evidence did not. Positive emotion had a positive effect on attitude, and negative emotion had a negative effect on attitude. Conclusions: The findings provide guidelines on how to enhance competitiveness in the restaurant industry by understanding how brand evidence raises consumers' perceived emotion and attitude. Food service companies should therefore establish a marketing strategy that stimulates positive emotions through brand evidence, that is, all factors related to service brands that influence consumers' evaluation of service products and their purchase decision-making process.
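
The study estimates its structural model with confirmatory factor analysis and SmartPLS 3.0; as a rough, self-contained stand-in, the sketch below fits the same hypothesized paths (physical evidence, core service, employee service -> positive/negative emotion -> attitude) by ordinary least squares on synthetic scores. Everything except the sample size of 439 is made up for illustration.

```python
# Path-analysis sketch on synthetic data (assumption; the paper uses
# CFA and PLS, not OLS on simulated scores).
import numpy as np

rng = np.random.default_rng(1)
n = 439                                          # sample size from the study
physical, core, employee = rng.normal(size=(3, n))
pos_emotion = 0.3 * physical + 0.4 * core + 0.3 * employee + rng.normal(0, 0.5, n)
neg_emotion = -0.3 * core - 0.3 * employee + rng.normal(0, 0.5, n)
attitude = 0.6 * pos_emotion - 0.4 * neg_emotion + rng.normal(0, 0.5, n)

def paths(y, *xs):
    """OLS estimates of the path coefficients from each predictor to y."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print("-> positive emotion:", paths(pos_emotion, physical, core, employee))
print("-> negative emotion:", paths(neg_emotion, physical, core, employee))
print("-> attitude:", paths(attitude, pos_emotion, neg_emotion))
```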

Implementation of Intelligent Virtual Character Based on Reinforcement Learning and Emotion Model (강화학습과 감정모델 기반의 지능적인 가상 캐릭터의 구현)

  • Woo Jong-Ha;Park Jung-Eun;Oh Kyung-Whan
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.3
    • /
    • pp.259-265
    • /
    • 2006
  • Learning and emotion are essential components of intelligent robots. In this paper, we implement an intelligent virtual character based on reinforcement learning that interacts with the user and has an internal emotion model. The virtual character acts autonomously in a 3D virtual environment according to its internal state, and the user can teach it specific behaviors through repeated directions. Mouse gestures, recognized with an artificial neural network, are used to perceive these directions. An Emotion-Mood-Personality model is proposed to express emotions, and we examine how emotion and learned behaviors change as the virtual character interacts with the user.
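
The abstract gives no implementation detail, so the following is only a minimal sketch of the learning side: a tabular Q-learning agent whose reward comes from repeated user feedback and whose internal "joy" value is nudged by that reward. States, actions, and the emotion dynamics are hypothetical; the paper's Emotion-Mood-Personality model and neural-network gesture recognizer are not reproduced here.

```python
# Minimal Q-learning sketch for a virtual character with a toy emotion state
# (illustrative assumption, not the authors' implementation).
import random
from collections import defaultdict

ACTIONS = ["approach", "play", "rest", "avoid"]

class VirtualCharacter:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)                 # Q[(state, action)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.joy = 0.5                              # toy internal emotion value

    def choose(self, state):
        if random.random() < self.epsilon:          # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # Standard Q-learning update; user approval/disapproval is the reward.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)]
        )
        # Reward also nudges the emotion value (far simpler than the paper's
        # Emotion-Mood-Personality dynamics).
        self.joy = min(1.0, max(0.0, self.joy + 0.1 * reward))

agent = VirtualCharacter()
for _ in range(200):
    state = "gesture_circle"                        # hypothetical perceived gesture
    action = agent.choose(state)
    reward = 1.0 if action == "play" else -0.1      # user repeatedly rewards "play"
    agent.learn(state, action, reward, state)
print(agent.choose("gesture_circle"), round(agent.joy, 2))
```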

Sensory Engineering Model in Search of Emotion-Specific Physiology -An Introduction and Proposal (정서특정적 생리의 탐색을 모색하는 감성공학의 패러다임과 실천방법)

  • 우제린
    • Science of Emotion and Sensibility
    • /
    • v.4 no.2
    • /
    • pp.1-13
    • /
    • 2001
  • Emotion-Specific Physiology may still remain an elusive entity even to many of its proponents and seekers, but an ever-growing body of experimental evidence offers much brighter prospects for future research in that direction. Once such Emotion-Physiology pairs are identified, there is high hope that some Sense-Friendly Features that are causally related, or highly correlated, to each pair may be identifiable in nature or in man-made objects. On the premise that certain emotions, if and when engendered by a consumer good, may be conducive to an urge “to own or to identify oneself with the product”, presented here is a model of Sensory Engineering oriented objectively toward identifying the Emotion-Specific Physiology in order to have the Sense-Friendly Features reproduced in product designs. Relevant and complementary concepts and suggested procedures for implementing the proposed model are offered.

Convolutional Neural Network Model Using Data Augmentation for Emotion AI-based Recommendation Systems

  • Ho-yeon Park;Kyoung-jae Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.12
    • /
    • pp.57-66
    • /
    • 2023
  • In this study, we propose a novel research framework for recommendation systems that can estimate the user's emotional state and reflect it in the recommendation process by applying deep learning techniques and emotion AI (artificial intelligence). To this end, we build an emotion classification model that classifies seven emotions (angry, disgust, fear, happy, sad, surprise, and neutral) and propose a model that reflects the result in the recommendation process. However, in typical emotion classification data the class distribution is highly imbalanced, so generalized classification results are difficult to obtain. Since classes such as disgust are often underrepresented in emotion image data, we correct for this imbalance through data augmentation. Finally, we propose a method to reflect the augmentation-based emotion prediction model in recommendation systems.
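
As a concrete (assumed) illustration of balancing under-represented classes such as disgust with augmentation, the sketch below combines standard torchvision image augmentation with a class-weighted sampler; the directory layout, transforms, and training setup are not taken from the paper.

```python
# Class-balanced augmentation sketch for a 7-class facial emotion dataset
# (illustrative assumptions; not the authors' exact pipeline).
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

train_tfms = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),                  # mild geometric augmentation
    transforms.ToTensor(),
])

# Assumed layout: one sub-directory per label
# (angry, disgust, fear, happy, sad, surprise, neutral).
train_set = datasets.ImageFolder("emotion_images/train", transform=train_tfms)

# Oversample rare classes so each label is drawn with roughly equal probability.
counts = torch.bincount(torch.tensor(train_set.targets))
weights = (1.0 / counts.float())[train_set.targets]
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)

loader = DataLoader(train_set, batch_size=64, sampler=sampler)
```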

Personalized Service Based on Context Awareness through User Emotional Perception in Mobile Environment (모바일 환경에서의 상황인식 기반 사용자 감성인지를 통한 개인화 서비스)

  • Kwon, Il-Kyoung;Lee, Sang-Yong
    • Journal of Digital Convergence
    • /
    • v.10 no.2
    • /
    • pp.287-292
    • /
    • 2012
  • In this paper, we study location-based sensing data preprocessing and emotion data preprocessing techniques for building and preprocessing users' emotion data in a valence-arousal (V-A) emotion model, which is required to support personalized services through emotion perception. For this purpose, a granular context tree and string-matching-based emotion pattern matching are used. In addition, a context-aware and personalized recommendation technique using probabilistic reasoning is studied for personalized services based on context awareness.
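
The granular context tree and the exact pattern encoding are not described in the abstract; the sketch below only illustrates the general shape of the approach, mapping a sensed point on the valence-arousal plane to a coarse emotion label and matching the resulting context string against stored patterns. All labels, patterns, and services are made-up examples.

```python
# Toy V-A quadrant labeling plus string-based pattern matching
# (assumed illustration, not the paper's method).
def va_to_emotion(valence, arousal):
    """Map a point in [-1, 1] x [-1, 1] to a coarse V-A quadrant label."""
    if valence >= 0:
        return "excited" if arousal >= 0 else "relaxed"
    return "distressed" if arousal >= 0 else "depressed"

def match_pattern(observed, stored_patterns):
    """Return the service mapped to the longest stored pattern that occurs
    in the observed context string, or None if nothing matches."""
    hits = [(p, s) for p, s in stored_patterns.items() if p in observed]
    return max(hits, key=lambda ps: len(ps[0]))[1] if hits else None

patterns = {
    "home-evening-relaxed": "recommend_music",
    "office-morning-distressed": "recommend_break",
}
context = "home-evening-" + va_to_emotion(0.6, -0.4)
print(match_pattern(context, patterns))             # -> recommend_music
```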

Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.2373-2378
    • /
    • 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: image processing, facial feature extraction, and emotion detection. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and histogram analysis. In the facial feature extraction stage, features for emotion detection are extracted from the facial components. In the emotion detection stage, a fuzzy classifier recognizes emotion from the extracted features. Experimental results show that the proposed algorithm detects emotion well.
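
As an assumed, very small-scale illustration of the final stage (a fuzzy classifier over extracted facial features), the sketch below scores a few emotions with triangular membership functions on two normalized features; the paper's fuzzy color filter, virtual face model, and actual rule base are not shown.

```python
# Toy fuzzy-style emotion scoring over hand-picked facial features
# (membership functions and rules are illustrative assumptions).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    return float(np.clip(min((x - a) / (b - a + 1e-9),
                             (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def classify(mouth_open, brow_raise):
    """Features are assumed to be normalized to [0, 1]."""
    scores = {
        "happy":    min(tri(mouth_open, 0.4, 0.7, 1.0), tri(brow_raise, 0.0, 0.3, 0.6)),
        "surprise": min(tri(mouth_open, 0.5, 0.9, 1.0), tri(brow_raise, 0.5, 0.9, 1.0)),
        "neutral":  min(tri(mouth_open, 0.0, 0.1, 0.4), tri(brow_raise, 0.0, 0.2, 0.5)),
    }
    return max(scores, key=scores.get)

print(classify(mouth_open=0.8, brow_raise=0.85))    # -> surprise
```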

Design of Reactive Emotion Process for the Service Robot (서비스 로봇을 위한 리액티브 감정 생성 모델)

  • Kim, Hyoung-Rock;Kim, Young-Min;Park, Jong-Chan;Park, Kyung-Sook;Kang, Tae-Woon;Kwon, Dong-Soo
    • The Journal of Korea Robotics Society
    • /
    • v.2 no.2
    • /
    • pp.119-128
    • /
    • 2007
  • Emotional interaction between humans and robots is an important element of natural interaction, especially for service robots. We propose a hybrid emotion generation architecture and a detailed design of its reactive process based on insights about the human emotion system. Reactive emotion generation aims to increase the task performance and believability of the service robot. Experimental results suggest that the reactive process can serve these purposes and that reciprocal interaction between the different layers is important for the proper functioning of the robot's emotion generation system.
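
A minimal sketch of what a reactive emotion process can look like, assuming a simple stimulus-to-emotion rule table and exponential decay; the paper's hybrid architecture and the interaction between its layers are not reproduced.

```python
# Reactive emotion update loop (assumed illustration): decay all intensities,
# then bump the emotion triggered by the current stimulus, if any.
EMOTIONS = {"joy": 0.0, "fear": 0.0, "surprise": 0.0}
REACTIVE_RULES = {                   # stimulus -> (emotion, intensity bump)
    "user_smiles": ("joy", 0.4),
    "loud_noise": ("surprise", 0.6),
    "obstacle_close": ("fear", 0.5),
}

def react(stimulus, decay=0.9):
    for name in EMOTIONS:
        EMOTIONS[name] *= decay
    if stimulus in REACTIVE_RULES:
        emotion, bump = REACTIVE_RULES[stimulus]
        EMOTIONS[emotion] = min(1.0, EMOTIONS[emotion] + bump)

for event in ["user_smiles", "none", "loud_noise"]:
    react(event)
print({name: round(value, 3) for name, value in EMOTIONS.items()})
```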

Towards a Pedestrian Emotion Model for Navigation Support (내비게이션 지원을 목적으로 한 보행자 감성모델의 구축)

  • Kim, Don-Han
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.197-206
    • /
    • 2010
  • For the implementation of an emotion retrieval system that supports pedestrian navigation, coordinating the pedestrian emotion model with the system user's emotion is considered a key component. This study proposes a new method for capturing the user's model corresponding to the pedestrian emotion model and examines the validity of the method. In the first phase, a database comprising a set of interior images representing hypothetical destinations was developed. In the second phase, 10 subjects were recruited and asked to evaluate navigation and satisfaction with each interior image over five rounds of navigation experiments. In the last phase, the subjects' feedback data were used to update the pedestrian emotion model, a process called 'learning' in this study. After the subjects' evaluations, the learning effect was analyzed in terms of recall ratio, precision ratio, retrieval ranking, and satisfaction. The analysis verifies that all four aspects improved significantly after learning. This study demonstrates the effectiveness of the learning algorithm for the proposed pedestrian emotion model and its potential for application in the development of various mobile content service systems dealing with visual images, such as commercial interiors.
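
The evaluation relies on standard retrieval measures; as a small worked example of two of them (precision ratio and recall ratio) before and after learning, the snippet below uses made-up image IDs.

```python
# Precision/recall for one retrieval query (image IDs are hypothetical).
def precision_recall(retrieved, relevant):
    hit = len(set(retrieved) & set(relevant))
    precision = hit / len(retrieved) if retrieved else 0.0
    recall = hit / len(relevant) if relevant else 0.0
    return precision, recall

retrieved_before = ["img03", "img17", "img42", "img08"]
retrieved_after  = ["img17", "img42", "img21", "img05"]
relevant         = ["img17", "img42", "img21"]

print(precision_recall(retrieved_before, relevant))  # -> (0.5, 0.666...)
print(precision_recall(retrieved_after, relevant))   # -> (0.75, 1.0)
```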

Jointly Image Topic and Emotion Detection using Multi-Modal Hierarchical Latent Dirichlet Allocation

  • Ding, Wanying;Zhu, Junhuan;Guo, Lifan;Hu, Xiaohua;Luo, Jiebo;Wang, Haohong
    • Journal of Multimedia Information System
    • /
    • v.1 no.1
    • /
    • pp.55-67
    • /
    • 2014
  • Image topic and emotion analysis is an important component of online image retrieval, which has become very popular in the rapidly growing social media community. However, due to the gaps between images and texts, there is very limited work in the literature that detects an image's topics and emotions in a unified framework, although topics and emotions are two levels of semantics that often work together to comprehensively describe an image. In this work, we propose a unified model, the Joint Topic/Emotion Multi-Modal Hierarchical Latent Dirichlet Allocation (JTE-MMHLDA) model, which extends the LDA, mmLDA, and JST models to capture topic and emotion information from heterogeneous data at the same time. Specifically, a two-level graphical model is built to share topics and emotions across the whole document collection. Experimental results on a Flickr dataset indicate that the proposed model efficiently discovers images' topics and emotions; it significantly outperforms the text-only system by 4.4% and the vision-only system by 18.1% in topic detection, and outperforms the text-only system by 7.1% and the vision-only system by 39.7% in emotion detection.
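
The full JTE-MMHLDA model is multi-modal and hierarchical; the sketch below only illustrates the core "two latent labels per token" idea with a drastically simplified, text-only collapsed Gibbs sampler in which every word receives both a topic and an emotion assignment. Corpus, hyperparameters, and the joint count structure are assumptions for illustration.

```python
# Simplified joint topic/emotion Gibbs sampling sketch (text only; not the
# paper's multi-modal hierarchical model).
import numpy as np

docs = [["sunset", "beach", "calm"], ["crowd", "noise", "angry"]]   # toy corpus
vocab = sorted({w for d in docs for w in d})
w_id = {w: i for i, w in enumerate(vocab)}
K, E, V = 2, 2, len(vocab)                    # topics, emotions, vocabulary size
alpha, gamma, beta = 0.5, 0.5, 0.1            # Dirichlet hyperparameters

rng = np.random.default_rng(0)
ndk = np.zeros((len(docs), K))                # document-topic counts
nde = np.zeros((len(docs), E))                # document-emotion counts
nkew = np.zeros((K, E, V))                    # (topic, emotion)-word counts
nke = np.zeros((K, E))
assign = []
for d, doc in enumerate(docs):                # random initialization
    for w in doc:
        k, e = int(rng.integers(K)), int(rng.integers(E))
        assign.append((d, w_id[w], k, e))
        ndk[d, k] += 1; nde[d, e] += 1; nkew[k, e, w_id[w]] += 1; nke[k, e] += 1

for _ in range(50):                           # collapsed Gibbs sweeps
    for i, (d, v, k, e) in enumerate(assign):
        ndk[d, k] -= 1; nde[d, e] -= 1; nkew[k, e, v] -= 1; nke[k, e] -= 1
        p = ((ndk[d][:, None] + alpha) * (nde[d][None, :] + gamma)
             * (nkew[:, :, v] + beta) / (nke + V * beta))
        flat = int(rng.choice(K * E, p=(p / p.sum()).ravel()))
        k, e = divmod(flat, E)
        assign[i] = (d, v, k, e)
        ndk[d, k] += 1; nde[d, e] += 1; nkew[k, e, v] += 1; nke[k, e] += 1

print("word counts per (topic, emotion):")
print(nkew)
```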