• Title/Summary/Keyword: Emotion Model


The Influence of Parental Meta-Emotion Philosophy on Children's Social Competence: The Mediating Effect of Children's Emotion Regulation (부모상위정서철학이 학령기 아동의 사회적 유능성에 미치는 영향: 아동의 정서조절능력의 매개효과 검증)

  • Won, Sookyeon;Song, Hana
    • Korean Journal of Child Studies
    • /
    • v.36 no.2
    • /
    • pp.167-182
    • /
    • 2015
  • This study built a structural model of the influence of paternal and maternal meta-emotion philosophy and children's emotion regulation on children's social competence, and examined the relationships among these variables. Data were collected from 363 children in the 5th and 6th grades of elementary schools located in Seoul. The main results were as follows. First, both paternal and maternal meta-emotion philosophy influenced children's emotion regulation and emotion dysregulation. Second, neither paternal nor maternal meta-emotion philosophy had a significant direct effect on children's social competence. Instead, a complete mediation effect of emotion regulation on the relationship between parental meta-emotion philosophy and children's social competence was confirmed: parental meta-emotion philosophy influenced children's social competence indirectly, through children's emotion regulation, during middle childhood.
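
The complete-mediation result reported above can be illustrated with a simple regression-based mediation check. The sketch below is not the authors' analysis (they used structural equation modeling); it is a minimal Baron-and-Kenny-style illustration in Python with statsmodels, and the column names (meta_emotion, emotion_reg, social_comp) are hypothetical.

```python
# Minimal mediation sketch (not the authors' SEM analysis).
# Assumes a DataFrame with hypothetical columns:
#   meta_emotion  - parental meta-emotion philosophy score
#   emotion_reg   - child's emotion regulation score
#   social_comp   - child's social competence score
import pandas as pd
import statsmodels.formula.api as smf

def mediation_check(df: pd.DataFrame) -> None:
    # Path c: total effect of the predictor on the outcome
    total = smf.ols("social_comp ~ meta_emotion", data=df).fit()
    # Path a: predictor -> mediator
    path_a = smf.ols("emotion_reg ~ meta_emotion", data=df).fit()
    # Paths b and c': mediator and predictor -> outcome
    full = smf.ols("social_comp ~ meta_emotion + emotion_reg", data=df).fit()

    c = total.params["meta_emotion"]        # total effect
    c_prime = full.params["meta_emotion"]   # direct effect
    a = path_a.params["meta_emotion"]
    b = full.params["emotion_reg"]

    print(f"total effect c      = {c:.3f}")
    print(f"direct effect c'    = {c_prime:.3f}")
    print(f"indirect effect a*b = {a * b:.3f}")
    # "Complete mediation" corresponds to c' being non-significant while
    # the indirect effect a*b remains significant (significance would
    # normally be tested with bootstrapping).
```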

Emotional Model via Human Psychological Test and Its Application to Image Retrieval (인간심리를 이용한 감성 모델과 영상검색에의 적용)

  • Yoo, Hun-Woo;Jang, Dong-Sik
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.31 no.1
    • /
    • pp.68-78
    • /
    • 2005
  • This paper proposes a new emotion-based image retrieval method, motivated by Soen's evaluation of human emotional responses to color patterns. Thirteen adjective pairs expressing emotions, such as like-dislike, beautiful-ugly, natural-unnatural, dynamic-static, warm-cold, gay-sober, cheerful-dismal, unstable-stable, light-dark, strong-weak, gaudy-plain, hard-soft, and heavy-light, are modeled offline by a 19-dimensional color array and a 4×3 gray matrix. When a query is presented in text form, emotion-model-based query formulation produces the associated color array and gray matrix. Images related to the query are then retrieved from the database based on the product of the color array and gray matrix extracted from the query and from each database image. Experiments on 450 images showed an average retrieval rate of 0.61 using the color array alone and 0.47 using the gray matrix alone.
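
The retrieval step amounts to scoring each database image by how well its emotion features match those derived from the text query. The sketch below is a loose illustration, not the paper's exact scoring function: it assumes a 19-dimensional color array and a 4×3 gray matrix per image (hypothetical arrays) and ranks images by an inner-product-style match score.

```python
# Loose sketch of emotion-model-based retrieval ranking
# (not the paper's exact matching formula).
import numpy as np

def match_score(query_color: np.ndarray, query_gray: np.ndarray,
                img_color: np.ndarray, img_gray: np.ndarray) -> float:
    """Element-wise product of query and image features, summed.

    query_color, img_color : shape (19,)   emotion color arrays
    query_gray,  img_gray  : shape (4, 3)  gray matrices
    """
    color_term = float(np.sum(query_color * img_color))
    gray_term = float(np.sum(query_gray * img_gray))
    return color_term + gray_term

def retrieve(query_color, query_gray, database, top_k=10):
    """database: list of (image_id, color_array, gray_matrix) tuples."""
    scored = [(img_id, match_score(query_color, query_gray, c, g))
              for img_id, c, g in database]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```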

Proposal of 2D Mood Model for Human-like Behaviors of Robot (로봇의 인간과 유사한 행동을 위한 2차원 무드 모델 제안)

  • Kim, Won-Hwa;Park, Jeong-Woo;Kim, Woo-Hyun;Lee, Won-Hyong;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.5 no.3
    • /
    • pp.224-230
    • /
    • 2010
  • As robots are no longer merely laborers on industrial floors but are stepping into humans' daily lives, interaction and communication between humans and robots are becoming essential. For such social interaction, a robot needs to generate emotion, which is the result of a very complicated process. In psychology, mood has been considered a factor that affects emotion generation; it is similar to emotion but not the same. In this paper, mood factors for a robot, reflecting both the internal state of the robot and its surrounding circumstances, are listed, selected, and finally used as the elements defining a two-dimensional mood space. An architecture that combines the proposed mood model with an emotion generation module is presented at the end.
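
A two-dimensional mood space of this kind is often realized as a point that drifts in response to weighted internal and external factors. The sketch below is a hypothetical illustration of that idea, not the paper's actual model; the factor names and weights are invented for the example.

```python
# Hypothetical sketch of a 2D mood state updated from internal/external factors
# (illustrative only; not the paper's actual mood model).
from dataclasses import dataclass

@dataclass
class Mood2D:
    # The two mood axes, e.g. valence (pleasant-unpleasant) and arousal.
    valence: float = 0.0
    arousal: float = 0.0
    decay: float = 0.95  # mood drifts back toward neutral over time

    def update(self, factors: dict[str, float],
               weights: dict[str, tuple[float, float]]) -> None:
        """Blend weighted factor values into the mood point.

        factors : factor name -> current value in [-1, 1]
                  (e.g. battery level, task success, user proximity)
        weights : factor name -> (valence weight, arousal weight)
        """
        self.valence *= self.decay
        self.arousal *= self.decay
        for name, value in factors.items():
            wv, wa = weights.get(name, (0.0, 0.0))
            self.valence += wv * value
            self.arousal += wa * value
        # Keep the mood point inside the unit square.
        self.valence = max(-1.0, min(1.0, self.valence))
        self.arousal = max(-1.0, min(1.0, self.arousal))

# Example: a drained battery lowers valence, while a nearby user raises arousal.
mood = Mood2D()
mood.update(factors={"battery": -0.8, "user_nearby": 1.0},
            weights={"battery": (0.3, 0.2), "user_nearby": (0.1, 0.2)})
print(mood.valence, mood.arousal)
```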

Research on GUI(Graphic User Interaction) factors of touch phone by two dimensional emotion model for Grooming users (Grooming 사용자의 2차원 감성 모델링에 의한 터치폰의 GUI 요소에 대한 연구)

  • Kim, Ji-Hye;Hwang, Min-Cheol;Kim, Jong-Hwa;U, Jin-Cheol;Kim, Chi-Jung;Kim, Yong-U;Park, Yeong-Chung;Jeong, Gwang-Mo
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2009.05a
    • /
    • pp.55-58
    • /
    • 2009
  • This study aims to define users' subjective emotions objectively and to present design guidelines for the GUI design elements of a touch phone based on a two-dimensional emotion model. The study proceeded in the following stages. First, the lifestyles of Grooming users were surveyed, and emotional elements at the three levels based on Norman (2002), sensory, behavioral, and symbolic, were extracted. Second, an emotion model was constructed by surveying the relationships between Russell's (1980) 28 emotion words and the three levels of emotion. Finally, representative emotion words were derived using factor analysis, and GUI (Graphic User Interaction) design elements for an emotion-oriented touch phone were presented, providing guidelines for human-centered product design that reflects users' emotions. A small factor-analysis sketch follows this entry.

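Deriving representative emotion words from survey ratings of Russell's 28 emotion terms is typically done with an exploratory factor analysis. The sketch below is a generic illustration in Python using scikit-learn, not the authors' procedure; the ratings matrix and word list are placeholders.

```python
# Generic factor-analysis sketch for grouping emotion words
# (illustrative; not the authors' exact analysis).
import numpy as np
from sklearn.decomposition import FactorAnalysis

# ratings: respondents x 28 emotion-word scores (placeholder data)
ratings = np.random.default_rng(0).normal(size=(120, 28))
emotion_words = [f"word_{i}" for i in range(28)]  # placeholder labels

# Extract two factors, matching a two-dimensional emotion model.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(ratings)
loadings = fa.components_.T  # shape (28 words, 2 factors)

# A word "represents" the factor on which it loads most strongly.
for factor in range(2):
    idx = np.argsort(-np.abs(loadings[:, factor]))[:5]
    top = [emotion_words[i] for i in idx]
    print(f"factor {factor}: {top}")
```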

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.3
    • /
    • pp.148-154
    • /
    • 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among them, Speech Emotion Recognition (SER) recognizes a speaker's emotions from speech information, and its success depends on selecting distinctive features and classifying them appropriately. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% across five emotions (anger, happiness, calm, fear, and sadness) for both men and women. In addition, examining the distribution of recognition accuracies across neural network models suggests that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
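
The core pipeline, extracting MFCCs from a speech clip and feeding them to a small 2D-CNN classifier, can be sketched as below. This is not the paper's architecture or hyperparameters; it is a minimal PyTorch illustration, and the MFCC settings, layer sizes, and eight-class output are assumptions.

```python
# Minimal MFCC + 2D-CNN sketch for speech emotion classification
# (assumed settings; not the paper's architecture or hyperparameters).
import librosa
import torch
import torch.nn as nn

def mfcc_features(wav_path: str, n_mfcc: int = 40) -> torch.Tensor:
    """Load a clip and return MFCCs shaped (1, 1, n_mfcc, frames)."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return torch.from_numpy(mfcc).float().unsqueeze(0).unsqueeze(0)

class EmotionCNN(nn.Module):
    def __init__(self, n_classes: int = 8):  # RAVDESS labels 8 emotions
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Usage (hypothetical file): logits = EmotionCNN()(mfcc_features("clip.wav"))
```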

The Structural Relationship between Physical Surroundings, Employee Service, Customer Emotion, and Service Loyalty -A Focus on Upscale Restaurants- (업스케일 레스토랑의 물리적 환경과 인적 서비스, 고객의 감정적 반응 및 서비스 충성도간의 구조적 관계)

  • Kim, Ju-Yeon;Lee, Young-Nam
    • Journal of the East Asian Society of Dietary Life
    • /
    • v.17 no.5
    • /
    • pp.753-763
    • /
    • 2007
  • While the cognitive aspects of customer behavior have long been the main subject of research, some researchers are now focusing on the emotional aspects. The influence of emotion on attitude and judgment is widely accepted, and most emotion studies have focused on physical surroundings and emotional responses based on Mehrabian and Russell's 1974 model. This study aimed to expand the scope of that model by including employee service. We examined the structural relationships among the physical surroundings and employee service of upscale restaurants, customers' emotional responses, and service loyalty. Physical surroundings and employee service were treated as single factors, and four emotional-response factors were composed: positive emotion, negative emotion, positive arousal, and negative arousal. Physical surroundings affected 'positive emotion' and 'positive arousal', while employee service influenced 'negative emotion' and 'negative arousal' as well as 'positive emotion'. In turn, 'positive emotion' and 'positive arousal' influenced service loyalty. Lastly, physical surroundings and employee service were correlated. A small path-model sketch follows this entry.

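The structural relationships described above can be expressed as a path model. The specification below is a hedged sketch using the semopy package's lavaan-style syntax, not the authors' actual model or measurement items; the variable names are hypothetical composite scores.

```python
# Hedged path-model sketch of the reported structural relationships
# (hypothetical composite scores; not the authors' measurement model).
import pandas as pd
import semopy

# Regressions follow the relationships reported in the abstract;
# "~~" lets physical surroundings and employee service covary.
MODEL_DESC = """
pos_emotion ~ physical_surroundings + employee_service
neg_emotion ~ employee_service
pos_arousal ~ physical_surroundings
neg_arousal ~ employee_service
service_loyalty ~ pos_emotion + pos_arousal
physical_surroundings ~~ employee_service
"""

def fit_path_model(df: pd.DataFrame):
    """df: one composite score per construct per respondent (hypothetical)."""
    model = semopy.Model(MODEL_DESC)
    model.fit(df)
    return model.inspect()  # table of parameter estimates
```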

A Study on Emotion Recognition Systems based on the Probabilistic Relational Model Between Facial Expressions and Physiological Responses (생리적 내재반응 및 얼굴표정 간 확률 관계 모델 기반의 감정인식 시스템에 관한 연구)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.6
    • /
    • pp.513-519
    • /
    • 2013
  • Current vision-based approaches to emotion recognition, such as facial expression analysis, have many technical limitations in real circumstances and are not suitable for applications that rely on them alone in practical environments. In this paper, we propose an emotion recognition approach that combines extrinsic representations and intrinsic activities among the natural responses of humans who are given specific stimuli to induce emotional states. The intrinsic activities can be used to compensate for the uncertainty of the extrinsic representations of emotional states. This combination is done using PRMs (Probabilistic Relational Models), an extended form of Bayesian networks, which are learned with greedy-search and expectation-maximization algorithms. Facial-expression-based extrinsic emotion features and physiological-signal-based intrinsic emotion features from previous research are combined as attributes of the PRMs in the emotion recognition domain. Maximum likelihood estimation with the given dependency structure and estimated parameters is used to classify the label of the target emotional state.
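
A full PRM with structure learning is beyond a short example, but the underlying idea, fusing uncertain evidence from facial expressions with physiological signals through a probabilistic model, can be illustrated with a much simpler naive-Bayes-style combination. The sketch below is a deliberate simplification, not the paper's PRM; the likelihood values are hypothetical.

```python
# Simplified probabilistic fusion of facial and physiological evidence
# (naive-Bayes-style stand-in for the paper's PRM; numbers are hypothetical).
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def classify(face_likelihood: np.ndarray,
             physio_likelihood: np.ndarray,
             prior: np.ndarray) -> str:
    """Pick the emotion maximizing P(e) * P(face | e) * P(physio | e).

    Each array has one entry per emotion in EMOTIONS, assuming the two
    observation channels are conditionally independent given the emotion
    (the PRM in the paper models richer dependencies).
    """
    posterior = prior * face_likelihood * physio_likelihood
    posterior /= posterior.sum()
    return EMOTIONS[int(np.argmax(posterior))]

# Hypothetical example: the face looks ambiguous between happy and neutral,
# but the physiological channel points more strongly to happy.
prior = np.full(4, 0.25)
face = np.array([0.40, 0.05, 0.05, 0.50])
physio = np.array([0.60, 0.10, 0.10, 0.20])
print(classify(face, physio, prior))  # -> "happy"
```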

Transformer-based transfer learning and multi-task learning for improving the performance of speech emotion recognition (음성감정인식 성능 향상을 위한 트랜스포머 기반 전이학습 및 다중작업학습)

  • Park, Sunchan;Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.5
    • /
    • pp.515-522
    • /
    • 2021
  • It is hard to prepare sufficient training data for speech emotion recognition because of the difficulty of emotion labeling. In this paper, we apply transfer learning to a transformer-based model using large-scale speech recognition training data to improve speech emotion recognition performance. In addition, we propose a method that exploits context information without decoding, through multi-task learning with speech recognition. In speech emotion recognition experiments on the IEMOCAP dataset, our model achieves a weighted accuracy of 70.6% and an unweighted accuracy of 71.6%, which shows that the proposed method is effective in improving speech emotion recognition performance.
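
The multi-task idea, one shared speech encoder with an emotion classification head and an auxiliary speech recognition head trained jointly, can be sketched as follows. This is a generic PyTorch illustration under assumed dimensions and loss weighting, not the paper's model or training setup.

```python
# Generic multi-task sketch: shared encoder + emotion and ASR heads
# (assumed dimensions and loss weighting; not the paper's model).
import torch
import torch.nn as nn

class MultiTaskSER(nn.Module):
    def __init__(self, feat_dim=80, d_model=256, n_emotions=4, vocab_size=32):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.emotion_head = nn.Linear(d_model, n_emotions)  # utterance level
        self.asr_head = nn.Linear(d_model, vocab_size)      # frame level (CTC)

    def forward(self, feats: torch.Tensor):
        """feats: (batch, frames, feat_dim) acoustic features."""
        h = self.encoder(self.proj(feats))
        emotion_logits = self.emotion_head(h.mean(dim=1))   # pooled over time
        asr_logits = self.asr_head(h)                       # per frame
        return emotion_logits, asr_logits

# Joint loss: emotion cross-entropy plus a weighted auxiliary ASR term, e.g.
#   loss = ce(emotion_logits, emotion_labels) + 0.3 * ctc(asr_logits, ...)
```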

Multi-Dimensional Emotion Recognition Model of Counseling Chatbot (상담 챗봇의 다차원 감정 인식 모델)

  • Lim, Myung Jin;Yi, Moung Ho;Shin, Ju Hyun
    • Smart Media Journal
    • /
    • v.10 no.4
    • /
    • pp.21-27
    • /
    • 2021
  • Recently, the importance of counseling has been increasing due to the "Corona Blue" caused by COVID-19. With the growth of non-face-to-face services, research on chatbots, which have changed the counseling medium, is also being actively conducted. In non-face-to-face counseling through a chatbot, accurately understanding the client's emotions is most important. However, since there is a limit to recognizing emotions from the client's sentences alone, the dimensional emotions embedded in those sentences must also be recognized for more accurate emotion recognition. Therefore, in this paper we propose a multi-dimensional emotion recognition model: after correcting the original data according to its characteristics, word vectors obtained by training a Word2Vec model and sentence-level VAD (Valence, Arousal, Dominance) values are learned with a deep learning algorithm. Comparing three deep learning models to verify the usefulness of the proposed approach, the attention-based model performed best, with an R-squared of 0.8484.
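
The proposed pipeline, word vectors pooled with attention and regressed onto three-dimensional VAD scores, evaluated with R-squared, can be sketched roughly as below. It is a minimal PyTorch illustration under assumed dimensions, not the authors' architecture.

```python
# Rough sketch: attention pooling over word vectors -> VAD regression
# (assumed dimensions; not the authors' architecture).
import torch
import torch.nn as nn

class AttentionVADRegressor(nn.Module):
    def __init__(self, embed_dim: int = 100):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)  # attention score per word
        self.out = nn.Linear(embed_dim, 3)    # valence, arousal, dominance

    def forward(self, word_vecs: torch.Tensor) -> torch.Tensor:
        """word_vecs: (batch, words, embed_dim) Word2Vec embeddings."""
        weights = torch.softmax(self.score(word_vecs), dim=1)  # (batch, words, 1)
        sentence_vec = (weights * word_vecs).sum(dim=1)        # weighted average
        return self.out(sentence_vec)                          # (batch, 3) VAD

# Training would minimize MSE against annotated VAD values and report
# R-squared on held-out sentences, e.g. sklearn.metrics.r2_score(y_true, y_pred).
```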

Implementing an Adaptive Neuro-Fuzzy Model for Emotion Prediction Based on Heart Rate Variability(HRV) (심박변이도를 이용한 적응적 뉴로 퍼지 감정예측 모형에 관한 연구)

  • Park, Sung Soo;Lee, Kun Chang
    • Journal of Digital Convergence
    • /
    • v.17 no.1
    • /
    • pp.239-247
    • /
    • 2019
  • Accurate emotion prediction is very important for patient-centered medical device development and for emotion-related fields of psychology. Although there have been many studies on emotion prediction, none has applied heart rate variability (HRV) together with a neuro-fuzzy approach. We propose ANFEP (Adaptive Neuro-Fuzzy system for Emotion Prediction) based on HRV. ANFEP builds its core functions on an ANFIS (Adaptive Neuro-Fuzzy Inference System), which integrates neural networks with fuzzy systems as a vehicle for training predictive models. To validate the proposed model, 50 participants took part in an experiment in which their heart rate variability was measured and used as input to the ANFEP model. The ANFEP model with STDRR and RMSSD as inputs and two membership functions per input variable showed the best results. Applying ANFEP to the HRV metrics proved significantly robust compared with benchmark methods such as linear regression, support vector regression, a neural network, and a random forest. The results show that reliable emotion prediction is possible with fewer inputs, and that a more accurate and reliable emotion recognition system should be developed.
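
The HRV inputs named above are standard time-domain statistics over RR intervals, and an ANFIS membership function is commonly a Gaussian bell. The sketch below computes those two inputs and one hypothetical membership function in Python; it is not an ANFIS implementation, and the parameter values are illustrative.

```python
# HRV inputs (SD of RR intervals, RMSSD) and a Gaussian membership function
# (illustrative values; not an ANFIS implementation).
import numpy as np

def hrv_features(rr_ms: np.ndarray) -> tuple[float, float]:
    """rr_ms: successive RR intervals in milliseconds."""
    stdrr = float(np.std(rr_ms, ddof=1))         # SD of RR intervals
    diffs = np.diff(rr_ms)
    rmssd = float(np.sqrt(np.mean(diffs ** 2)))  # root mean square of
    return stdrr, rmssd                          # successive differences

def gaussian_membership(x: float, center: float, sigma: float) -> float:
    """Degree of membership of x in a fuzzy set (ANFIS-style Gaussian)."""
    return float(np.exp(-((x - center) ** 2) / (2 * sigma ** 2)))

# Example with made-up RR intervals (ms); two membership functions per input
# would correspond to fuzzy sets such as "low" and "high" variability.
rr = np.array([812.0, 845.0, 790.0, 830.0, 860.0, 805.0])
stdrr, rmssd = hrv_features(rr)
low_rmssd = gaussian_membership(rmssd, center=20.0, sigma=10.0)
high_rmssd = gaussian_membership(rmssd, center=60.0, sigma=10.0)
print(stdrr, rmssd, low_rmssd, high_rmssd)
```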