• Title/Summary/Keyword: emotion engineering

Robot's Emotion Generation Model based on Generalized Context Input Variables with Personality and Familiarity (성격과 친밀도를 지닌 로봇의 일반화된 상황 입력에 기반한 감정 생성)

  • Kwon, Dong-Soo; Park, Jong-Chan; Kim, Young-Min; Kim, Hyoung-Rock; Song, Hyunsoo
    • IEMEK Journal of Embedded Systems and Applications / v.3 no.2 / pp.91-101 / 2008
  • For friendly interaction between humans and robots, emotional exchange has become increasingly important. Researchers working on emotion generation models have therefore tried to make a robot's emotional state more natural and to improve the usability of the model for robot designers. Varied emotion generation is also needed to increase the believability of the robot. In this paper we use a hybrid emotion generation architecture and define generalized context inputs for the emotion generation model so that designers can easily apply it to a robot. We also develop personality and loyalty models, grounded in psychology, for varied emotion generation. The robot's personality is implemented with the emotional stability trait of the Big Five, and loyalty consists of familiarity generation, expression, and learning procedures based on human-human social relationship theories such as balance theory and social exchange theory. We verify this emotion generation model by applying it to a 'user calling and scheduling' scenario. (A rough code sketch of this kind of model follows this entry.)

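The abstract above describes context inputs modulated by a Big-Five emotional-stability (personality) parameter and a familiarity value learned from interaction history. The following is a minimal sketch of that general idea, not the authors' implementation; every class, parameter, and update rule (ContextInput, EmotionGenerator, the gain and familiarity formulas) is a hypothetical assumption.

```python
from dataclasses import dataclass, field

@dataclass
class ContextInput:
    """A generalized context event, e.g. 'user greets robot' (hypothetical encoding)."""
    valence: float    # -1.0 (negative event) .. +1.0 (positive event)
    intensity: float  # 0.0 .. 1.0

@dataclass
class EmotionGenerator:
    emotional_stability: float = 0.5  # Big-Five trait in [0, 1]; lower -> stronger reactions
    familiarity: float = 0.0          # learned from interaction history, in [0, 1]
    emotions: dict = field(default_factory=lambda: {"joy": 0.0, "sadness": 0.0, "anger": 0.0})

    def update(self, ctx: ContextInput) -> dict:
        # Assumption: less emotionally stable personalities react more strongly.
        gain = ctx.intensity * (1.5 - self.emotional_stability)
        # Assumption loosely inspired by social exchange theory: familiarity amplifies
        # positive reactions and softens negative ones.
        if ctx.valence >= 0:
            self.emotions["joy"] += gain * ctx.valence * (1.0 + self.familiarity)
        else:
            self.emotions["sadness"] += gain * -ctx.valence * (1.0 - 0.5 * self.familiarity)
            self.emotions["anger"] += 0.5 * gain * -ctx.valence
        # Familiarity grows with positive exchanges and shrinks with negative ones.
        self.familiarity = min(1.0, max(0.0, self.familiarity + 0.05 * ctx.valence))
        # Clamp emotion intensities to [0, 1].
        for name in self.emotions:
            self.emotions[name] = min(1.0, max(0.0, self.emotions[name]))
        return dict(self.emotions)

if __name__ == "__main__":
    robot = EmotionGenerator(emotional_stability=0.3)
    print(robot.update(ContextInput(valence=0.8, intensity=0.9)))   # friendly greeting
    print(robot.update(ContextInput(valence=-0.6, intensity=0.7)))  # user cancels a schedule
```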

Transformer-based transfer learning and multi-task learning for improving the performance of speech emotion recognition (음성감정인식 성능 향상을 위한 트랜스포머 기반 전이학습 및 다중작업학습)

  • Park, Sunchan; Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.515-522 / 2021
  • It is hard to prepare sufficient training data for speech emotion recognition because emotion labeling is difficult. In this paper, we apply transfer learning from large-scale speech recognition training data to a transformer-based model to improve the performance of speech emotion recognition. In addition, we propose a method to utilize context information without decoding, via multi-task learning with speech recognition. In speech emotion recognition experiments on the IEMOCAP dataset, our model achieves a weighted accuracy of 70.6 % and an unweighted accuracy of 71.6 %, showing that the proposed method is effective in improving speech emotion recognition performance. (An illustrative multi-task sketch follows this entry.)
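
As a rough illustration of the multi-task idea above (a shared speech encoder with an emotion-classification head and an auxiliary speech-recognition head), here is a minimal PyTorch sketch. It is not the authors' model: the encoder size, head dimensions, and loss weighting are placeholder assumptions, and in practice the encoder would be initialized from a model pre-trained on large-scale speech recognition data.

```python
import torch
import torch.nn as nn

class MultiTaskSER(nn.Module):
    """Shared transformer encoder with two heads: emotion classification and CTC-based ASR."""
    def __init__(self, feat_dim=80, d_model=256, n_emotions=4, vocab_size=100):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)  # would be ASR pre-trained
        self.emotion_head = nn.Linear(d_model, n_emotions)         # utterance-level emotion
        self.asr_head = nn.Linear(d_model, vocab_size)             # frame-level tokens for CTC

    def forward(self, feats):
        h = self.encoder(self.proj(feats))             # (batch, time, d_model)
        emo_logits = self.emotion_head(h.mean(dim=1))  # pool over time for emotion
        asr_logits = self.asr_head(h)                  # keep time axis for CTC
        return emo_logits, asr_logits

# Joint loss: emotion cross-entropy plus an auxiliary CTC term (the 0.3 weight is an assumption).
model = MultiTaskSER()
feats = torch.randn(2, 120, 80)                  # dummy log-mel features
emo_labels = torch.tensor([1, 3])
emo_logits, asr_logits = model(feats)
ce = nn.CrossEntropyLoss()(emo_logits, emo_labels)
ctc = nn.CTCLoss()(asr_logits.log_softmax(-1).transpose(0, 1),  # (time, batch, vocab)
                   torch.randint(1, 100, (2, 20)),               # dummy transcript tokens
                   torch.full((2,), 120), torch.full((2,), 20))
loss = ce + 0.3 * ctc
loss.backward()
```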

Emotion Recognition by Hidden Markov Model at Driving Simulation (자동차 운행 시뮬레이션에서 Hidden Markov Model을 이용한 운전자 감성인식)

  • Park, H.H.; Song, S.H.; Ji, Y.K.; Huh, K.S.; Cho, D.I.; Park, J.H.
    • Proceedings of the Korean Society of Precision Engineering Conference / 2005.06a / pp.1958-1962 / 2005
  • A driver's emotion is a very important factor in safe driving. This paper classifies a driver's emotion into three major emotions that can occur while driving a car: surprise, joy, and tiredness. It evaluates a classifier based on Hidden Markov Models whose observation sequences are bio-signals. A 2-D emotional plane, with pleasure-displeasure and arousal-relaxation axes, is used to classify a person's general emotional state. The bio-signals used are Galvanic Skin Response (GSR) and Heart Rate Variability (HRV), which are easy to acquire and reliable. To evaluate the HMM system, several video clips were classified into the three major emotions. In driving simulations for each emotional situation, recognition rates of 67 % for surprise, 58 % for joy, and 52 % for tiredness were obtained. (A rough per-class HMM sketch follows this entry.)

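The classification scheme above (one HMM per emotion class over bio-signal observation sequences, choosing the class whose model scores highest) can be sketched with the hmmlearn library. This is only an illustration under simplified assumptions: the features standing in for GSR/HRV, the number of hidden states, and the training data are all placeholders.

```python
import numpy as np
from hmmlearn import hmm  # third-party library: pip install hmmlearn

rng = np.random.default_rng(0)
EMOTIONS = ["surprise", "joy", "tired"]

def dummy_sequences(offset, n_seq=20, length=50):
    """Stand-in for per-emotion training data: each frame is a 2-D vector (GSR, HRV)."""
    return [offset + rng.normal(size=(length, 2)) for _ in range(n_seq)]

# Train one Gaussian HMM per emotion on that emotion's observation sequences.
models = {}
for i, emo in enumerate(EMOTIONS):
    seqs = dummy_sequences(offset=i * 2.0)
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
    m.fit(X, lengths)
    models[emo] = m

# Classify a new bio-signal sequence by the model with the highest log-likelihood.
test_seq = dummy_sequences(offset=2.0, n_seq=1)[0]   # constructed to resemble "joy"
scores = {emo: m.score(test_seq) for emo, m in models.items()}
print(max(scores, key=scores.get), scores)
```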

Application and Analysis of Emotional Attributes using Crowdsourced Method for Hangul Font Recommendation System (한글 글꼴 추천시스템을 위한 크라우드 방식의 감성 속성 적용 및 분석)

  • Kim, Hyun-Young; Lim, Soon-Bum
    • Journal of Korea Multimedia Society / v.20 no.4 / pp.704-712 / 2017
  • With the development of digital content, various studies on content sensibility are under way, and emotional research on fonts is also being carried out in several fields. There is a need for the emotion of a font to harmonize with the sensibility of the text it renders. However, selecting a font with the right emotion is difficult in Korea because each of the more than 6,000 Hangul fonts carries its own emotion. In this paper, we analyzed emotional classification attributes and built a Hangul font recommendation system. We also verified the reliability and validity of the attributes themselves in order to apply them to Hangul fonts, and then tested whether general users can find a suitable font in a commercial font set through this emotion-based recommendation system. As a result, when users want to express the emotion of a sentence more visually, they can receive a recommendation for a Hangul font with the desired emotion, based on font emotion-attribute values collected through the crowdsourced method. (A toy similarity-based sketch follows this entry.)
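
One simple way to picture the recommendation step described above is to represent each font as a vector of crowdsourced emotion-attribute scores and return the fonts closest to the user's desired emotion vector. The sketch below uses cosine similarity; the attribute names, score scale, and font names are invented for illustration and are not the paper's actual data.

```python
import numpy as np

# Hypothetical crowdsourced emotion-attribute scores per font (scale 0..5).
ATTRIBUTES = ["soft", "formal", "playful", "modern"]
FONTS = {
    "FontA": np.array([4.2, 1.0, 3.8, 2.5]),
    "FontB": np.array([1.1, 4.5, 0.8, 3.9]),
    "FontC": np.array([3.0, 2.0, 4.4, 1.2]),
}

def recommend(query: np.ndarray, top_k: int = 2):
    """Rank fonts by cosine similarity between the query emotion vector and each font profile."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(FONTS, key=lambda f: cos(query, FONTS[f]), reverse=True)
    return ranked[:top_k]

# A user who wants a soft, playful feel for their sentence:
print(recommend(np.array([5.0, 0.5, 4.0, 1.0])))
```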

A Study on the Dataset of the Korean Multi-class Emotion Analysis in Radio Listeners' Messages (라디오 청취자 문자 사연을 활용한 한국어 다중 감정 분석용 데이터셋연구)

  • Lee, Jaeah; Park, Gooman
    • Journal of Broadcast Engineering / v.27 no.6 / pp.940-943 / 2022
  • This study analyzes a Korean dataset by performing sentence-level emotion analysis on radio listeners' text messages that we collected ourselves. Research on emotion analysis of Korean sentences is ongoing in Korea, but high accuracy is hard to achieve because of the linguistic characteristics of Korean. In addition, much research has addressed binary sentiment analysis, which only classifies text as positive or negative, while multi-class emotion analysis, which classifies text into three or more emotions, needs more study. It is therefore necessary to examine and analyze Korean datasets to increase the accuracy of multi-class emotion analysis for Korean. In this paper, we analyzed, through surveys and experiments, why emotion analysis of Korean is difficult, and proposed a method for creating a dataset that can improve accuracy and serve as a basis for emotion analysis of Korean sentences. (A minimal multi-class baseline sketch follows this entry.)
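
To make the multi-class setting concrete, here is a minimal baseline sketch: short Korean messages labeled with one of several emotions and classified with a character n-gram TF-IDF model. The example messages, label set, and model choice are illustrative assumptions, not the paper's dataset or method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative dataset: radio-style messages labeled with one of several emotions.
texts = [
    "오늘 합격 소식을 들었어요, 너무 기뻐요!",        # joy
    "강아지가 무지개다리를 건넜어요...",              # sadness
    "버스에서 지갑을 도둑맞았어요, 너무 화가 나요.",  # anger
    "내일 면접인데 잠이 안 와요.",                    # anxiety
]
labels = ["joy", "sadness", "anger", "anxiety"]

# Character n-grams sidestep Korean morphological analysis for this toy example.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 3)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["시험에 붙었다는 연락을 받아서 정말 행복해요"]))
```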

Research of Real-Time Emotion Recognition Interface Using Multiple Physiological Signals of EEG and ECG (뇌파 및 심전도 복합 생체신호를 이용한 실시간 감정인식 인터페이스 연구)

  • Shin, Dong-Min; Shin, Dong-Il; Shin, Dong-Kyoo
    • Journal of Korea Game Society / v.15 no.2 / pp.105-114 / 2015
  • We propose a real-time user interface that uses emotion recognition from physiological signals. To address the low accuracy of emotion recognition based on EEG (electroencephalogram) alone, we developed a physiological-signal-based emotion recognition system that combines the relative power spectrum values of the theta/alpha/beta/gamma EEG bands with an autonomic nervous system ratio derived from the ECG (electrocardiogram). We propose a data map and a weight modification algorithm to recognize six emotions: happiness, fear, sadness, joy, anger, and hatred. The data map stores user-specific probability values, and the algorithm updates the weight of each EEG channel to improve the accuracy of emotion recognition. Compared with using EEG data alone, the combined EEG/ECG bio-signal data improved accuracy by 23.77 %. The proposed high-accuracy interface system can serve as a useful interface for controlling game and smart spaces. (An illustrative feature-extraction sketch follows this entry.)
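
To illustrate the feature side of the system above, the sketch below computes relative band powers (theta/alpha/beta/gamma) from an EEG signal with a Welch power spectral density estimate and appends a simple ECG-derived variability ratio. The sampling rate, band edges, and the variability ratio are common textbook choices used here as assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def relative_band_powers(eeg: np.ndarray) -> dict:
    """Relative power of each EEG band, computed from a Welch PSD estimate."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
    total = psd[(freqs >= 4) & (freqs < 45)].sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

def autonomic_ratio(rr_intervals_ms: np.ndarray) -> float:
    """Crude heart-rate-variability ratio (illustrative only): short-term RR variability
    divided by overall RR variability."""
    diffs = np.diff(rr_intervals_ms)
    return float(np.std(diffs) / (np.std(rr_intervals_ms) + 1e-9))

# Dummy signals standing in for one analysis window of EEG and ECG-derived RR intervals.
rng = np.random.default_rng(1)
eeg = rng.normal(size=FS * 10)              # 10 s of one EEG channel
rr = 800 + rng.normal(scale=40, size=60)    # 60 RR intervals in milliseconds

features = list(relative_band_powers(eeg).values()) + [autonomic_ratio(rr)]
print(np.round(features, 3))  # 5-D feature vector that an emotion classifier could consume
```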