• Title/Summary/Keyword: 다중 감정 (multiple emotions)

Search results: 176

KE-T5-Based Text Emotion Classification in Korean Conversations (KE-T5 기반 한국어 대화 문장 감정 분류)

  • Lim, Yeongbeom;Kim, San;Jang, Jin Yea;Shin, Saim;Jung, Minyoung
    • Annual Conference on Human and Language Technology / 2021.10a / pp.496-497 / 2021
  • Emotion classification is an important key to distinguishing a person's way of thinking and patterns of behavior, and a variety of emotion-analysis studies have been conducted over the past decades. As one way to improve the quality and accuracy of emotion classification, using multi-labeled datasets instead of single-labeled ones has been proposed. In this paper, based on KE-T5, a T5 model trained on Korean and English corpora, we compare emotion classification performance on Korean utterance data with single labeling versus multi-labeling, and confirm that the multi-label dataset yields 23.3% higher accuracy than the single-label dataset.

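A minimal text-to-text sketch of the multi-label setup described above, assuming the public KETI-AIR/ke-t5-base checkpoint and a comma-separated label string as the target (the paper's actual training recipe and label format are not given in the abstract):

```python
# Multi-label emotion classification as seq2seq: the target is the label
# string itself, so single- and multi-label data share one training objective.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "KETI-AIR/ke-t5-base"  # public KE-T5 checkpoint (assumed base model)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

utterance = "emotion: 오늘은 기쁘면서도 아쉬운 하루였다"  # hypothetical prompt format
labels = "기쁨, 아쉬움"                                   # hypothetical multi-label target

inputs = tokenizer(utterance, return_tensors="pt")
targets = tokenizer(labels, return_tensors="pt").input_ids
loss = model(**inputs, labels=targets).loss  # cross-entropy over the label string
loss.backward()
```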

Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services / v.13 no.5 / pp.9-19 / 2012
  • Virtual humans used as HCI agents in digital contents express various emotions across modalities such as facial expression and body posture. However, few studies have considered combinations of such nonverbal modalities in emotion perception. To implement an emotional virtual human, a computational engine model has to consider how a combination of nonverbal modalities, such as facial expression and body posture, will be perceived by users. This paper analyzes the impact of nonverbal multimodal cues on the design of emotion-expressing virtual humans. First, the relative impacts of the different modalities are analyzed by exploring emotion recognition for each modality of the virtual human. An experiment then evaluates how congruent facial and postural expressions contribute to recognizing basic emotion categories, as well as the valence and activation dimensions. Measurements also cover the impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life. Experimental results show that congruence of the virtual human's facial and postural expressions facilitates the perception of emotion categories, that categorical recognition is influenced mainly by the facial modality, and that the postural modality is preferred when judging the level of the activation dimension. These results will be used in the implementation of an animation engine and behavior synchronization for emotion-expressing virtual humans.
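
A toy fusion sketch consistent with the reported findings (facial cues dominate category judgements, postural cues dominate activation); the weights, emotion set, and score format are hypothetical, not values from the paper:

```python
# Per-modality emotion scores fused with modality-specific weights.
EMOTIONS = ["joy", "sadness", "anger", "fear"]

def fuse_category(face: dict, posture: dict, w_face: float = 0.7) -> str:
    """Category judgement weights the facial modality more heavily."""
    combined = {e: w_face * face[e] + (1 - w_face) * posture[e] for e in EMOTIONS}
    return max(combined, key=combined.get)

def activation(face: dict, posture: dict, w_posture: float = 0.7) -> float:
    """Activation judgement weights the postural modality more heavily."""
    # Assumption: each modality's activation estimate is its top emotion score.
    return w_posture * max(posture.values()) + (1 - w_posture) * max(face.values())

face = {"joy": 0.8, "sadness": 0.05, "anger": 0.1, "fear": 0.05}
posture = {"joy": 0.6, "sadness": 0.1, "anger": 0.2, "fear": 0.1}  # congruent example
print(fuse_category(face, posture), activation(face, posture))
```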

Multi-Emotion Regression Model for Recognizing Inherent Emotions in Speech Data (음성 데이터의 내재된 감정인식을 위한 다중 감정 회귀 모델)

  • Moung Ho Yi;Myung Jin Lim;Ju Hyun Shin
    • Smart Media Journal / v.12 no.9 / pp.81-88 / 2023
  • Recently, online communication has been increasing with the spread of non-face-to-face services during COVID-19. In non-face-to-face situations, the other person's opinions and emotions are recognized through modalities such as text, speech, and images. Research on multimodal emotion recognition that combines these modalities is currently very active. Among the modalities, emotion recognition from speech data is attracting attention as a means of understanding emotion through both acoustic and linguistic information, but in most cases a single emotion is recognized from a single speech feature value. Because multiple emotions coexist in complex ways within a conversation, a method for recognizing multiple emotions is needed. Therefore, this paper proposes a multi-emotion regression model that preprocesses speech data, extracts feature vectors, and recognizes the complex, inherent emotions while taking the passage of time into account.
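
A rough sketch of a multi-emotion regression head over time-ordered speech features; the MFCC front end, the GRU, and the emotion set are assumptions for illustration, since the abstract does not specify the preprocessing or architecture:

```python
# Multi-emotion regression: independent sigmoid intensity per emotion,
# computed from a recurrent summary of frame-level features over time.
import librosa
import torch
import torch.nn as nn

EMOTIONS = ["joy", "sadness", "anger", "neutral"]  # assumed emotion set

class MultiEmotionRegressor(nn.Module):
    def __init__(self, n_mfcc: int = 40, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(n_mfcc, hidden, batch_first=True)  # models the passage of time
        self.head = nn.Linear(hidden, len(EMOTIONS))

    def forward(self, x):                        # x: (batch, frames, n_mfcc)
        _, h = self.rnn(x)
        return torch.sigmoid(self.head(h[-1]))   # independent intensity per emotion

wav, sr = librosa.load("utterance.wav", sr=16000)        # hypothetical input file
mfcc = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=40).T   # (frames, 40)
scores = MultiEmotionRegressor()(torch.tensor(mfcc).unsqueeze(0).float())
```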

Implementation of Multi Channel Network Platform based Augmented Reality Facial Emotion Sticker using Deep Learning (딥러닝을 이용한 증강현실 얼굴감정스티커 기반의 다중채널네트워크 플랫폼 구현)

  • Kim, Dae-Jin
    • Journal of Digital Contents Society / v.19 no.7 / pp.1349-1355 / 2018
  • Recently, a variety of content services over the internet have become popular; among them, MCN (Multi Channel Network) platform services have spread with the generalization of smartphones. The MCN platform is based on streaming, and various features are added to improve the service. Among these, augmented reality sticker services using face recognition are widely used. In this paper, we implement an MCN platform that overlays an augmented reality sticker on the face through facial emotion recognition in order to further increase user interest. Seven facial emotions are analyzed using deep learning, and an emotion sticker is applied to the face based on the result. To implement the proposed platform, emotion stickers are applied on the client side, and various servers capable of streaming are designed.
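
A per-frame sketch of the sticker pipeline, using a standard OpenCV face detector; the emotion CNN, its 48x48 input size, and the sticker images are placeholders for the platform's own components:

```python
# Detect the face, classify one of seven emotions, overlay the matching sticker.
import cv2
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]
face_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def apply_emotion_sticker(frame, emotion_cnn, stickers):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_det.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0  # common FER input size
        emotion = EMOTIONS[int(np.argmax(emotion_cnn.predict(crop[None, :, :, None])))]
        sticker = cv2.resize(stickers[emotion], (w, h))              # hypothetical RGBA sticker
        alpha = sticker[:, :, 3:] / 255.0                            # simple alpha blend
        frame[y:y + h, x:x + w] = (alpha * sticker[:, :, :3]
                                   + (1 - alpha) * frame[y:y + h, x:x + w]).astype(np.uint8)
    return frame
```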

Multi Emotional Agent based Story Generation (다중 감정 에이전트를 이용한 자동 이야기 생성 시스템의 설계)

  • Kim, Won-Il;Kim, Dong-Hyun;Hong, You-Sik;Kim, Sung-Sik;Lee, Chang-Min
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.134-139 / 2008
  • In this paper, we propose a story generation system that uses multiple emotional agents. Each proposed emotional agent is equipped with its own emotion model, so it can serve as an individually personalized agent that generates a unique storyline. Such emotional agents can readily be employed as avatars or NPCs in computer games. In the proposed system, the emotional agents act as characters whose personalities and preferences differ from one another. Storylines generated by the proposed system are realistic because the characters carry emotions as humans do.
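
A toy sketch of emotion-driven story generation in the spirit of the abstract; the emotion set, action table, and decay rule are illustrative, not the paper's model:

```python
# Each agent carries its own emotion state that biases which action it takes next.
import random
from dataclasses import dataclass, field

ACTIONS = {"joy": "celebrates", "anger": "argues", "fear": "hides", "sadness": "mourns"}

@dataclass
class EmotionalAgent:
    name: str
    emotions: dict = field(default_factory=lambda: {e: random.random() for e in ACTIONS})

    def act(self) -> str:
        mood = max(self.emotions, key=self.emotions.get)  # dominant emotion picks the act
        self.emotions[mood] *= 0.8                        # acting dampens that emotion
        return f"{self.name} {ACTIONS[mood]}."

agents = [EmotionalAgent("Mina"), EmotionalAgent("Jun")]
story = " ".join(agent.act() for agent in agents for _ in range(2))
print(story)
```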

Transformer-based transfer learning and multi-task learning for improving the performance of speech emotion recognition (음성감정인식 성능 향상을 위한 트랜스포머 기반 전이학습 및 다중작업학습)

  • Park, Sunchan;Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.515-522 / 2021
  • It is hard to prepare sufficient training data for speech emotion recognition because of the difficulty of emotion labeling. In this paper, we apply transfer learning from large-scale speech recognition training data to a transformer-based model to improve the performance of speech emotion recognition. In addition, we propose a method that utilizes context information without decoding, via multi-task learning with speech recognition. In speech emotion recognition experiments on the IEMOCAP dataset, our model achieves a weighted accuracy of 70.6% and an unweighted accuracy of 71.6%, which shows that the proposed method is effective in improving speech emotion recognition performance.
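
A sketch of the multi-task idea: a shared transformer encoder feeds both an ASR head (the auxiliary speech recognition task) and an emotion head; the layer sizes, the CTC pairing, and the loss weighting are assumptions, not the paper's exact configuration:

```python
# Shared encoder, two task heads, trained with a weighted joint loss.
import torch
import torch.nn as nn

class MultiTaskSER(nn.Module):
    def __init__(self, n_feats=80, d_model=256, vocab=100, n_emotions=4):
        super().__init__()
        self.proj = nn.Linear(n_feats, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)  # pretrained on ASR, then transferred
        self.asr_head = nn.Linear(d_model, vocab)       # auxiliary ASR targets (e.g. CTC)
        self.ser_head = nn.Linear(d_model, n_emotions)  # main emotion task

    def forward(self, x):                    # x: (batch, frames, n_feats)
        h = self.encoder(self.proj(x))
        # Frame-level ASR logits; emotion logits from mean-pooled encoder states.
        return self.asr_head(h), self.ser_head(h.mean(dim=1))

# Joint training would minimize: loss = ce(emotion) + alpha * ctc(asr), alpha assumed.
```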

Enhancing Multimodal Emotion Recognition in Speech and Text with Integrated CNN, LSTM, and BERT Models (통합 CNN, LSTM, 및 BERT 모델 기반의 음성 및 텍스트 다중 모달 감정 인식 연구)

  • Edward Dwijayanto Cahyadi;Hans Nathaniel Hadi Soesilo;Mi-Hwa Song
    • The Journal of the Convergence on Culture Technology / v.10 no.1 / pp.617-623 / 2024
  • Identifying emotions through speech poses a significant challenge due to the complex relationship between language and emotion. Our paper takes on this challenge by employing feature engineering to identify emotions in speech through a multimodal classification task involving both speech and text data. We evaluated two classifiers, Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), each integrated with a BERT-based pre-trained model. Our assessment covers various performance metrics (accuracy, F-score, precision, and recall) across different experimental setups. The findings highlight the proficiency of both models in accurately discerning emotions from both text and speech data.
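
A sketch of the BERT-integrated fusion: BERT encodes the transcript, a small CNN encodes frame-level speech features, and the concatenated vectors feed one classifier; the dimensions and fusion-by-concatenation choice are assumptions:

```python
# Speech + text multimodal emotion classifier with a BERT text branch.
import torch
import torch.nn as nn
from transformers import AutoModel

class SpeechTextClassifier(nn.Module):
    def __init__(self, n_emotions=7):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.audio_cnn = nn.Sequential(
            nn.Conv1d(40, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # -> (batch, 64, 1)
        )
        self.classifier = nn.Linear(self.bert.config.hidden_size + 64, n_emotions)

    def forward(self, input_ids, attention_mask, mfcc):  # mfcc: (batch, 40, frames)
        text_vec = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        audio_vec = self.audio_cnn(mfcc).squeeze(-1)
        return self.classifier(torch.cat([text_vec, audio_vec], dim=-1))
```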

Emotional Behavior Decision Method and Its Experiments of Generality for Applying to Various Social Robot Systems (목적과 사양이 다른 다양한 인간 친화 로봇에 적용하기 위한 감성 행동 생성 방법 및 범용성 실험)

  • Ahn, Ho-Seok;Choi, Jin-Young;Lee, Dong-Wook
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.4 / pp.54-62 / 2011
  • The appropriate emotional reaction differs with the purpose of a robot system, and the method of producing it differs with the robot's specification. Therefore, an emotional behavior decision model that can be applied to social robots regardless of their specifications and purposes is necessary. This paper introduces a universal emotional behavior decision model designed to apply to various social robots with different specifications and purposes. Multiple emotions, represented as a set of probability values, one per emotion, are calculated independently and expressed according to the purpose of the robot system. A behavior for the emotional reaction is then decided from the calculated multiple emotions with regard to the robot's specification. The decided behavior is a combination of unit behaviors, the smallest expressible behaviors of each expression part. Various undefined behaviors can be expressed by generating unit-behavior combinations according to the multiple emotions. The universal emotional behavior decision model is applied to three kinds of social robot systems with different specifications and purposes.
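
A toy sketch of the unit-behavior combination: emotion probabilities are computed independently, and each expression part contributes its own smallest expressible behavior; the parts and behavior tables are invented for illustration:

```python
# Independently calculated emotion probabilities (one value per emotion).
EMOTION_PROBS = {"joy": 0.6, "anger": 0.1, "sadness": 0.2, "surprise": 0.1}

# Smallest expressible behaviors per expression part (a robot-specific table).
UNIT_BEHAVIORS = {
    "head":  {"joy": "nod", "anger": "shake", "sadness": "droop", "surprise": "tilt"},
    "arms":  {"joy": "wave", "anger": "cross", "sadness": "hang", "surprise": "raise"},
    "voice": {"joy": "chirp", "anger": "growl", "sadness": "sigh", "surprise": "gasp"},
}

def decide_behavior(probs: dict, parts: dict) -> dict:
    """Combine one unit behavior per part, chosen by the most probable emotion."""
    dominant = max(probs, key=probs.get)
    return {part: table[dominant] for part, table in parts.items()}

print(decide_behavior(EMOTION_PROBS, UNIT_BEHAVIORS))  # e.g. head nods, arms wave...
```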

A Study on the Dataset of the Korean Multi-class Emotion Analysis in Radio Listeners' Messages (라디오 청취자 문자 사연을 활용한 한국어 다중 감정 분석용 데이터셋연구)

  • Jaeah, Lee;Gooman, Park
    • Journal of Broadcast Engineering / v.27 no.6 / pp.940-943 / 2022
  • This study analyzes a Korean dataset by performing sentence-level emotion analysis on radio listeners' text messages that were collected personally. Research on emotion analysis of Korean sentences is ongoing in various directions, but the linguistic characteristics of Korean make high accuracy difficult to achieve. In addition, much research has addressed binary sentiment analysis, which allows only positive/negative classification, while multi-class emotion analysis, which classifies three or more emotions, requires further research. It is therefore necessary to consider and analyze Korean datasets in order to increase the accuracy of multi-class emotion analysis for Korean. In this paper, through surveys and experiments conducted in the course of the emotion analysis, we examine why Korean emotion analysis is difficult, and we propose a method for creating a dataset that can improve accuracy and serve as a basis for emotion analysis of Korean sentences.

Emotion Recognition Method from Speech Signal Using the Wavelet Transform (웨이블렛 변환을 이용한 음성에서의 감정 추출 및 인식 기법)

  • Go, Hyoun-Joo;Lee, Dae-Jong;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.2 / pp.150-155 / 2004
  • In this paper, an emotion recognition method using speech signals is presented. Six basic human emotions, happiness, sadness, anger, surprise, fear, and dislike, are investigated. The proposed recognizer has a codebook for each emotional state, constructed using the wavelet transform. The emotional state is first scored at each filter bank, and the final recognition is then obtained from a multi-decision scheme. The database consists of 360 emotional utterances from twenty speakers, each speaking a sentence three times for each of the six emotional states. The proposed method achieved a recognition rate about 5% higher than previous works.
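
A rough sketch of the per-band codebook idea using a plain multilevel DWT via PyWavelets; the wavelet choice, decomposition level, energy features, and majority-vote decision are assumptions, since the abstract only names the wavelet transform and a multi-decision scheme:

```python
# Per-emotion codebooks over wavelet subband energies; each band votes,
# and the majority vote gives the final recognized emotion.
import numpy as np
import pywt

def band_energies(signal, wavelet="db4", level=4):
    """Log energy of each wavelet subband (approximation + details)."""
    return [np.log(np.sum(c ** 2) + 1e-12) for c in pywt.wavedec(signal, wavelet, level=level)]

def classify(signal, codebooks):
    """codebooks: emotion -> list of per-band codeword arrays (trained offline).
    Each band votes for its nearest emotion; majority vote decides."""
    feats = band_energies(signal)
    votes = []
    for b, f in enumerate(feats):
        votes.append(min(codebooks, key=lambda e: np.min(np.abs(codebooks[e][b] - f))))
    return max(set(votes), key=votes.count)
```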