• Title/Summary/Keyword: Dialogue based on Emotions

Search results: 8

Emotion-based Real-time Facial Expression Matching Dialogue System for Virtual Human (감정에 기반한 가상인간의 대화 및 표정 실시간 생성 시스템 구현)

  • Kim, Kirak;Yeon, Heeyeon;Eun, Taeyoung;Jung, Moonryul
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.23-29 / 2022
  • Virtual humans are implemented with dedicated modeling tools, such as the Unity 3D engine, in virtual spaces (virtual reality, mixed reality, the metaverse, etc.). Various human modeling tools have been introduced to give virtual humans an appearance, voice, expressions, and behavior similar to those of real people, and virtual humans implemented with these tools can communicate with users to some extent. However, most virtual humans so far have remained unimodal, using only text or speech. As AI technologies advance, the outdated machine-centered dialogue system is changing into a human-centered, natural multi-modal system. Using several pre-trained networks, we implemented an emotion-based multi-modal dialogue system that generates human-like utterances and displays appropriate facial expressions in real time.
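The utterance-to-expression step described in this abstract can be sketched roughly as follows. The emotion labels, keyword rules, and blendshape names below are invented placeholders for illustration; the paper itself uses pre-trained networks, not keyword matching.

```python
# Minimal sketch: map a classified utterance emotion to facial
# blendshape weights (0..1) that a Unity-style virtual human could use.
EMOTION_TO_BLENDSHAPES = {
    "joy":     {"mouthSmile": 0.9, "browUp": 0.4},
    "sadness": {"mouthFrown": 0.8, "browDown": 0.5},
    "neutral": {},
}

def classify_emotion(utterance: str) -> str:
    """Stand-in for a pre-trained emotion classifier (toy keyword rules)."""
    lowered = utterance.lower()
    if any(w in lowered for w in ("great", "happy", "love")):
        return "joy"
    if any(w in lowered for w in ("sad", "sorry", "miss")):
        return "sadness"
    return "neutral"

def expression_for(utterance: str) -> dict:
    """Return blendshape weights matching the utterance's emotion."""
    return EMOTION_TO_BLENDSHAPES[classify_emotion(utterance)]
```

In a real system the classifier would be replaced by the pre-trained network, and the weight table would drive the avatar's facial rig each frame.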

Real-time Background Music System for Immersive Dialogue in Metaverse based on Dialogue Emotion (메타버스 대화의 몰입감 증진을 위한 대화 감정 기반 실시간 배경음악 시스템 구현)

  • Kirak Kim;Sangah Lee;Nahyeon Kim;Moonryul Jung
    • Journal of the Korea Computer Graphics Society / v.29 no.4 / pp.1-6 / 2023
  • Background music is often used to enhance immersion in metaverse environments. However, the background music is usually pre-matched and repeated, which can distract users because it does not align well with rapidly changing, user-interactive content. We therefore implemented a system that provides a more immersive metaverse conversation experience by 1) developing a regression neural network that extracts emotions from an utterance using KEMDy20, a Korean multimodal emotion dataset; 2) selecting music corresponding to the extracted emotions using the DEAM dataset, in which music is tagged with arousal-valence levels; and 3) combining these components with a virtual space where users can hold real-time conversations with avatars.
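The emotion-to-music matching in steps 1)-2) can be sketched as a nearest-neighbour lookup in arousal-valence space. The track names and tag values below are invented for illustration; in the actual system the tags would come from the DEAM dataset and the (arousal, valence) query from the regression network.

```python
import math

# Hypothetical arousal-valence tags for a few tracks (made-up values).
MUSIC_LIBRARY = {
    "calm_piano":  (-0.3,  0.4),   # (arousal, valence)
    "upbeat_pop":  ( 0.7,  0.8),
    "tense_drone": ( 0.6, -0.6),
}

def select_music(arousal: float, valence: float) -> str:
    """Pick the track whose arousal-valence tag lies nearest (by
    Euclidean distance) to the emotion predicted for the utterance."""
    return min(
        MUSIC_LIBRARY,
        key=lambda track: math.dist(MUSIC_LIBRARY[track], (arousal, valence)),
    )
```

A real-time system would re-run this lookup as each new utterance's emotion is predicted, cross-fading between tracks.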

Component Analysis for Constructing an Emotion Ontology (감정 온톨로지의 구축을 위한 구성요소 분석)

  • Yoon, Ae-Sun;Kwon, Hyuk-Chul
    • Korean Journal of Cognitive Science / v.21 no.1 / pp.157-175 / 2010
  • Understanding a dialogue participant's emotion is as important as decoding the explicit message in human communication. It is well known that non-verbal elements are more suitable than verbal elements for conveying a speaker's emotions. Written texts, however, contain a variety of linguistic units that express emotions. This study aims to analyze the components needed to construct an emotion ontology, which would enable numerous applications in Human Language Technology. Most previous work in text-based emotion processing focused on the classification of emotions, the construction of dictionaries describing emotion, and the retrieval of those lexica from texts through keyword spotting and/or syntactic parsing techniques, but the emotions retrieved or computed by that process did not show good accuracy. Thus, this study proposes a more sophisticated component analysis and introduces linguistic factors. (1) Five linguistic types of emotion expression are differentiated by target (verbal/non-verbal) and method (expressive/descriptive/iconic). The correlations among them, as well as their correlation with the non-verbal expressive type, are also determined; this characteristic is expected to guarantee greater adaptability of our ontology in multi-modal environments. (2) As emotion-related components, this study proposes 24 emotion types, a 5-scale intensity (-2 to +2), and a 3-scale polarity (positive/negative/neutral), which can describe a variety of emotions in greater detail and in a standardized way. (3) We introduce components related to verbal expression, such as 'experiencer', 'description target', 'description method', and 'linguistic features', which can appropriately classify and tag verbal expressions of emotions. (4) By adopting the linguistic tag sets proposed by ISO and TEI and providing a mapping table between our classification of emotions and Plutchik's, our ontology can easily be employed for multilingual processing.
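The emotion-related components in (2) can be sketched as a small annotation record. The field names below are illustrative assumptions, not the paper's actual ontology schema; only the value ranges (24 types, intensity -2..+2, three polarities) come from the abstract.

```python
from dataclasses import dataclass

POLARITIES = {"positive", "negative", "neutral"}

@dataclass
class EmotionAnnotation:
    """One emotion annotation in the spirit of the proposed ontology:
    an emotion type, a 5-scale intensity, and a 3-scale polarity."""
    emotion_type: str   # one of the study's 24 emotion types
    intensity: int      # 5-scale: -2 .. +2
    polarity: str       # positive / negative / neutral

    def __post_init__(self):
        # Enforce the standardized value ranges from the abstract.
        if not -2 <= self.intensity <= 2:
            raise ValueError("intensity must be in -2..+2")
        if self.polarity not in POLARITIES:
            raise ValueError("polarity must be positive/negative/neutral")
```

Validating at construction time keeps every annotation in the corpus within the standardized scales.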


Speakers' Intention Analysis Based on Partial Learning of a Shared Layer in a Convolutional Neural Network (Convolutional Neural Network에서 공유 계층의 부분 학습에 기반 한 화자 의도 분석)

  • Kim, Minkyoung;Kim, Harksoo
    • Journal of KIISE / v.44 no.12 / pp.1252-1257 / 2017
  • In dialogues, speakers' intentions can be represented by sets of an emotion, a speech act, and a predicator, so dialogue systems should capture and process these implied characteristics of utterances. Many previous studies have treated their determination as independent classification problems, but others have shown that they are associated with each other. In this paper, we propose an integrated model that simultaneously determines emotions, speech acts, and predicators using a convolutional neural network. The proposed model consists of particular abstraction layers, in which mutually independent information about each characteristic is abstracted, and a shared abstraction layer, in which combinations of that independent information are abstracted. During training, the errors of emotions, speech acts, and predicators are partially back-propagated through the layers. In the experiments, the proposed integrated model performed better (by 2%p in emotion determination, 11%p in speech-act determination, and 3%p in predicator determination) than independent determination models.
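The shared-layer architecture can be sketched structurally as follows. The toy linear layers and sizes are illustrative assumptions; the paper's convolutional layers and partial back-propagation training are omitted, and only the forward topology (one shared layer feeding three task heads) is shown.

```python
def linear(weights, x):
    """A toy fully connected layer: each row of `weights` dotted with x."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in weights]

def forward(x, shared_w, emotion_w, act_w, pred_w):
    """The shared layer's output feeds all three heads, so each task's
    error can be partially back-propagated into the shared weights."""
    h = linear(shared_w, x)                  # shared abstraction layer
    return {
        "emotion":    linear(emotion_w, h),  # emotion scores
        "speech_act": linear(act_w, h),      # speech-act scores
        "predicator": linear(pred_w, h),     # predicator scores
    }
```

Because all three heads read the same hidden vector `h`, a gradient step for any one task also updates the shared weights, which is the coupling the integrated model exploits.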

A Proposal of Smart Speaker Dialogue System Guidelines for the Middle-aged (중년 고령자를 위한 스마트 스피커 대화 체계 가이드라인 제안)

  • Yoon, So-Yeon;Ha, Kwang-Soo
    • The Journal of the Korea Contents Association / v.19 no.9 / pp.81-91 / 2019
  • Korea has recently been facing a variety of problems caused by rapid population aging and the weakening of the family's role amid rapid industrialization, such as difficulties in supporting the elderly and a decline in the quality of that support. Among these, emotional support for the elderly is a major issue in terms of quality of life. Ideally, this issue would be resolved through extensive physical and human support, but various constraints demand more efficient use of resources, and among such approaches, support through the convergence of digital technologies deserves attention. In this study, we focused on the caregiver shortage caused by population aging and on the emotional aspects of declining care quality, and analyzed the types of digital technology applied to emotional support. Among them, we proposed emotional support through smart speakers, confirming their potential for supporting the elderly. In addition, we conducted usability evaluations of smart speakers with older adults, together with in-depth interviews, and proposed guidelines for smart speaker dialogue systems to support the emotions of middle-aged and older adults. The results of this study are expected to serve as basic data for designing dialogue systems when developing smart speakers that support the emotions of the elderly.

Toward an integrated model of emotion recognition methods based on reviews of previous work (정서 재인 방법 고찰을 통한 통합적 모델 모색에 관한 연구)

  • Park, Mi-Sook;Park, Ji-Eun;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.14 no.1 / pp.101-116 / 2011
  • Current research on emotion detection classifies emotions using information from facial, vocal, and bodily expressions, or from physiological responses. This study reviewed three representative emotion recognition methods grounded in psychological theories of emotion. First, we reviewed the literature on emotion recognition methods based on facial expressions; these studies are supported by Darwin's theory. Second, we reviewed emotion recognition methods based on physiological changes; this research relies on James' theory. Last, we reviewed emotion recognition based on multimodality (i.e., combinations of signals from the face, dialogue, posture, or the peripheral nervous system); these studies draw on both Darwin's and James' theories. In each part, research findings were examined along with the theoretical background on which each method relies. This review proposes the need for an integrated model of emotion recognition methods to advance how emotion is recognized. The integrated model suggests that emotion recognition methods should include other physiological signals, such as brain responses or facial temperature, should be based on a multidimensional model, and should take into consideration cognitive appraisal factors during emotional experience.


Audiobook Text Shaping for Synesthesia Voice Training - Focusing on Paralanguages - (오디오북 텍스트 형상화를 위한 공감각적 음성 훈련 연구 - 유사언어를 활용하여 -)

  • Cho, Ye-Shin;Choi, Jae-Oh
    • Journal of Korea Entertainment Industry Association / v.13 no.8 / pp.167-180 / 2019
  • The purpose of this study is to examine the results of synesthetic voice training using paralanguage for shaping audiobook text. The audiobook text used for training was taken from Tolstoy's work, employing the paralinguistic elements of tone, timbre, pause, speed, intonation, accent, and emotional expression. Ten visually impaired trainees at the H library were selected as participants for this qualitative study. Based on the research questions raised in this study, the results are as follows. First, synesthetic training, in which two or more of the five senses work simultaneously during voice training for audiobook text shaping, produced results by visualizing the original purpose, meaning, and background of the text. Second, the use of paralanguage was helpful throughout the process of expressing the meaning of sentences and dialogue for audiobook text shaping. Although there were some differences among the participants, they shared a common view that tone, pause, and intonation were important. Third, the visually impaired participants had heightened senses and memory, which led to rapid memorization of lines and acceptance of delivery during training. In addition, the teacher's friendly attitude was a very important mediating factor in the training process.

Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.81-96 / 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data driven by Web content development. Content-based video analysis using visual features has been the main approach for video information retrieval and browsing. Content in video can be represented with content-based analysis techniques, which extract various features from audio-visual data such as frames, shots, colors, texture, or shape, and similarity between videos can be measured through such analysis. However, a movie, one of the typical types of video data, is organized by story as well as by audio-visual data. When content-based video analysis using only low-level audio-visual data is applied to movie information retrieval, a semantic gap arises between the significant information recognized by people and the information resulting from content-based analysis. The reason for this semantic gap is that a movie's story line is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to a movie's story line cannot be executed by content-based analysis techniques alone; a formal model is needed that can determine relationships among movie contents or track changes in meaning in order to accurately retrieve story information. Recently, story-based video analysis techniques using the social network concept have emerged for story information retrieval. These approaches represent a story by using the relationships between characters in a movie, but they have problems. First, they do not express dynamic changes in the relationships between characters as the story develops. Second, they miss deeper information, such as the emotions indicating the identities and psychological states of the characters; emotion is essential to understanding a character's motivation, conflict, and resolution. Third, they do not take account of the events and background that contribute to the story. This paper therefore reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks, and suggests the necessary elements, such as characters, background, and events, based on narrative structures introduced in the literature. First, we extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical attributes of WordNet, an extensive English thesaurus that offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms), and present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of each character's dialogue in the script and track the character's inner nature using social network concepts such as in-degree (incoming links) and out-degree (outgoing links); we propose a method that traces indices such as the degree, in-degree, and out-degree of the movie's character network as the movie progresses. Finally, the spatial background where characters meet and where events take place is an important element of the story. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie; important places where the main characters first meet or where they stay for long periods can be extracted through this scene map. In view of the aforementioned three elements (character, event, background), we extract a variety of story-related information and evaluate the performance of the proposed method. We can track the extracted story information over time and detect changes in a character's emotion or inner nature, spatial movement, and the conflicts and resolutions in the story.
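The in-degree/out-degree tracking of the character network can be sketched as follows. The representation of the script as a list of (speaker, listener) exchanges is an assumption for illustration; the character names are from Pretty Woman, which the abstract uses as its example, but the exchanges themselves are invented.

```python
from collections import Counter

def degree_indices(exchanges):
    """Compute per-character out-degree (lines spoken to others) and
    in-degree (lines received) from a list of (speaker, listener)
    dialogue exchanges, as in a directed character network."""
    out_deg, in_deg = Counter(), Counter()
    for speaker, listener in exchanges:
        out_deg[speaker] += 1
        in_deg[listener] += 1
    return out_deg, in_deg
```

Recomputing these counters over a sliding window of scenes would give the time-varying indices the paper uses to trace a character's inner nature through the movie's progression.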