• Title/Summary/Keyword: Emotional Engine

Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services
    • /
    • v.13 no.5
    • /
    • pp.9-19
    • /
    • 2012
  • Virtual humans, used as HCI agents in digital content, express various emotions across modalities such as facial expression and body posture. However, few studies have considered how combinations of such nonverbal modalities affect emotion perception. To implement an emotional virtual human, a computational engine model must consider how a combination of nonverbal modalities, such as facial expression and body posture, will be perceived by users. This paper analyzes the impact of nonverbal multimodal cues on the design of emotion-expressing virtual humans. First, the relative impacts of the different modalities are analyzed by exploring users' emotion recognition for each modality of the virtual human. An experiment then evaluates the contribution of congruent facial and postural expressions to the recognition of basic emotion categories, as well as of the valence and activation dimensions. Measurements also cover the impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life. Experimental results show that congruence between the facial and postural expressions of a virtual human facilitates the perception of emotion categories, that categorical recognition is driven mainly by the facial modality, and that the postural modality is preferred when judging the level of the activation dimension. These results will be used in the implementation of an animation engine and behavior synchronization for emotion-expressing virtual humans.
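
The reported finding, facial expression dominating the emotion category while posture dominates the activation judgement, can be sketched as a simple late-fusion rule. This is a hypothetical illustration, not the paper's implementation; the weighting scheme and scores are assumptions:

```python
# Hypothetical late fusion of two nonverbal modalities: the facial channel
# is weighted more heavily for the emotion *category*, while the postural
# channel supplies the *activation* level, mirroring the paper's finding.

def fuse_modalities(facial, postural, w_face=0.7):
    """facial/postural: dicts mapping emotion label -> score in [0, 1]."""
    labels = set(facial) | set(postural)
    combined = {e: w_face * facial.get(e, 0.0) + (1 - w_face) * postural.get(e, 0.0)
                for e in labels}
    category = max(combined, key=combined.get)   # face-dominated choice
    activation = postural.get(category, 0.0)     # posture-dominated level
    return category, activation

face = {"joy": 0.8, "anger": 0.1}
pose = {"joy": 0.4, "anger": 0.5}
# joy: 0.7*0.8 + 0.3*0.4 = 0.68 beats anger: 0.7*0.1 + 0.3*0.5 = 0.22
print(fuse_modalities(face, pose))
```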

The Implementation of Visual Effects on Physical Phenomena of Nature Using Particle System (파티클 시스템을 이용한 자연의 물리적 현상의 비주얼 효과 구현)

  • Kim, Kyoung-Nam;Lee, Myoun-Jae
    • Journal of Digital Convergence
    • /
    • v.10 no.4
    • /
    • pp.347-352
    • /
    • 2012
  • Uncertain physical phenomena of nature are a frequently researched area in emotional engineering technology and in the visual expression of art. This paper demonstrates that physical phenomena of nature (percolation, dispersion, and flow) can be implemented using the Unity 3D engine's particle system; these phenomena were analyzed in a previous study [1] on modern paintings that emphasized physical properties. The paper proposes an easy implementation method for uncertain natural physical phenomena for artists who find it difficult to acquire computer graphics programming knowledge, and provides ideas for engineers conducting research on emotion-based technology.

Mobile Internet News Consumption: An Analysis of News Preferences and News Values

  • Pae, Jung Kun;Seol, Jinah
    • Journal of Internet Computing and Services
    • /
    • v.19 no.2
    • /
    • pp.49-56
    • /
    • 2018
  • Internet news consumption is growing rapidly in Korea, and the majority of it takes place through Naver, Korea's primary search engine; Naver is also the go-to search engine on smartphones. This study analyzed the 824 most popular news items accessed via mobile devices, selected from Naver's 'Daily Top 10 Stories' between March 2016 and December 2016. The results indicate that entertainment news was the most viewed, while political and social-issue news was the most liked and commented upon by mobile users. With regard to news value, 'prominence' and 'impact' were the two most important factors influencing a user's news selection in a mobile environment: the degree of a news item's 'prominence' was the most important factor determining the number of views, while 'impact' was critical to determining the most commented-upon and most liked news. The results also indicate that mobile news consumers prefer dramatic storylines and events that incite public anger or grief, threaten citizens' safety, or evoke emotional sympathy, rather than 'hard news' on subjects such as politics and economics.

Emotion Based Gesture Animation Generation Mobile System (감정 기반 모바일 손제스쳐 애니메이션 제작 시스템)

  • Lee, Jung-Suk;Byun, Hae-Won
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.129-134
    • /
    • 2009
  • Recently, the percentage of people using SMS services has been increasing. However, it is difficult to express one's complicated emotions with the text and emoticons of existing SMS services. This paper focuses on that point and makes practical use of character animation to express emotion and nuance correctly and engagingly. It proposes an emotion-based gesture animation generation system that uses a character's facial expressions and gestures to deliver emotion more vividly and clearly than speech alone. Michel [1] investigated interview videos of a person whose gesturing style was to be animated and proposed a gesture generation graph for stylized gesture animation. Expanding on that research, we analyze and abstract the emotional gestures of Disney animation characters and model these emotional gestures in 3D. To express a person's emotion, we propose an emotion gesture generation graph that reflects an emotion flow graph, which expresses the flow of emotion probabilistically. We surveyed user reactions to assess the appropriateness of the proposed system.
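
An emotion flow graph that drives gesture selection by probability, as described above, can be sketched as a Markov-style transition table. The emotion states, transition probabilities, and gesture names below are invented for illustration:

```python
import random

# Hypothetical emotion flow graph: each emotion maps to successor emotions
# with transition probabilities; a gesture is then drawn for the new state.
FLOW = {
    "neutral": [("joy", 0.5), ("sad", 0.3), ("neutral", 0.2)],
    "joy":     [("joy", 0.6), ("neutral", 0.4)],
    "sad":     [("neutral", 0.7), ("sad", 0.3)],
}
GESTURES = {"neutral": ["idle"], "joy": ["wave", "clap"], "sad": ["slump"]}

def next_gesture(state, rng=random):
    """Sample the next emotion from the flow graph, then pick a gesture."""
    emotions, probs = zip(*FLOW[state])
    new_state = rng.choices(emotions, weights=probs)[0]
    return new_state, rng.choice(GESTURES[new_state])

random.seed(0)
state = "neutral"
for _ in range(3):
    state, gesture = next_gesture(state)
    print(state, gesture)
```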

Emotional System Applied to Android Robot for Human-friendly Interaction (인간 친화적 상호작용을 위한 안드로이드 로봇의 감성 시스템)

  • Lee, Tae-Geun;Lee, Dong-Uk;So, Byeong-Rok;Lee, Ho-Gil
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.04a
    • /
    • pp.95-98
    • /
    • 2007
  • This paper presents the emotional system applied to the android robot (EveR series) platform developed at the Korea Institute of Industrial Technology. The EveR platform can perform facial expressions, gestures, and speech synthesis, and the emotional system is applied to it to facilitate human-friendly interaction. The emotional system consists of a Motivation Module that gives the robot motivation, an Emotion Module that holds various emotions, a Personality Module that influences the emotions, gestures, and voice, and a Memory Module that determines the weights of incoming stimuli and situations. The emotional system takes speech, text, vision, touch, and context information as input, and outputs the selected emotion with its weight, behaviors, and gestures, inducing naturalness in conversation with humans.
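
The four-module structure described above can be sketched as a small pipeline in which memory weights the stimulus, personality biases the emotion scores, and the strongest emotion selects a gesture. The module behavior, weights, and gesture table here are invented for illustration, not the EveR implementation:

```python
# Hypothetical sketch of the four-module emotion system: the memory module
# weights the incoming stimulus, the personality module biases the emotion
# scores, and the strongest resulting emotion selects a matching gesture.

class EmotionSystem:
    def __init__(self, personality_bias, memory_weights):
        self.bias = personality_bias    # e.g. {"joy": 1.2, "anger": 0.8}
        self.memory = memory_weights    # e.g. {"praise": 1.5}

    def react(self, stimulus, raw_scores):
        w = self.memory.get(stimulus, 1.0)   # memory: weight the stimulus
        scored = {e: w * s * self.bias.get(e, 1.0)   # personality: bias
                  for e, s in raw_scores.items()}
        emotion = max(scored, key=scored.get)        # emotion: pick strongest
        gesture = {"joy": "smile", "anger": "frown"}.get(emotion, "neutral")
        return emotion, scored[emotion], gesture

robot = EmotionSystem({"joy": 1.2, "anger": 0.8}, {"praise": 1.5})
print(robot.react("praise", {"joy": 0.6, "anger": 0.5}))
```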

Speech Emotion Recognition with SVM, KNN and DSVM

  • Hadhami Aouani ;Yassine Ben Ayed
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.8
    • /
    • pp.40-48
    • /
    • 2023
  • Speech emotion recognition has become an active research theme in speech processing and in applications based on human-machine interaction. Our system is a two-stage approach comprising feature extraction and a classification engine. First, two feature sets are investigated: the first extracts only 13 Mel-frequency cepstral coefficients (MFCC) from emotional speech samples, while the second fuses the MFCC features with three further features: zero crossing rate (ZCR), Teager energy operator (TEO), and harmonics-to-noise ratio (HNR). Second, we use two classification techniques, support vector machines (SVM) and k-nearest neighbors (k-NN), and compare their performance. We also investigate the importance of recent advances in machine learning, including deep kernel learning. A large set of experiments is conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset for seven emotions. Our experimental results show good accuracy compared with previous studies.
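
The k-NN classification stage can be sketched with a minimal nearest-neighbor vote over feature vectors. The 4-dimensional toy vectors below merely stand in for the fused MFCC/ZCR/TEO/HNR features, and the labels and data are invented:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); query: a feature vector.
    Returns the majority label among the k nearest training samples."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy stand-ins for fused acoustic features (e.g. MFCC + ZCR + TEO + HNR).
train = [
    ((0.1, 0.2, 0.1, 0.0), "neutral"),
    ((0.2, 0.1, 0.0, 0.1), "neutral"),
    ((0.9, 0.8, 0.7, 0.9), "anger"),
    ((0.8, 0.9, 0.9, 0.8), "anger"),
    ((0.5, 0.6, 0.4, 0.5), "happiness"),
]
print(knn_predict(train, (0.85, 0.85, 0.8, 0.85)))  # → anger
```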

The Implementation and Analysis of Facial Expression Customization for a Social Robot (소셜 로봇의 표정 커스터마이징 구현 및 분석)

  • Jiyeon Lee;Haeun Park;Temirlan Dzhoroev;Byounghern Kim;Hui Sung Lee
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.2
    • /
    • pp.203-215
    • /
    • 2023
  • Social robots, which are mainly used by individuals, emphasize the importance of human-robot relationships (HRR) more than other types of robots do. Emotional expression in robots is one of the key factors that imbue HRR with value, and emotions are mainly expressed through the face. However, because of cultural and preference differences, the desired robot facial expressions differ subtly depending on the user. It was expected that a robot facial expression customization tool might mitigate such difficulties and consequently improve HRR. To prove this, we created a robot facial expression customization tool and a prototype robot, and implemented a suitable emotion engine for generating robot facial expressions in a dynamic human-robot interaction setting. We conducted experiments, and the users agreed that the availability of a customized version of the robot has a more positive effect on HRR than a predefined version. Moreover, we suggest recommendations for future improvements of the customization process for robot facial expressions.

A Study on Infra-Technology of RCP Interaction System

  • Kim, Seung-Woo;Choe, Jae-Il
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2004.08a
    • /
    • pp.1121-1125
    • /
    • 2004
  • RT (Robot Technology) has been developed as a next-generation future technology. According to a 2002 technical report from the Mitsubishi R&D center, the fusion of IT (Information Technology) and RT will grow to five times the size of the current IT market by 2015. Moreover, a recent IEEE report predicts that most people will have a robot within the next ten years. The RCP (Robotic Cellular Phone), a CP (Cellular Phone) offering personal robot services, will be an intermediate hi-tech personal machine between the one-CP-per-person and one-robot-per-person generations. The RCP infrastructure consists of RCP-Mobility, RCP-Interaction, and RCP-Integration technologies. For RCP-Mobility, human-friendly motion automation and personal services with walking and arm abilities are developed. RCP-Interaction is achieved by modeling an emotion-generating engine, and RCP-Integration, which recognizes environmental and self conditions, is developed. By joining intelligent algorithms and the CP communication network with these three base modules, an RCP system is constructed. This paper focuses in particular on the RCP interaction system. RCP-Interaction (Robotic Cellular Phone for Interaction) is developed as an emotional-model CP, as shown in Figure 1. It refers to sensitivity expression and the communication-link technology of the CP: interface technology between human and CP through various emotional models. The interactive emotion functions are designed through differing patterns of vibrator beat frequencies and a 'feeling' system created by smell-injection switching control. Just as music influences a person, one can feel a variety of emotions from the vibrator's beats by converting musical chord frequencies into vibrator beat frequencies. This paper presents the definition, basic theory, and experimental results of the RCP interaction system, and the experimental results confirm its good performance.
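
The idea of converting musical chord frequencies into vibrator beat frequencies can be sketched as a simple range mapping. The vibrator range, note range, and linear mapping below are assumptions for illustration, not the paper's actual conversion:

```python
# Hypothetical mapping from audible note frequencies (Hz) to a vibrator's
# much lower beat-frequency range, preserving the ordering of pitches.

VIB_MIN, VIB_MAX = 2.0, 20.0         # assumed usable vibrator beat range (Hz)
NOTE_MIN, NOTE_MAX = 130.8, 1046.5   # assumed musical input range (C3..C6)

def note_to_beat(freq_hz):
    """Linearly map a note frequency into the vibrator beat range."""
    freq_hz = max(NOTE_MIN, min(NOTE_MAX, freq_hz))    # clamp to input range
    t = (freq_hz - NOTE_MIN) / (NOTE_MAX - NOTE_MIN)   # position in [0, 1]
    return VIB_MIN + t * (VIB_MAX - VIB_MIN)

# A C-major chord (C4, E4, G4) mapped to three beat frequencies:
for f in (261.6, 329.6, 392.0):
    print(round(note_to_beat(f), 2))
```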

An Artificial Emotion Model for Expression of Game Character (감정요소가 적용된 게임 캐릭터의 표현을 위한 인공감정 모델)

  • Kim, Ki-Il;Yoon, Jin-Hong;Park, Pyoung-Sun;Kim, Mi-Jin
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02b
    • /
    • pp.411-416
    • /
    • 2008
  • The development of games has brought about game characters that are visually very realistic. At present, there is much enthusiasm for giving characters emotions through devices such as avatars and emoticons. However, in the freely changing environment of a game, such devices merely express the value derived from an initial input rather than creating emotional expressions that actively respond to their surroundings; as yet, game characters display no deep emotions. In light of this, the present article proposes the 'CROSS (Character Reaction On Specific Situation) Model AE Engine' for game characters, in order to develop characters that actively express action and emotion within the changing environment of games. This is accomplished by classifying the emotional components applicable to game characters based on the OCC model, one of the best-known cognitive psychological models. The gameplay situations of a commercial RPG are then analyzed and systematized with an ontology.
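
A character emotion that actively responds to its surroundings, of the kind the CROSS model targets, can be sketched as a running emotional state that decays over time and is nudged by game events. The event table, decay rate, and threshold are invented for illustration, not the CROSS model itself:

```python
# Hypothetical reactive emotion state: each game event nudges emotion
# intensities (an OCC-style appraisal table), older emotion decays each
# tick, and the character expresses the strongest emotion above a threshold.

EVENT_EFFECTS = {
    "item_found":   {"joy": 0.4},
    "ally_down":    {"distress": 0.5, "anger": 0.2},
    "goal_blocked": {"anger": 0.4},
}

class CharacterEmotion:
    def __init__(self, decay=0.8, threshold=0.3):
        self.state = {}
        self.decay, self.threshold = decay, threshold

    def tick(self, event=None):
        self.state = {e: v * self.decay for e, v in self.state.items()}
        for e, dv in EVENT_EFFECTS.get(event, {}).items():
            self.state[e] = self.state.get(e, 0.0) + dv
        if not self.state:
            return "neutral"
        e, v = max(self.state.items(), key=lambda kv: kv[1])
        return e if v >= self.threshold else "neutral"

c = CharacterEmotion()
print(c.tick("item_found"))  # joy spikes
print(c.tick())              # joy decays but persists above threshold
print(c.tick("ally_down"))   # distress takes over
```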

Improving Image Retrieval Results Using Mean Shift Clustering (Mean Shift Clustering을 이용한 영상 검색결과 개선)

  • Kwon, Kyung-Su;Shin, Yun-Hee;Kim, Young-Rae;Kim, Eun-Yi
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 2009.05a
    • /
    • pp.138-143
    • /
    • 2009
  • This paper proposes a system for improving image retrieval results using mean shift clustering and user feedback in an emotion space. The proposed system consists of a user interface, an emotion-space transformation, and re-ranking of the retrieval results. The user interface accepts a text query and presents retrieval results improved by user feedback on selected emotion words; the eight emotion words defined by Kobayashi are used: romantic, natural, casual, elegant, chic, classic, dandy, and modern. In the emotion-space transformation step, color and pattern features are extracted from the images retrieved for the input query by a web image search engine (Yahoo), and eight per-emotion classifiers map each image to a feature vector in the eight-dimensional emotion space. These feature vectors are then grouped by mean shift clustering, and a representative cluster is found as a result. In the re-ranking step, the retrieval results are improved using the mean vector of the representative cluster and, when user feedback is present, the user emotion vector generated from that feedback; similarity is computed according to each criterion and the retrieval results are re-ranked. To verify the performance of the proposed system, Yahoo search results for seven queries (400 images each, 2,800 in total) were compared with the results improved by the proposed system.
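
The mean shift step used to find the representative cluster can be sketched in one dimension with a flat kernel: each point climbs toward the mean of its neighbors until it stops moving, and converged positions are merged into modes. This is a minimal sketch, not the paper's 8-dimensional implementation; the bandwidth and data are assumptions:

```python
# Minimal 1-D mean shift with a flat kernel: each point iteratively moves
# to the mean of its neighbours within `bandwidth`; converged positions
# closer than bandwidth/2 are merged into a single cluster mode.

def mean_shift_1d(points, bandwidth=1.0, tol=1e-4, max_iter=100):
    modes = []
    for p in points:
        x = p
        for _ in range(max_iter):
            neigh = [q for q in points if abs(q - x) <= bandwidth]
            new_x = sum(neigh) / len(neigh)
            if abs(new_x - x) < tol:
                break
            x = new_x
        for m in modes:                    # merge near-identical modes
            if abs(m - x) <= bandwidth / 2:
                break
        else:
            modes.append(x)
    return modes

data = [1.0, 1.2, 0.9, 5.0, 5.1, 4.9]
print(mean_shift_1d(data))  # two modes, near 1.03 and 5.0
```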
