• Title/Summary/Keyword: Facial Emotions

A Study on Lip Sync and Facial Expression Development in Low Polygon Character Animation (로우폴리곤 캐릭터 애니메이션에서 립싱크 및 표정 개발 연구)

  • Ji-Won Seo;Hyun-Soo Lee;Min-Ha Kim;Jung-Yi Kim
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.4
    • /
    • pp.409-414
    • /
    • 2023
  • We describe how to implement the character expressions and animations that play an important role in conveying emotion and personality in low-polygon character animation. With the development of the video industry, character expressions and mouth-shape lip-syncing in animation can achieve natural movement at a level close to real life; for non-experts, however, such expert-level techniques are difficult to use. We therefore present a guide that helps low-budget, low-polygon character animators and non-experts create more natural mouth-shape lip-syncing using accessible, highly usable features. A total of eight mouth shapes were developed for lip-sync animation: 'ㅏ', 'ㅔ', 'ㅣ', 'ㅗ', 'ㅜ', 'ㅡ', 'ㅓ', plus a mouth shape that expresses labial consonants. For facial expression animation, a total of nine animations were produced by adding three frequently used expressions (interest, boredom, and pain) to the six basic human emotions classified by Paul Ekman: surprise, fear, disgust, anger, happiness, and sadness. This study is meaningful in that it makes it easy to produce natural animation using features built into the modeling program, without complex technologies or external programs.
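
As a hedged illustration of the eight-shape scheme described above, the following minimal Python sketch maps each Korean phoneme to a mouth-shape key. The blendshape names ("mouth_A", "mouth_closed", etc.) are hypothetical; actual names depend on the shape keys defined in the modeling program.

```python
# Minimal sketch of the vowel-to-viseme mapping described in the abstract.
# Blendshape names are assumptions, not the authors' asset names.

VISEMES = {
    "ㅏ": "mouth_A",   # wide open jaw
    "ㅔ": "mouth_E",   # mid-open, spread
    "ㅣ": "mouth_I",   # spread lips
    "ㅗ": "mouth_O",   # rounded, mid-open
    "ㅜ": "mouth_U",   # rounded, narrow
    "ㅡ": "mouth_EU",  # flat, narrow
    "ㅓ": "mouth_EO",  # open, unrounded
}
LABIALS = {"ㅁ", "ㅂ", "ㅍ"}  # labial consonants close the lips

def viseme_for(phoneme: str) -> str:
    """Return the mouth-shape key to animate for a Korean phoneme."""
    if phoneme in LABIALS:
        return "mouth_closed"  # the eighth, labial mouth shape
    return VISEMES.get(phoneme, "mouth_rest")

print(viseme_for("ㅏ"), viseme_for("ㅁ"))
```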

Emotional Expression System Based on Dynamic Emotion Space (동적 감성 공간에 기반한 감성 표현 시스템)

  • Sim Kwee-Bo;Byun Kwang-Sub;Park Chang-Hyun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.1
    • /
    • pp.18-23
    • /
    • 2005
  • Human emotion is difficult to define and classify. These vague emotions appear not as a single emotion but as a combination of various emotions, among which one salient emotion is expressed. This paper proposes an emotional expression algorithm using a dynamic emotion space, which produces facial expressions similar to vague human emotions. While existing avatars express several predefined emotions drawn from a database, our emotion expression system can produce an unlimited variety of facial expressions by expressing emotion based on a dynamically changing emotion space. To see whether the system actually produces complex and varied human expressions, we implemented it, carried out experiments, and verified the efficacy of the emotional expression system based on the dynamic emotion space.
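
Since the abstract does not spell out the emotion-space dynamics, the following Python sketch only illustrates the general idea under stated assumptions: prototype expressions sit at fixed coordinates in a two-dimensional space, and the displayed expression is an inverse-distance blend around the continuously moving emotion state.

```python
import numpy as np

# Illustrative only: the coordinates and prototypes are assumptions.
PROTOTYPES = {                     # (valence, arousal)-style coordinates
    "happiness": np.array([0.8, 0.5]),
    "anger":     np.array([-0.7, 0.7]),
    "sadness":   np.array([-0.6, -0.5]),
    "surprise":  np.array([0.2, 0.9]),
}

def blend_weights(state: np.ndarray, eps: float = 1e-6) -> dict:
    """Blend expression weights by inverse distance to each prototype,
    so an in-between (vague) emotion yields an in-between expression."""
    inv = {k: 1.0 / (np.linalg.norm(state - p) + eps)
           for k, p in PROTOTYPES.items()}
    total = sum(inv.values())
    return {k: round(v / total, 3) for k, v in inv.items()}

# A state drifting between happiness and surprise:
print(blend_weights(np.array([0.5, 0.7])))
```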

GA-optimized Support Vector Regression for an Improved Emotional State Estimation Model

  • Ahn, Hyunchul;Kim, Seongjin;Kim, Jae Kyeong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.6
    • /
    • pp.2056-2069
    • /
    • 2014
  • In order to implement interactive and personalized Web services properly, it is necessary to understand the tangible and intangible responses of the users and to recognize their emotional states. Recently, some studies have attempted to build emotional state estimation models based on facial expressions. Most of these studies have applied multiple regression analysis (MRA), artificial neural networks (ANN), and support vector regression (SVR) as the prediction algorithm, but the prediction accuracies have been relatively low. In order to improve the prediction performance of the emotion prediction model, we propose a novel SVR model that is optimized using a genetic algorithm (GA). Our proposed algorithm, GASVR, is designed to optimize the kernel parameters and the feature subsets of SVRs in order to predict the levels of two aspects of the users' emotions: valence and arousal. In order to validate the usefulness of GASVR, we collected a real-world data set of facial responses and emotional states via a survey. We applied GASVR and other algorithms, including MRA, ANN, and conventional SVR, to the data set. Finally, we found that GASVR outperformed all of the comparative algorithms in the prediction of the valence and arousal levels.
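
The following Python sketch shows the general GA-over-SVR pattern the abstract describes: a chromosome encodes an RBF kernel's C and gamma plus a binary feature mask, and fitness is cross-validated error. It is a minimal sketch with synthetic placeholder data, not the authors' exact GASVR; the population size, crossover, and mutation rates are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 10))              # placeholder facial features
y = 0.5 * X[:, 0] + rng.normal(size=80)    # placeholder valence ratings

def fitness(chrom):
    """Cross-validated negative MSE of an RBF-SVR under this chromosome."""
    log_c, log_g, mask = chrom
    if mask.sum() == 0:                    # an empty feature subset is invalid
        return -np.inf
    model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_g)
    return cross_val_score(model, X[:, mask.astype(bool)], y,
                           scoring="neg_mean_squared_error", cv=5).mean()

def random_chrom():
    return [rng.uniform(-2, 3),            # log10(C)
            rng.uniform(-4, 1),            # log10(gamma)
            rng.integers(0, 2, size=10)]   # binary feature mask

pop = [random_chrom() for _ in range(20)]
for _ in range(15):                        # generations
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:10], []
    for _ in range(10):
        i, j = rng.choice(10, size=2, replace=False)
        a, b = parents[i], parents[j]
        mask = np.where(rng.random(10) < 0.5, a[2], b[2])       # uniform crossover
        mask = np.where(rng.random(10) < 0.05, 1 - mask, mask)  # bit-flip mutation
        children.append([(a[0] + b[0]) / 2 + rng.normal(0, 0.1),
                         (a[1] + b[1]) / 2 + rng.normal(0, 0.1),
                         mask])
    pop = parents + children

best = max(pop, key=fitness)
print("best CV score:", round(fitness(best), 3))
```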

Differentiation of Facial EMG Responses Induced by Positive and Negative Emotions in Children (긍정정서와 부정정서에 따른 아동의 안면근육반응 차이)

  • Jang Eun-Hye;Lim Hye-Jin;Lee Young-Chang;Chung Soon-Cheol;Sohn Jin-Hun
    • Science of Emotion and Sensibility
    • /
    • v.8 no.2
    • /
    • pp.161-167
    • /
    • 2005
  • This study examines how facial EMG responses change when children experience a positive emotion (happiness) and a negative emotion (fear), and whether the positive emotion can be distinguished from the negative emotion by EMG responses. Audiovisual film clips were used to evoke the two emotions. 47 children (11-13 years old; 23 boys and 24 girls) participated in the study. Facial EMG (right corrugator and orbicularis oris) was measured while the children experienced the positive or negative emotion, and an emotional assessment scale was used to measure their psychological responses. The clips showed more than 85% appropriateness, with effectiveness ratings of 3.15 and 4.04 (on a 5-point scale) for happiness and fear, respectively. Facial EMG responses differed significantly between the resting state and the emotional state for both happiness and fear (p < .001). The results suggest that each emotion was distinguishable by corrugator and orbicularis oris responses. Specifically, the corrugator was more activated in the positive emotion (happiness) than in the negative emotion (fear), whereas the orbicularis oris was more activated in the negative emotion (fear) than in the positive emotion (happiness).
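
A minimal Python sketch of the rest-versus-emotion comparison reported above, using simulated values in place of the actual corrugator amplitudes; the data and effect size are made up, and only the paired-test structure reflects the study design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rest = rng.normal(10.0, 2.0, 47)            # baseline amplitude (a.u.), n = 47
emotion = rest + rng.normal(1.5, 1.0, 47)   # amplitude while viewing film clips

t, p = stats.ttest_rel(emotion, rest)       # paired comparison within children
print(f"corrugator: t = {t:.2f}, p = {p:.4f}")
```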

A Preliminary Study of Attentional Blink of Rapid Serial Visual Presentation in Burn Patients with Posttraumatic Stress Disorder (화상 환자에서 신속 순차 시각 제시를 이용한 주의깜빡임에 관한 예비연구)

  • Kim, Dae Hee;Jun, Bora;Seo, Cheong Hoon;Cho, Yongsuk;Yim, Haejun;Hur, Jun;Kim, Dohern;Chun, Wook;Kim, Jonghyun;Jung, Myung Hun;Choi, Ihngeun;Lee, Boung Chul
    • Korean Journal of Biological Psychiatry
    • /
    • v.17 no.2
    • /
    • pp.79-85
    • /
    • 2010
  • Objectives : Trauma patients show an attentional bias that reinforces traumatic memories and causes cognitive errors; understanding such selective attention may explain many aspects of posttraumatic stress disorder (PTSD) symptoms. Methods : We used the rapid serial visual presentation (RSVP) method to verify attentional blink in burn patients with PTSD. The International Affective Picture System (IAPS) provided the stimuli and distracters. In the 'neutral test', patients were presented a series of pictures, with human face pictures as the target stimuli, at 100 ms intervals; the interval between the two target facial pictures was randomized, and the accuracy of recognizing the second facial picture was measured. In the 'stress test', the first target was instead a stress picture that arouses the patient's emotions. The neutral and stress tests were performed with seven PTSD patients and 20 controls. In the '85 ms test', performed with eighteen PTSD patients, the presentation interval was reduced to 85 ms. The accuracy of recognizing the second target facial picture was rated in all three tests. Results : Attentional blinks were observed at 100-400 ms of RSVP. PTSD patients showed an increased recognition rate in the 'stress test' compared with the 'neutral test'. When the presentation interval was decreased to 85 ms, PTSD patients showed a reduced attentional blink effect at a target interval of 170 ms. Conclusion : We found that the attentional blink effect can be modulated by stress stimuli in burn patients, and that it may depend on the stimulus interval and the character of the stimulus. There may be a specific mechanism of selective attention in the attentional blink, especially in facial picture processing.
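
The standard attentional-blink analysis implied above is second-target (T2) accuracy conditioned on the T1-T2 lag, compared across groups. The pandas sketch below shows that computation on a toy trial table; the column names and values are assumptions, not the study's data.

```python
import pandas as pd

# Toy trial table: one row per RSVP trial (values invented for illustration).
trials = pd.DataFrame({
    "group":  ["ptsd", "ptsd", "control", "control"] * 3,
    "lag_ms": [100, 200, 100, 200, 300, 400, 300, 400, 100, 200, 300, 400],
    "t2_correct": [0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1],
})

# Recognition rate of the second target by lag and group; the blink shows
# up as a dip in accuracy at lags of roughly 100-400 ms.
blink = trials.groupby(["group", "lag_ms"])["t2_correct"].mean()
print(blink)
```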

Enhancing e-Learning Interactivity via Emotion Recognition Computing Technology (감성 인식 컴퓨팅 기술을 적용한 이러닝 상호작용 기술 연구)

  • Park, Jung-Hyun;Kim, InOk;Jung, SangMok;Song, Ki-Sang;Kim, JongBaek
    • The Journal of Korean Association of Computer Education
    • /
    • v.11 no.2
    • /
    • pp.89-98
    • /
    • 2008
  • Providing appropriate interaction between the learner and the e-Learning system is an essential factor in a successful e-Learning system. Although many interaction functions are employed in multimedia Web-based instruction content, learners experience a lack of the real-time feedback that human educators give, due to the limitations of Human-Computer Interaction techniques. In this paper, an emotion recognition system based on learner facial expressions was developed and applied to a tutoring system. As human educators do, the system observes learners' emotions from their facial expressions and provides pertinent feedback. Such varied feedback can motivate learners and relieve the isolation of studying alone in an e-Learning environment. The test results showed that the system may provide significant improvement in terms of interest and educational achievement.
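
As a hedged illustration of the feedback loop described above, the sketch below maps a recognized learner emotion to a tutoring response. The emotion labels and messages are invented for illustration; the paper's actual categories and feedback rules are not given in this abstract.

```python
# Hypothetical emotion-to-feedback mapping (labels and messages assumed).
FEEDBACK = {
    "confused":   "Let's review the previous example step by step.",
    "bored":      "Here is a short quiz to try something more challenging.",
    "engaged":    "Great focus! Moving on to the next topic.",
    "frustrated": "Take a short break, then we'll try an easier exercise.",
}

def respond(emotion: str) -> str:
    """Map a recognized learner emotion to a tutoring response,
    imitating the real-time feedback a human educator would give."""
    return FEEDBACK.get(emotion, "Keep going, you're doing well.")

print(respond("bored"))
```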

Development of Content for the Robot that Relieves Depression in the Elderly Using Music Therapy (음악요법을 이용한 노인의 우울증 완화 로봇 'BOOGI'의 콘텐츠 개발)

  • Jung, Yu-Hwa;Jeong, Seong-Won
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.2
    • /
    • pp.74-85
    • /
    • 2015
  • The positive effects of percussion instruments can increase self-esteem and decrease depression in the elderly. Based on this, content for a percussion instrument robot that the elderly can use to play music is developed, and the elements of the interaction between the elderly and the robot through the robot content are extracted. Music that arouses positive memories in the elderly is selected as part of the music therapy robot content in order to relieve depression, and a scoring system for playing music is constructed. In addition, the interaction components of the robot's facial expressions, which stimulate emotion and sensitivity in the elderly, are designed. These components enable the elderly to take an active part in using the instrument to change the robot's facial expressions, which have three degrees of emotion: neutral-happy, happy, and very happy. The robot is not only a music game machine: it also maximizes the relief of depression through interaction, as the elderly person listens to what the robot plays, becomes involved, and plays music along with the robot.
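
A minimal sketch of how a playing score could drive the three expression levels named above; the normalized score and the thresholds are assumptions, not the paper's scoring rules.

```python
def robot_expression(score: float) -> str:
    """Map a normalized playing score (0-1, assumed) to one of the three
    degrees of emotion named in the abstract."""
    if score >= 0.8:
        return "very happy"
    if score >= 0.5:
        return "happy"
    return "neutral-happy"

for s in (0.3, 0.6, 0.9):
    print(s, "->", robot_expression(s))
```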

Context Modulation Effect by Affective Words Influencing on the Judgment of Facial Emotion (얼굴정서 판단에 미치는 감정단어의 맥락조절효과)

  • Lee, Jeongsoo;Yang, Hyeonbo;Lee, Donghoon
    • Science of Emotion and Sensibility
    • /
    • v.22 no.2
    • /
    • pp.37-48
    • /
    • 2019
  • The current research explores the effect of language on the perception of facial emotion, as suggested by the psychological construction theory of emotion, using a psychophysical method. We hypothesize that the perception of a facial expression may be influenced if the observer is shown an affective word before judging the expression, and that the understanding of the facial emotion will be in line with the conceptual context that the word denotes. In the two experiments conducted for this project, a control stimulus or words denoting anger or happiness were briefly presented to participants before a target face. The target faces were randomly selected from seven faces gradually morphed from neutral to angry (Experiment 1) or from neutral to happy (Experiment 2). Participants performed a two-alternative forced choice (2AFC) task, judging the emotion of the target face (angry versus neutral, or happy versus neutral). The results of Experiment 1, compared with the control condition, showed that words denoting anger decreased the point of subjective equality (PSE) for judging the target as angry, whereas words denoting happiness increased the PSE. Experiment 2, in which participants judged expressions from happy to neutral, produced a contrasting pattern of results. These outcomes support the claim of the psychological construction theory of emotion that the perception of facial emotion is an active construction process that can be influenced by information, such as affective words, that provides conceptual context.
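
The PSE in a 2AFC morph task is conventionally read off a fitted psychometric function. The sketch below fits a logistic to made-up response proportions across the seven morph levels and reports the 50% point; the data are placeholders, not the study's results.

```python
import numpy as np
from scipy.optimize import curve_fit

levels = np.arange(1, 8)                        # morph level: neutral (1) -> angry (7)
p_angry = np.array([0.02, 0.08, 0.20, 0.45, 0.75, 0.92, 0.98])  # placeholder data

def logistic(x, pse, slope):
    """Psychometric function; the PSE is the level judged angry 50% of the time."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

(pse, slope), _ = curve_fit(logistic, levels, p_angry, p0=[4.0, 1.0])
print(f"PSE = {pse:.2f}")  # an anger-word prime would shift this downward
```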

Multimodal Parametric Fusion for Emotion Recognition

  • Kim, Jonghwa
    • International journal of advanced smart convergence
    • /
    • v.9 no.1
    • /
    • pp.193-201
    • /
    • 2020
  • The main objective of this study is to investigate the impact of additional modalities on the performance of emotion recognition using speech, facial expression, and physiological measurements. To compare different approaches, we designed a feature-based recognition system as a benchmark, which carries out linear supervised classification followed by leave-one-out cross-validation. For the classification of four emotions, bimodal fusion improved the recognition accuracy of the unimodal approaches in our experiment, while the performance of trimodal fusion varied strongly across individuals. Furthermore, we observed extremely high disparity between single-class recognition rates, and no single best-performing modality emerged in our experiment. Based on these observations, we developed a novel fusion method, called parametric decision fusion (PDF), which builds emotion-specific classifiers and exploits the advantages of a parameterized decision process. Using the PDF scheme, we achieved a 16% improvement in accuracy for subject-dependent recognition and 10% for subject-independent recognition compared to the best unimodal results.
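
The following Python sketch illustrates the per-emotion, per-modality weighting idea the abstract names; it is not the paper's actual PDF algorithm. One classifier score per (emotion, modality) pair is fused with emotion-specific weights, so a modality that is strong for one emotion can dominate only where it helps. The scores, weights, and the four emotion labels are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
EMOTIONS = ["joy", "anger", "sadness", "pleasure"]      # assumed labels
MODALITIES = ["speech", "face", "physiology"]

# scores[e][m] in [0, 1]: confidence of the emotion-e classifier on modality m
scores = {e: {m: rng.random() for m in MODALITIES} for e in EMOTIONS}

# weights[e][m]: in a real system these would be learned per emotion;
# here they start uniform.
weights = {e: {m: 1.0 / len(MODALITIES) for m in MODALITIES} for e in EMOTIONS}

def fuse(scores, weights):
    """Pick the emotion with the highest weighted sum of modality scores."""
    fused = {e: sum(weights[e][m] * scores[e][m] for m in MODALITIES)
             for e in EMOTIONS}
    return max(fused, key=fused.get)

print("predicted emotion:", fuse(scores, weights))
```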

Virtual Human Authoring ToolKit for a Senior Citizen Living Alone (독거노인용 가상 휴먼 제작 툴킷)

  • Shin, Eunji;Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.9
    • /
    • pp.1245-1248
    • /
    • 2020
  • Elderly people living alone need smart care for independent living. Recent advances in artificial intelligence allow easier interaction with a computer-controlled virtual human, which can realize services such as medication intake guidance for the elderly living alone. In this paper, we propose an intelligent virtual human and present our toolkit for controlling virtual humans that serve a senior citizen living alone. To produce the virtual human's behavior, the authoring toolkit maps the gestures, emotions, and voices of virtual humans. Configured to create virtual human interactions, the toolkit enables a suitable virtual human response with facial expressions, gestures, and voice.
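
As a hedged illustration of the kind of mapping such an authoring toolkit manages, the sketch below bundles one utterance with a facial expression, gesture, and voice style. The field names and values are invented for illustration and are not the toolkit's actual API.

```python
from dataclasses import dataclass

@dataclass
class VirtualHumanAction:
    """One authored response: what the virtual human says and how."""
    utterance: str          # text for speech synthesis
    facial_expression: str  # e.g., a named blendshape preset (assumed)
    gesture: str            # e.g., a named gesture clip (assumed)
    voice_style: str        # e.g., a TTS style tag (assumed)

medicine_guide = VirtualHumanAction(
    utterance="It is time to take your blood pressure medicine.",
    facial_expression="gentle_smile",
    gesture="point_to_pillbox",
    voice_style="calm",
)
print(medicine_guide)
```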