• Title/Summary/Keyword: Facial Emotions


Comparative Study on Seven Emotions and Four Energies (칠정(七情)과 사기(四氣)에 대한 비교 연구)

  • Choi, Sung-Wook;Kang, Jung-Soo
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.19 no.3
    • /
    • pp.596-599
    • /
    • 2005
  • Human health is affected not only by physical conditions but also by mental and social well-being. Changes in human emotions show up as gestures, facial expressions, and sweating. Human emotions are affected by such autonomic nervous system functions as blood pressure, blood circulation speed, heartbeat, pupillary reflex, fluid transfusion, muscular contraction, and digestive function, all of which influence diseases of the whole body. Oriental Medicine sees, from a perspective of the unity of divinity and man, that human life activities are united in their physical and mental functions. From such a perspective, the human Five Organs are linked with the Five Spirits (五神) and Seven Emotions (七情); they affect one another and influence life activities both directly and indirectly. Based on Confucianism, the Sa-Sang Theory argues that human emotions can be categorized into four energy states and, therefore, that human diseases and the physiological conditions thereof may be determined differently depending on the Four Energies (四氣). There are common points between the Sa-Sang Theory and conventional Oriental Medicine in that human emotions affect an individual's health, so there seems to be much room for mutual complementation.

Automatic 3D Facial Movement Detection from Mirror-reflected Multi-Image for Facial Expression Modeling (거울 투영 이미지를 이용한 3D 얼굴 표정 변화 자동 검출 및 모델링)

  • Kyung, Kyu-Min;Park, Mignon;Hyun, Chang-Ho
    • Proceedings of the KIEE Conference
    • /
    • 2005.05a
    • /
    • pp.113-115
    • /
    • 2005
  • This thesis presents a method for 3D modeling of facial expressions from a frontal image and mirror-reflected multi-images. Since the proposed system uses only one camera, two mirrors, and simple mirror properties, it is robust, accurate, and inexpensive. In addition, it avoids the problem of synchronizing data among different cameras. Mirrors located near the cheeks reflect the side views of markers on the face. To optimize the system, we must select facial feature points intimately associated with human emotions; we therefore refer to the FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). We place colorful dot markers on the selected feature points to detect facial deformation while the subject makes a variety of expressions. Before computing the 3D coordinates of the extracted feature points, we group the points by facial part, which makes the matching process automatic. We experimented on about twenty Korean subjects in their late twenties and early thirties. Finally, we verify the performance of the proposed method by simulating an animation of 3D facial expressions.
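The geometric idea behind the abstract — a planar mirror turns one camera into an extra virtual view, giving stereo without synchronization — can be sketched as follows. This is a minimal illustration; the mirror plane, camera position, and coordinates are hypothetical, not values from the paper.

```python
import numpy as np

def reflect(p, n, d):
    """Reflect a 3D point p across the mirror plane n . x = d
    (n is the plane normal, normalized inside)."""
    n = n / np.linalg.norm(n)
    return p - 2.0 * (np.dot(n, p) - d) * n

# A camera at the origin plus a mirror at the plane x = 1 behaves like a
# second, virtual camera at the reflected position: stereo from one camera.
camera = np.array([0.0, 0.0, 0.0])
mirror_normal, mirror_offset = np.array([1.0, 0.0, 0.0]), 1.0
virtual_camera = reflect(camera, mirror_normal, mirror_offset)
```

Rays from the real camera and from the virtual camera to the same marker can then be triangulated exactly as in a conventional two-camera setup.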


A Generation Method of Comic Facial Expressions for Intelligent Avatar Communications (지적 아바타 통신을 위한 코믹한 얼굴 표정의 생성법)

  • ;;Yoshinao Aoki
    • Proceedings of the IEEK Conference
    • /
    • 2000.11d
    • /
    • pp.227-230
    • /
    • 2000
  • Sign language can be used as an auxiliary communication means between avatars of different languages in cyberspace. In that case, an intelligent communication method can also be utilized to achieve real-time communication, where intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real pictures. In this paper, a method of generating facial gesture CG animation on different avatar models is provided. First, to edit emotional expressions efficiently, a comic-style facial model having only eyebrows, eyes, a nose, and a mouth is employed. The generation of facial emotion animation with these parameters is then investigated. Experimental results show that the method could be used for intelligent avatar communication between Korean and Japanese.


Life-like Facial Expression of Mascot-Type Robot Based on Emotional Boundaries (감정 경계를 이용한 로봇의 생동감 있는 얼굴 표정 구현)

  • Park, Jeong-Woo;Kim, Woo-Hyun;Lee, Won-Hyong;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.281-288
    • /
    • 2009
  • Nowadays, many robots have evolved to imitate human social skills so that sociable interaction with humans is possible. Socially interactive robots require abilities different from those of conventional robots. For instance, human-robot interactions are accompanied by emotion, similar to human-human interactions. Robot emotional expression is thus very important for humans; this is particularly true of facial expressions, which play an important role among the non-verbal forms of communication. In this paper, we introduce a method of creating lifelike facial expressions in robots using variations of the affect values that constitute the robot's emotions, based on emotional boundaries. The proposed method was examined in experiments with two facial robot simulators.
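The boundary-based blending idea can be sketched roughly as below. The key expressions, their actuator values, and the reduction of affect to a single valence axis are illustrative assumptions for the sketch, not the paper's actual parameterization.

```python
import numpy as np

# Hypothetical key expressions: two actuator targets (mouth, brows)
# anchored at emotional-boundary affect values on a valence axis.
KEYS = {
    -1.0: np.array([0.0, 1.0]),  # sad
     0.0: np.array([0.5, 0.5]),  # neutral
     1.0: np.array([1.0, 0.0]),  # happy
}

def expression_for(affect):
    """Linearly blend the two key expressions whose emotional
    boundaries bracket the current affect value."""
    pts = sorted(KEYS)
    affect = max(pts[0], min(pts[-1], affect))  # clamp to the valence range
    for lo, hi in zip(pts, pts[1:]):
        if lo <= affect <= hi:
            t = (affect - lo) / (hi - lo)
            return (1 - t) * KEYS[lo] + t * KEYS[hi]
```

Because the blend follows the affect value continuously across boundaries, the face never jumps between expressions, which is what produces the "lifelike" motion.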


Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.8
    • /
    • pp.3473-3487
    • /
    • 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two sets of example models, called key visemes and key expressions, are used for lip-synchronization and facial expressions, respectively. The key visemes represent lip shapes of phonemes such as vowels and consonants, while the key expressions represent basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on this transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During the synthesizing process, an importance-based scheme is introduced to combine both lip-synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation with high accuracy (over 90%) in speech recognition.
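The viseme-interpolation step can be sketched minimally as below. The viseme shapes (two mouth control points each), the phoneme labels, and the timeline are made-up placeholders; a real system would interpolate full 3D mouth geometry along a TTS phonetic transcript.

```python
import numpy as np

# Hypothetical key visemes: a pair of mouth control values per phoneme.
VISEMES = {
    "A": np.array([0.0, 1.0]),  # open mouth
    "M": np.array([0.0, 0.0]),  # closed lips
    "O": np.array([0.6, 0.8]),  # rounded lips
}

def synthesize(timeline, t):
    """Interpolate between the two key visemes that bracket time t.
    timeline: list of (time, phoneme) pairs from a phonetic transcript."""
    for (t0, p0), (t1, p1) in zip(timeline, timeline[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return (1 - w) * VISEMES[p0] + w * VISEMES[p1]
    return VISEMES[timeline[-1][1]]  # hold the last viseme after the end
```

The paper's key expressions would be blended on top of this sequence; the importance-based scheme decides, per region of the face, whether the viseme or the expression dominates.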

The Facial Expression Recognition using the Inclined Face Geometrical information

  • Zhao, Dadong;Deng, Lunman;Song, Jeong-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2012.10a
    • /
    • pp.881-886
    • /
    • 2012
  • This paper addresses facial expression recognition based on the geometrical information of an inclined face. In facial expression recognition the mouth plays a key role in expressing emotions, so the features in this paper are mainly based on the shape of the mouth, followed by the eyes and eyebrows. The paper disperses the feature values via a weighting function and proposes an expression classification method with good classification performance; a final recognition model is constructed.


Dynamic Emotion Classification through Facial Recognition (얼굴 인식을 통한 동적 감정 분류)

  • Han, Wuri;Lee, Yong-Hwan;Park, Jeho;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology
    • /
    • v.12 no.3
    • /
    • pp.53-57
    • /
    • 2013
  • Human emotions are expressed in various ways: through language, facial expressions, and gestures. In particular, the facial expression carries much information about human emotion. Such vague human emotions appear not as a single emotion but as a combination of various emotions. This paper proposes an emotion recognition algorithm using an Active Appearance Model (AAM) and a Fuzzy k-Nearest Neighbor (k-NN) classifier, which labels facial expressions in a way that reflects these vague emotions. Applying the Mahalanobis distance to the class centers, we determine the membership level between each class center and each class; this membership level in turn expresses the intensity of the emotion. Our emotion recognition system can recognize complex emotions using the Fuzzy k-NN classifier.
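The membership computation the abstract describes — Mahalanobis distance to each class center, converted into fuzzy memberships — can be sketched in a generic fuzzy k-NN style. The shared covariance, the fuzzifier m, and the class centers below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def fuzzy_memberships(x, centers, cov, m=2.0):
    """Membership of sample x in each emotion class: inverse
    Mahalanobis distance to each class center, normalized to sum to 1
    (closer centers get higher membership = stronger emotion intensity)."""
    inv = np.linalg.inv(cov)
    d2 = np.array([(x - c) @ inv @ (x - c) for c in centers])
    w = 1.0 / np.maximum(d2, 1e-9) ** (1.0 / (m - 1.0))
    return w / w.sum()
```

A sample midway between two class centers gets equal membership in both, which is exactly how a blended expression (e.g. partly happy, partly surprised) is represented instead of a single hard label.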

An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia Pacific Journal of Information Systems
    • /
    • v.27 no.1
    • /
    • pp.38-53
    • /
    • 2017
  • As sensor technologies and image processing technologies make collecting information on users' behavior easy, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. Specifically, many studies have used normal cameras in the multimodal case of facial and body expressions. Because normal cameras generally produce only two-dimensional images, however, previous studies used limited information. In the present research, we propose an artificial neural network-based model using a high-definition webcam and Kinect to recognize users' emotions from facial and bodily expressions when watching a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research will be helpful for the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.

Examination of explicit and implicit emotions and relationship with the intention to support breastfeeding in public: a descriptive study

  • Katilin D. Overgaard;Lauren M. Dinour;Adrian L. Kerrihard;Yeon K. Bai
    • Korean Journal of Community Nutrition
    • /
    • v.28 no.2
    • /
    • pp.114-123
    • /
    • 2023
  • Objectives: Current social norms in the United States do not favor breastfeeding in public. This study examined associations between college students' explicit and implicit emotions of breastfeeding in public and their intention to support public breastfeeding. Methods: Twenty-two student participants viewed images of a breastfeeding woman with a fully-covered, fully-exposed, or partially-exposed breast in a public setting. After viewing each image, participants' explicit emotions (self-reported) of the image were measured using a questionnaire and their implicit emotions (facial expression) were measured using FaceReader technology. We examined if a relationship exists between both emotions [toward images] and intention to support breastfeeding in public using correlation techniques. We determined the relative influence of two emotions on the intention to support breastfeeding in public using regression analyses. Results: The nursing images depicting a fully-covered breast (r = 0.425, P = 0.049 vs. r = 0.271, P = 0.222) and fully-exposed breast (r = 0.437, P = 0.042 vs. r = 0.317, P = 0.150) had stronger associations with explicit emotions and intention to support breastfeeding in public compared to implicit emotions and intention. Breastfeeding knowledge was associated with a positive explicit emotion for images with partial- (β = 0.60, P = 0.003) and full-breast exposure (β = 0.65, P = 0.002). Conclusions: Explicit emotions appear to drive stated intentions to support public breastfeeding. Further research is needed to understand the disconnect between explicit and implicit emotions, the factors that influence these emotions, and whether stated intentions lead to consistent behavior.
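The correlation step of this analysis is standard Pearson correlation between emotion scores and support-intention scores. As a generic sketch (the function and sample values below are illustrative, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors,
    e.g. explicit-emotion scores vs. intention-to-support scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))
```

Values near +1 (as for the fully-covered and fully-exposed image conditions, r = 0.425 and r = 0.437) indicate that higher explicit-emotion scores go with stronger stated intention to support.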

A Real-time Interactive Shadow Avatar with Facial Emotions (감정 표현이 가능한 실시간 반응형 그림자 아바타)

  • Lim, Yang-Mi;Lee, Jae-Won;Hong, Euy-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.4
    • /
    • pp.506-515
    • /
    • 2007
  • In this paper, we propose a Real-time Interactive Shadow Avatar (RISA) that can express facial emotions changing in response to the user's gestures. The avatar's shape is a virtual shadow constructed from a real-time sampled picture of the user's silhouette. Several predefined facial animations are overlaid on the face area of the virtual shadow, according to the type of hand gesture. We use the background subtraction method to separate the virtual shadow, and a simplified region-based tracking method is adopted for tracking hand positions and detecting hand gestures. In order to express smooth changes of emotion, we use a refined morphing method that uses many more frames than traditional dynamic emoticons. RISA can be directly applied to the area of interface media arts, and we expect the detection scheme of RISA to be utilized in the near future as an alternative media interface for DMB and camera phones, which need simple input devices.
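The background subtraction step used to cut out the virtual shadow can be sketched in its simplest per-pixel form. The threshold value and the grayscale-frame assumption are illustrative; the paper's pipeline may differ.

```python
import numpy as np

def shadow_mask(frame, background, thresh=30):
    """Separate the user's silhouette (the virtual shadow) from a
    pre-captured empty background by per-pixel absolute difference.
    Returns a binary mask: 1 = foreground (the shadow), 0 = background."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

The resulting mask is the shadow shape onto which the predefined facial animations are composited; hand regions tracked within the same mask trigger the emotion changes.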
