• Title/Abstract/Keyword: Face expression

Search results: 453

On the Implementation of a Facial Animation Using the Emotional Expression Techniques (FAES : 감성 표현 기법을 이용한 얼굴 애니메이션 구현)

  • Kim Sang-Kil;Min Yong-Sik
    • The Journal of the Korea Contents Association / Vol. 5, No. 2 / pp. 147-155 / 2005
  • In this paper, we present FAES (Facial Animation with Emotion and Speech), a system for speech-driven face animation with emotions. We animate cartoon faces not only from the input speech but also from emotions derived from the speech signal, and the system ensures smooth transitions and exact representation in the animation. To achieve this, we collected training data and built a database, using an SVM (Support Vector Machine) to recognize four categories of emotion: neutral, dislike, fear and surprise. This enables speech-driven animation with emotions. The system was trained on young Korean speakers and focuses only on Korean emotional facial expressions. Experimental results show that the expressible emotional range is expanded and that the accuracies of emotion recognition and continuous speech recognition increase by 7% and 5%, respectively, compared with the previous method.

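The abstract does not include an implementation, but the recognition step it describes, an SVM separating four emotion categories from speech-derived features, can be sketched as follows. The feature vectors, labels, and dimensions below are random placeholders; only the four class names come from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# The four emotion categories named in the abstract.
EMOTIONS = ["neutral", "dislike", "fear", "surprise"]

# Hypothetical per-utterance feature vectors; a real system would extract
# speech statistics (pitch, energy, MFCCs, ...) here instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 24))
y = rng.integers(0, len(EMOTIONS), size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM on standardized features, a common setup for small
# multi-class recognition problems like this one.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)

print("held-out accuracy:", clf.score(X_te, y_te))
print("first prediction:", EMOTIONS[clf.predict(X_te[:1])[0]])
```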

Development of An Interactive System Prototype Using Imitation Learning to Induce Positive Emotion (긍정감정을 유도하기 위한 모방학습을 이용한 상호작용 시스템 프로토타입 개발)

  • Oh, Chanhae;Kang, Changgu
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / Vol. 14, No. 4 / pp. 239-246 / 2021
  • In the fields of computer graphics and HCI, there are many studies on systems that create characters and interact with users naturally. Such studies have focused on the character's response to the user's behavior; designing character behavior that elicits positive emotions from the user remains a difficult problem. In this paper, we develop a prototype interaction system that uses artificial intelligence to elicit positive emotions from users through the movement of a virtual character. The proposed system is divided into face recognition and motion generation for the virtual character. A depth camera is used for face recognition, and the recognized data are passed to motion generation. We use imitation learning as the learning model. In motion generation, random actions are performed at first according to the user's facial expression data, and actions that elicit positive emotions from the user are learned through continuous imitation learning.
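
The abstract leaves the learning loop unspecified. The sketch below illustrates the interaction pattern it describes, random motions at first that gradually give way to motions drawing a positive facial response, using a simple epsilon-greedy action-value update as a stand-in for the paper's imitation-learning model; the action names and the valence reader are hypothetical.

```python
import random

ACTIONS = ["wave", "nod", "dance", "tilt_head"]  # hypothetical character motions

def read_user_valence() -> float:
    """Placeholder for the depth-camera facial-expression analysis.
    Returns a valence score in [-1, 1]; simulated here."""
    return random.uniform(-1.0, 1.0)

values = {a: 0.0 for a in ACTIONS}   # running estimate of each motion's effect
counts = {a: 0 for a in ACTIONS}
EPSILON = 0.2                        # exploration rate

for step in range(1000):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)       # explore a random motion
    else:
        action = max(values, key=values.get)  # exploit the best-so-far motion
    reward = read_user_valence()              # user's reaction to the motion
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running mean

print({a: round(v, 3) for a, v in values.items()})
```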

The Emotional Boundary Decision in a Linear Affect-Expression Space for Effective Robot Behavior Generation (효과적인 로봇 행동 생성을 위한 선형의 정서-표정 공간 내 감정 경계의 결정 -비선형의 제스처 동기화를 위한 정서, 표정 공간의 영역 결정)

  • Jo, Su-Hun;Lee, Hui-Sung;Park, Jeong-Woo;Kim, Min-Gyu;Chung, Myung-Jin
    • Proceedings of the HCI Society of Korea Conference / HCI Society of Korea 2008 Conference, Part 1 / pp. 540-546 / 2008
  • In the near future, robots should be able to understand humans' emotional states and exhibit appropriate behaviors accordingly. In human-human interaction, 93% of communication consists of the speaker's nonverbal behavior, and bodily movement conveys the quantity of emotion. Recent personal robots can interact with humans through multiple modalities such as facial expression, gesture, LED, sound, and sensors. However, a posture needs only a position and an orientation, whereas facial expressions and gestures involve movement, and verbal, vocal, musical, and color expressions need timing information. Because synchronization among modalities is a key problem, emotion expression needs a systematic approach. For example, at a low intensity of surprise the face can be expressed but a gesture cannot, because gesture does not scale linearly with intensity. Emotional boundaries therefore need to be decided for effective robot behavior generation and for synchronization with other expressive modalities. If so, how can we define the emotional boundaries, and how can the modalities be synchronized with one another?

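The core idea, a continuous facial channel versus a discrete gesture channel gated by an emotional boundary, can be illustrated with a minimal sketch. The boundary value and gesture names are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass
from typing import Optional

GESTURE_BOUNDARY = 0.4  # hypothetical minimum intensity for any gesture

@dataclass
class RobotBehavior:
    face_blend: float        # continuous blend weight for the facial pose
    gesture: Optional[str]   # discrete gesture, or None below the boundary

def generate_behavior(emotion: str, intensity: float) -> RobotBehavior:
    """Map an (emotion, intensity) pair onto the two modalities."""
    intensity = max(0.0, min(1.0, intensity))
    gesture = f"{emotion}_gesture" if intensity >= GESTURE_BOUNDARY else None
    return RobotBehavior(face_blend=intensity, gesture=gesture)

print(generate_behavior("surprise", 0.2))  # face only: below the boundary
print(generate_behavior("surprise", 0.8))  # face plus a synchronized gesture
```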

A Noisy-Robust Approach for Facial Expression Recognition

  • Tong, Ying;Shen, Yuehong;Gao, Bin;Sun, Fenggang;Chen, Rui;Xu, Yefeng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 4 / pp. 2124-2148 / 2017
  • Accurate facial expression recognition (FER) requires reliable signal filtering and effective feature extraction. Considering these requirements, this paper presents a novel approach for FER that is robust to noise. The main contributions of this work are: First, to preserve texture details in facial expression images and remove image noise, we improve the anisotropic diffusion filter by adjusting the diffusion coefficient according to two factors, namely the gray-value difference between the object and the background and the gradient magnitude of the object. The improved filter can effectively distinguish facial muscle deformation from facial noise in face images. Second, to further improve robustness, we propose a new feature descriptor that combines the Histogram of Oriented Gradients with the Canny operator (Canny-HOG) and can represent the precise deformation of eyes, eyebrows and lips for FER. Third, Canny-HOG's block and cell sizes are adjusted to reduce feature dimensionality and make the classifier less prone to overfitting. Our method was tested on images from the JAFFE and CK databases. Experimental results in L-O-Sam-O and L-O-Sub-O modes demonstrate the effectiveness of the proposed method. Moreover, the recognition rate of the method is not significantly affected under Gaussian noise or salt-and-pepper noise conditions.
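
A rough sketch of the Canny-HOG idea, computing HOG over a Canny edge response so the histogram concentrates on contour deformation, follows. It omits the paper's improved anisotropic diffusion filter, uses a stock image as a stand-in for a face crop, and the cell/block sizes are illustrative rather than the tuned values.

```python
from skimage import color, data, transform
from skimage.feature import canny, hog

# Stand-in for a cropped, grayscale face image.
face = color.rgb2gray(data.astronaut())
face = transform.resize(face, (128, 128))

# Canny edge response: contours of eyes, eyebrows, lips dominate.
edges = canny(face, sigma=1.5).astype(float)

# HOG over the edge map; larger cells mean fewer features and a
# classifier less prone to overfitting, as the abstract notes.
features = hog(
    edges,
    orientations=9,
    pixels_per_cell=(16, 16),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
)
print("Canny-HOG feature length:", features.shape[0])
```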

The Partial Transformation of Clothing Construction in Modern Fashion (현대 패션에 나타난 의복구성의 부분 변형)

  • Kim, Young-Ran
    • Journal of the Korea Fashion and Costume Design Association / Vol. 9, No. 1 / pp. 103-122 / 2007
  • Fashion has shown a changing face across periods, social changes, and cultures, and humans express these needs for change in fashion as "transformation." In the past, tribal peoples expressed themselves through direct body transformation and extreme decoration; today, humans express creative and aesthetic desires through shape, material, and transformation methods based on the characteristics of the body. Since the 20th century, exceptional transformations that break existing fixed ideas have appeared frequently in fashion through dissolution, transformation being a positive method of expression. This study obtained the following results. First, in examining the aesthetic meaning of the human body, the body is transformed by means of tools or mediation; the object of transformation is stably transformed through disintegration, distortion, exaggeration, and simplifying reduction according to the designer's sensibility. Second, transformation in relation to clothing construction is expressed through extension, reduction, simplification, and dissolution. In transformation drawing on tribal sensibility, the decorative desire of the past led to the transformation of the human body; to give varied change to fashion from past to present, an external formative will was introduced, and extreme expression was then achieved through the direct transformation of clothing forms. The expression of the human body thus appears to shift continuously among extension, exaggeration, reduction, and dissolution as transformation methods. Transformation in modern fashion is an expressive method of creative direction; extreme transformations that substitute for each part of the body are based on immanent play and representational satisfaction, and through such transformations a diversity of creative forms is achieved.


Changes of Gene Expression in NIH3T3 Cells Exposed to Osmotic and Oxidative Stresses

  • Lee, Jae-Seon;Jung, Ji-Hun;Kim, Tae-Hyung;Seo, Jeong-Sun
    • Genomics & Informatics / Vol. 2, No. 2 / pp. 67-74 / 2004
  • Cells consistently face stressful conditions, which cause them to modulate a variety of intracellular processes and adapt to these environmental changes via regulation of gene expression. Hyperosmotic and oxidative stresses are significant stressors that induce cellular damage and, ultimately, cell death. In this study, oligonucleotide microarrays were employed to investigate mRNA-level changes in cells exposed to hyperosmotic or oxidative conditions. In addition, since heat shock protein 70 (HSP70) is one of the most inducible stress proteins and plays a pivotal role in protecting cells against stressful conditions, we performed microarray analysis in HSP70-overexpressing cells to identify the genes expressed in an HSP70-dependent manner. Under hyperosmotic or oxidative stress conditions, a variety of genes showed altered expression. Down-regulation of protein phosphatase 1 beta (PP1 beta) and sphingosine-1-phosphate phosphatase 1 (SPPase1) was detected in both stress conditions. Microarray analysis of HSP70-overexpressing cells demonstrated that diverse mRNA species depend on the level of cellular HSP70. Genes encoding lysyl oxidase, thrombospondin 1, and procollagen displayed altered expression in all tested conditions. The results of this study will be useful for constructing networks of stress response genes.
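
As a rough illustration of this kind of expression screening, the sketch below flags genes whose log2 fold change between control and stress intensities crosses a two-fold threshold. The gene symbols and intensity values are fabricated placeholders, not data from the paper.

```python
import numpy as np
import pandas as pd

# Illustrative gene symbols only; values are random stand-ins for
# normalized microarray intensities.
genes = ["PP1B", "SPPASE1", "LOX", "THBS1", "COL1A1"]
rng = np.random.default_rng(1)
df = pd.DataFrame(
    {
        "control": rng.uniform(100, 1000, len(genes)),
        "stress": rng.uniform(100, 1000, len(genes)),
    },
    index=genes,
)

# Log2 fold change; |log2 FC| >= 1 means at least two-fold up or down.
df["log2_fc"] = np.log2(df["stress"] / df["control"])
changed = df[df["log2_fc"].abs() >= 1.0]
print(changed)
```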

The Effects of the Emotion Regulation Strategy to the Disgust Stimulus on Facial Expression and Emotional Experience (혐오자극에 대한 정서조절전략이 얼굴표정 및 정서경험에 미치는 영향)

  • Jang, Sung-Lee;Lee, Jang-Han
    • Korean Journal of Health Psychology / Vol. 15, No. 3 / pp. 483-498 / 2010
  • This study examines the effects of emotion regulation strategies on facial expressions and emotional experiences, comparing groups using antecedent-focused and response-focused regulation. Fifty female undergraduate students were instructed to use different emotion regulation strategies while viewing a disgust-inducing film, during which their facial expressions and emotional experiences were measured. Participants showed the highest frequency of disgust-related action units in the EG (expression group), followed in order by the DG (expressive dissonance group), CG (cognitive reappraisal group), and SG (expressive suppression group). The upper region of the face reflected genuine emotion: in this region, the frequency of disgust-related action units was lower in the CG than in the EG or DG. PANAS results indicated the largest decrease in positive emotion in the DG, but an increase in positive emotion in the CG. These findings suggest that cognitive reappraisal of an event is a more functional emotion regulation strategy, in terms of facial expression and emotional experience, than the other strategies.
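
The group comparison behind these results can be pictured with a small sketch: disgust-related action-unit counts per participant, averaged by instruction group. The counts below are simulated under the ordering the study reports; real FACS codings of the video recordings would supply them in practice.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Expression, dissonance, reappraisal, suppression groups, with
# simulated mean AU rates matching the reported ordering EG > DG > CG > SG.
groups_and_rates = [("EG", 9), ("DG", 7), ("CG", 5), ("SG", 3)]
records = [
    {"group": g, "disgust_au_count": int(rng.poisson(rate))}
    for g, rate in groups_and_rates
    for _ in range(12)  # 12 simulated participants per group
]
df = pd.DataFrame(records)

print(df.groupby("group")["disgust_au_count"].mean().sort_values(ascending=False))
```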

The Characteristics of Taeyangin on Body Shape, Face, Voice and Temperament (태양인 체형, 안면, 음성, 성격 특성)

  • Jang, Eun-Su;Do, Jun-Hyeong;Jang, Jun-Su;Ku, Bon-Cho;Yoo, Jong-Hyang;Choi, Hee-Seok;Lee, Si-Woo
    • Journal of Sasang Constitutional Medicine / Vol. 25, No. 3 / pp. 145-157 / 2013
  • Objectives This study aimed to reveal the characteristics of body shape, face, voice and temperament in Taeyangin. Methods Subjects were recruited from November 2005 to August 2012, and Sasang constitutional specialists at each clinic confirmed the Sasang constitution. Taeyangin (TY) served as the reference against which each of the other Sasang types was compared. ANOVA was used for continuous variables, factor analysis was first conducted on the temperament questionnaire, and a generalized propensity score with age and body mass index (BMI) was used in the adjusted model. The significance level was .05. Results 1. The TY body shape was generally smaller than Taeeumin (TE) (p<0.001) and Soyangin (SY) (p<0.05) in the crude model. In the adjusted model, the TY body shape was still smaller than TE (p<0.05), and there was no significant difference between TY and SY except rib circumference in males and forehead circumference in females (p<0.05). 2. The face and nose of TY were smaller than those of TE, and male and female TY differed from the other types in eye, nose and forehead variables in the crude model (p<0.05); most differences between TY and TE disappeared in the adjusted model. 3. The vocal pitch and speed of TY differed from the other types, and male and female TY differed in some frequency change rates in the crude model (p<0.05); most differences between TY and the other types were similar before and after adjustment. 4. The temperament of TY differed from SE before and after adjustment (p<0.05); compared with TE, TY males differed in the expression factor and TY females in the behavior factor (p<0.05). Conclusions This study reveals the characteristics of body shape, face, voice and temperament of TY males and females compared with each of the other types.
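
The crude-model comparison amounts to a one-way ANOVA of a continuous trait across the four Sasang types; a minimal sketch with simulated trait values (not the study's clinical measurements) follows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Simulated values of one continuous trait (e.g. a circumference
# measure) for each constitutional type; means are illustrative.
taeyangin = rng.normal(88, 5, 40)
taeeumin = rng.normal(95, 5, 40)
soyangin = rng.normal(90, 5, 40)
soeumin = rng.normal(86, 5, 40)

f_stat, p_value = stats.f_oneway(taeyangin, taeeumin, soyangin, soeumin)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # compare against the .05 level
```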

Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi Jung-Ju;Kim Dong-Sun;Lee In-Kwon
    • Journal of KIISE: Computer Systems and Theory / Vol. 32, No. 1 / pp. 39-48 / 2005
  • According to traditional 2D animation techniques, anticipation makes an animation more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation. Our approach assumes that an anticipatory facial expression can be found within an existing facial animation if it is long enough. Vertices of the face model are classified into a set of components using principal component analysis applied directly to the given key-framed and/or motion-captured facial animation data; the vertices in a single component have similar directions of motion in the animation. For each component, the animation is examined to find an anticipation effect for the given facial expression, and the effect that best preserves the topology of the face model is selected as the best anticipation effect. The best anticipation effect is automatically blended with the original facial animation while preserving the continuity and the entire duration of the animation. We show experimental results for given motion-captured and key-framed facial animations. This paper addresses part of a broader subject, the application of the principles of traditional 2D animation to 3D animation, and shows how to incorporate anticipation into 3D facial animation: animators can produce 3D facial animation with anticipation simply by selecting the facial expression in the animation.
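
One plausible reading of the vertex-grouping step, PCA over flattened per-vertex motion trajectories followed by clustering so that vertices with similar motion directions fall into one component, is sketched below with random animation data. The cluster count and dimensions are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_vertices, n_frames = 500, 120
# (vertex, frame, xyz) displacements from the rest pose; random here,
# key-framed or motion-captured data in the paper.
motion = rng.normal(size=(n_vertices, n_frames, 3))

# One row per vertex: its full trajectory, flattened.
trajectories = motion.reshape(n_vertices, -1)

# PCA compresses trajectories so vertices moving alike land close
# together; k-means then assigns them to components.
coords = PCA(n_components=8).fit_transform(trajectories)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(coords)

print("vertices per component:", np.bincount(labels))
```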

Model for stage make-up design by personality types based on Physiognomy - Focused on eyebrows and eyes make-up design - (관상학에 근거한 성격유형별 무대 분장디자인 모형 연구 -눈썹과 눈 디자인을 중심으로-)

  • Jeon, In-Mi;Lee, Hye-Joo
    • Archives of Design Research / Vol. 20, No. 1 / pp. 273-286 / 2007
  • The performing arts are a genre that integrates various fields of art, including stage make-up. Stage make-up functions as an important means of communication that helps the audience understand a character by transforming an actor or actress into the character in the play and by visually amplifying the character. Because of this creative role, a conceptual study of stage make-up design is needed so that the visual expression coincides with the intended character. In Korea, however, a systematic approach to stage make-up design by character type has not yet been developed or studied. This study presents a system model built on a theoretical study of psychological personality types and applies it to the expression of characters in a play on the basis of physiognomy (face-reading). The case study of visualization according to an actor's or actress's face shape focuses especially on the eyes and eyebrows, which are important elements in stage make-up design as tools for the strong expression of character. The model of stage make-up design by personality type constructed in this study is intended for methodical application both as a professional approach to actual stage make-up design and as an educational teaching method.
