• Title/Summary/Keyword: Facial expression

Search Results: 627

A Study on Pattern of Facial Expression Presentation in Character Animation (애니메이션 캐릭터의 표정연출 유형 연구)

  • Hong Soon-Koo
    • The Journal of the Korea Contents Association / v.6 no.8 / pp.165-174 / 2006
  • According to Birdwhistell, language conveys only 35% of the meaning in communication; the remaining 65% is conveyed through non-linguistic media. Humans do not rely entirely on linguistic communication but are sensitive beings who use all of their senses. By using facial expressions and gestures as well as language, human communication can convey more concrete meaning. Facial expression in particular is a many-sided message system that delivers individual personality, interests, information about responses, and emotional states, and it can be called a powerful communication tool. Although it can vary with expressive technique and with the degree and quality of expression, the symbolic sign of facial expression is characterized by its generalized quality. Animation characters, as roles in a story, gain vitality through emotional expression: their inner world and psychological states can be revealed and read naturally in their actions and facial expressions.

A Review of Facial Expression Recognition Issues, Challenges, and Future Research Direction

  • Yan, Bowen;Azween, Abdullah;Lorita, Angeline;S.H., Kok
    • International Journal of Computer Science & Network Security / v.23 no.1 / pp.125-139 / 2023
  • Facial expression recognition, a topical problem in the fields of computer vision and pattern recognition, is a direct means of recognizing human emotions and behaviors. This paper first summarizes the datasets commonly used for expression recognition and their characteristics, then presents traditional machine learning algorithms, with their benefits and drawbacks, across the three key stages of facial expression recognition: image pre-processing, feature extraction, and expression classification. Deep-learning-oriented expression recognition methods and the performance of various algorithmic frameworks are also analyzed and compared. Finally, the current barriers to facial expression recognition and potential developments are highlighted.

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim Sung-Ho
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1478-1484 / 2004
  • This paper presents a method for controlling the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in an expression space. The expression space is built from about 2,400 frames of facial expressions. To represent the state of each expression, we use a distance matrix holding the distances between pairs of feature points on the face; the set of distance matrices serves as the expression space. The facial expression of the 3D avatar is controlled in real time as the user navigates this space. To support this process, we visualize the expression space in 2D using Principal Component Analysis (PCA) projection. To evaluate the system's effectiveness, we had users control the facial expressions of a 3D avatar with it, and this paper reports the results.
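The abstract's pipeline, distance-matrix expression states projected to 2D with PCA, can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the frame format and function names are assumptions:

```python
import numpy as np

def expression_vectors(frames):
    # frames: (n_frames, n_points, dim) facial feature points per frame.
    # Each expression state is the vector of pairwise feature-point
    # distances (the upper triangle of that frame's distance matrix).
    n, p, _ = frames.shape
    iu = np.triu_indices(p, k=1)
    vecs = np.empty((n, len(iu[0])))
    for i, pts in enumerate(frames):
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        vecs[i] = dists[iu]
    return vecs

def pca_project(X, dims=2):
    # Center the data and project it onto the top `dims` principal
    # components, obtained from the SVD of the centered matrix.
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:dims].T
```

Navigating the resulting 2D map then amounts to picking nearby projected points and playing back the corresponding captured expressions.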

A Study on the Qualitative Evaluation of Emotion Based on Facial Electromyography and Facial Expression (얼굴근전도와 얼굴표정으로 인한 감성의 정성적 평가에 대한 연구)

  • 황민철;김지은;김철중
    • Proceedings of the ESK Conference / 1996.04a / pp.264-269 / 1996
  • Facial expression is an innate human communication skill. Psychological states can be recognized from facial parameters such as surface movement, color, and humidity. This study quantifies and qualifies human emotion by measuring facial electromyography (EMG) and facial movement. Measurements were taken over the frontalis and zygomaticus muscles. The results indicate that positive and negative emotional responses can be discriminated, and that parameters sensitive to positive and negative facial expressions can be extracted. Facial movement measured together with EMG shows the potential of a non-invasive technique for assessing human emotion.

Facial Expression Animation Using Anatomy-Based 3D Face Modelling (해부학 기반의 3차원 얼굴 모델링을 이용한 얼굴 표정 애니메이션)

  • 김형균;오무송
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.2 / pp.328-333 / 2003
  • This paper produces facial animation using 18 anatomically grounded muscle pairs that influence facial expression change, combining their motions. After a standard mesh model is built and fitted to an individual's image, the individual's front and side facial images are mapped onto the mesh to increase realism. The muscle model that drives the animation is a modification of Waters' muscle model for generating facial expressions. Using these methods, a textured, deformed face is created, and the six facial expressions proposed by Ekman are animated.
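Waters' muscle model, which the abstract adapts, pulls skin vertices toward a muscle's attachment point with angular and radial falloff. The sketch below is a heavily simplified, illustrative reduction of that idea; the parameter names and exact falloff functions are our assumptions, not the paper's formulation:

```python
import numpy as np

def linear_muscle_displace(p, v1, v2, Rs, Rf, k):
    # p  : skin vertex position.
    # v1 : static muscle attachment (e.g. on the skull).
    # v2 : muscle insertion into the skin; v1->v2 is the pull direction.
    # Rs, Rf : start/finish radii bounding the zone of influence.
    # k  : contraction strength.
    d = np.linalg.norm(p - v1)
    if d == 0 or d > Rf:
        return p                              # outside the influence zone
    muscle_dir = (v2 - v1) / np.linalg.norm(v2 - v1)
    to_p = (p - v1) / d
    ang = float(np.dot(muscle_dir, to_p))     # angular (cosine) falloff
    if ang <= 0:
        return p
    # Radial falloff: full pull inside Rs, fading to zero at Rf.
    rad = np.cos((d - Rs) / (Rf - Rs) * np.pi / 2) if d > Rs else 1.0
    return p + k * ang * rad * (v1 - p) / d   # move toward the attachment
```

Combining several such muscles, and keyframing their contraction values k, is what lets a mesh act out Ekman's six basic expressions.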

Real-time Facial Modeling and Animation based on High Resolution Capture (고해상도 캡쳐 기반 실시간 얼굴 모델링과 표정 애니메이션)

  • Byun, Hae-Won
    • Journal of Korea Multimedia Society / v.11 no.8 / pp.1138-1145 / 2008
  • Recently, performance-driven facial animation has become popular in various areas. In television and games, it is important to guarantee real-time animation for various characters whose appearance differs from the performer's. In this paper, we present a new facial animation approach based on motion capture. For this purpose, we address three issues: facial expression capture, expression mapping, and facial animation. Finally, we show the results of various experiments on different types of face models.

Dynamic Emotion Classification through Facial Recognition (얼굴 인식을 통한 동적 감정 분류)

  • Han, Wuri;Lee, Yong-Hwan;Park, Jeho;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.12 no.3 / pp.53-57 / 2013
  • Human emotions are expressed in various ways: through language, facial expressions, and gestures. Facial expressions in particular carry much information about human emotion. These ambiguous emotions appear not as a single emotion but as a combination of several. This paper proposes an emotional expression algorithm using the Active Appearance Model (AAM) and a Fuzzy k-Nearest Neighbor classifier, which labels facial expressions in a way that reflects this ambiguity. By applying the Mahalanobis distance to the class centers, the method determines the degree of inclusion between each class center and each class, and this inclusion level expresses the intensity of the emotion. The resulting emotion recognition system can recognize complex emotions using the Fuzzy k-NN classifier.
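The core idea in the abstract, graded membership in several emotion classes rather than one hard label, can be illustrated with the standard fuzzy k-NN weighting. This is a generic sketch using Euclidean distance; the paper's Mahalanobis-based inclusion levels and AAM features are not reproduced here:

```python
import numpy as np

def fuzzy_knn_memberships(train_X, train_y, x, k=5, m=2.0):
    # Fuzzy k-NN: return a membership degree for every class, so a face
    # can be, say, mostly "happy" and partly "surprised" at once.
    # m > 1 is the fuzzifier controlling how sharply distance is weighted.
    d = np.linalg.norm(train_X - x, axis=1)
    nn = np.argsort(d)[:k]                         # k nearest neighbours
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))
    memberships = {}
    for c in np.unique(train_y):
        mask = (train_y[nn] == c)
        memberships[int(c)] = float(w[mask].sum() / w.sum())
    return memberships                             # degrees sum to 1
```

A hard decision, when one is needed, is simply the class with the highest membership; the remaining degrees express the intensity of the mixed emotion.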

Developmental Changes in Emotional-States and Facial Expression (정서 상태와 얼굴표정간의 연결 능력의 발달)

  • Park, Soo-Jin;Song, In-Hae;Ghim, Hei-Rhee;Cho, Kyung-Ja
    • Science of Emotion and Sensibility / v.10 no.1 / pp.127-133 / 2007
  • The present study investigated whether the ability to read emotional states from facial expressions changes with age (3-year-olds, 5-year-olds, and university students), sex (male, female), the presentation area of the expression (whole face vs. eyes only), and the type of emotion (basic vs. complex). The stimuli were 32 facial expressions of emotional states that are relatively strongly linked to emotion vocabulary, collected by photographing professional actors performing the expressions. Participants were presented with stories designed to evoke particular emotions and were asked to choose the facial expression the protagonist would have made in each situation. The results showed that facial expression reading improves with age, and that performance was better in the face condition than in the eyes condition, and for basic emotions than for complex emotions. While females showed no performance difference between presentation areas, males performed better in the face condition than in the eyes condition. These results demonstrate that age, the presentation area of the expression, and the type of emotion all affect the estimation of other people's emotions from facial expressions.

A Design of Stress Measurement System using Facial and Verbal Sentiment Analysis (표정과 언어 감성 분석을 통한 스트레스 측정시스템 설계)

  • Yuw, Suhwa;Chun, Jiwon;Lee, Aejin;Kim, Yoonhee
    • KNOM Review / v.24 no.2 / pp.35-47 / 2021
  • Modern society, with its constant competition and pressure to improve, exposes people to various forms of stress. A person under stress often shows it in facial expressions and language, so stress can be measured by analyzing both. This paper proposes a stress measurement system using facial expression and verbal sentiment analysis. The method analyzes a person's facial expressions and verbal sentiment to derive a stress index based on the dominant emotional value, and then derives an integrated stress index based on the consistency between facial expression and language. Quantifying and generalizing stress measurement in this way enables researchers to evaluate the stress index objectively.

Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed;Kabir, Md. Hasanul;Chae, Oksam
    • ETRI Journal / v.32 no.5 / pp.784-794 / 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction. However, these are not yet fully realized, owing to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry, and we analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in eight directions at each pixel and encoding them into an 8-bit binary number using the relative strengths of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost savings and classification accuracy. Two well-known machine learning methods, template matching and support vector machines, are used for classification on the Cohn-Kanade and Japanese Female Facial Expression databases. The resulting classification accuracy shows the superiority of the LDP descriptor over other appearance-based feature descriptors.
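The encoding the abstract describes can be made concrete with a minimal NumPy sketch, assuming the eight standard Kirsch edge masks and the common choice of keeping the k = 3 strongest responses; the function names are ours, not the paper's:

```python
import numpy as np

# The eight Kirsch edge masks (east, north-east, ..., south-east).
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def ldp_code(patch3x3, k=3):
    # LDP code of one 3x3 neighbourhood: set the bits of the
    # k strongest absolute Kirsch edge responses.
    responses = np.array([abs((patch3x3 * m).sum()) for m in KIRSCH])
    code = 0
    for i in np.argsort(responses)[-k:]:   # indices of the k largest
        code |= 1 << int(i)
    return code

def ldp_image(gray, k=3):
    # Encode every interior pixel of a grayscale image as an LDP code.
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = ldp_code(gray[y:y + 3, x:x + 3], k)
    return out

def ldp_histogram(codes, bins=256):
    # Region descriptor: normalised histogram of LDP codes.
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)
```

Concatenating such histograms over a grid of face patches yields the descriptor that is then fed to template matching or an SVM.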