• Title/Summary/Keyword: Facial Emotions


Divide and Conquer Strategy for CNN Model in Facial Emotion Recognition based on Thermal Images (얼굴 열화상 기반 감정인식을 위한 CNN 학습전략)

  • Lee, Donghwan;Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation / v.17 no.2 / pp.1-10 / 2021
  • The ability to recognize human emotions by computer vision is an important task with many potential applications, and demand is growing for emotion recognition using not only RGB images but also thermal images. Compared with RGB images, thermal images have the advantage of being less affected by lighting conditions, but their low resolution requires a more sophisticated recognition method. In this paper, we propose a Divide and Conquer-based CNN training strategy to improve the performance of facial thermal-image-based emotion recognition. The proposed method first trains a model to classify similar, easily confused emotion classes, identified by confusion-matrix analysis, into the same class group, and then divides the problem so that each class group is recognized again as the actual emotions it contains. In experiments, the proposed method improved accuracy in all tests compared with recognizing all the presented emotions with a single CNN model.
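The coarse-to-fine inference this strategy implies can be sketched as follows. This is an illustrative sketch only: the two model functions and the group partition are hypothetical stand-ins for the paper's trained CNNs and its confusion-matrix-derived groups.

```python
import numpy as np

# Hypothetical stand-ins for trained CNNs; each returns class scores.
def coarse_model(x):
    # Stage 1: classifies into merged groups. Here we assume confusion-matrix
    # analysis merged "sad" and "fear" into one hard-to-separate group.
    return np.array([0.1, 0.7, 0.2])  # scores for [happy, sad+fear, angry]

def group_model(x):
    # Stage 2: resolves the merged group into its actual emotions.
    return np.array([0.4, 0.6])       # scores for [sad, fear]

GROUPS = {0: ["happy"], 1: ["sad", "fear"], 2: ["angry"]}

def predict(x):
    g = int(np.argmax(coarse_model(x)))
    members = GROUPS[g]
    if len(members) == 1:             # unambiguous group: done
        return members[0]
    # Divide-and-conquer step: a second model re-classifies within the group.
    return members[int(np.argmax(group_model(x)))]

print(predict(np.zeros((64, 64))))    # → fear
```

The key design point is that the second-stage model only ever sees samples the first stage placed in its group, so it can specialize on the distinctions a single flat classifier confuses.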

Difference of Facial Skin Temperature Responses between Fear and Joy (공포와 기쁨 정서 간 안면온도 반응의 차이)

  • Eum, Yeong-Ji;Eom, Jin-Sup;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.15 no.1 / pp.1-8 / 2012
  • Many studies have investigated physiological responses to specific emotions using parameters such as heart rate, blood volume flow, and skin conductance, but very few have done so with facial skin temperature. The purpose of the present study was to observe differences in facial skin temperature, measured with a thermal camera, while participants watched scenes designed to evoke fear or joy. There were 98 participants in total: undergraduate students (adults) and middle and high school students (adolescents). We measured facial temperature before and after presenting the emotional stimulus to compare the two time points. Temperature values were extracted from the forehead, the inner corners of the eyes, the bridge of the nose, the tip of the nose, and the cheeks. Under the fear stimulus, temperatures at the bridge and tip of the nose decreased significantly. Under the joy stimulus, temperatures at the forehead and the inner corners of the eyes increased significantly while the temperature at the tip of the nose decreased; thus the nose tip cooled under both fear and joy. These results may be explained as follows: as arousal rises, sympathetic nervous activity increases, which in turn reduces blood flow in the peripheral vessels beneath the nose. The facial temperature changes evoked by fear and joy in this study match previous studies that measured fingertip temperature while participants experienced emotions. Our results may help develop emotion-measuring techniques and lay the groundwork for computer systems that detect human emotions.
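The pre/post comparison underlying results like the nose-tip cooling can be sketched with a paired analysis. The temperature values below are synthetic placeholders, not the study's data; the block only shows the shape of the computation.

```python
import numpy as np

# Synthetic pre/post nose-tip temperatures (°C) for a fear condition;
# illustrative values only, not the study's measurements.
pre  = np.array([34.1, 33.8, 34.5, 34.0, 33.9, 34.2])
post = np.array([33.6, 33.5, 34.0, 33.7, 33.4, 33.9])

diff = post - pre                      # per-participant temperature change
# Paired t statistic: mean change divided by its standard error.
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))

print(round(diff.mean(), 2))           # → -0.4  (negative = cooling)
```

A negative mean difference with a large-magnitude t statistic is the pattern the study reports for the nose regions under fear.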


Children's Interpretation of Facial Expression onto Two-Dimension Structure of Emotion (정서의 이차원 구조에서 유아의 얼굴표정 해석)

  • Shin, Young-Suk;Chung, Hyun-Sook
    • Korean Journal of Cognitive Science / v.18 no.1 / pp.57-68 / 2007
  • This study explores children's understanding of emotion categories by mapping facial expressions onto the two-dimensional structure of emotion. Eighty-nine children aged 3 to 5 were asked to match facial expressions to fourteen emotion terms. The facial expressions used in the experiment were photographs rated by 54 university students on a nine-point scale along each of the two dimensions (pleasure-displeasure and arousal-sleep). The results showed that children were more stable on the arousal dimension than on the pleasure-displeasure dimension. Emotions such as sadness, sleepiness, anger, and surprise were understood very well in the two-dimensional space, whereas fear and boredom showed instability on the pleasure-displeasure dimension. In particular, 3-year-olds perceived the degree of arousal-sleep better than the degree of pleasure-displeasure.
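The two-dimensional model used here treats each emotion term as a point in (pleasure, arousal) space, and an expression photograph rated on both scales is interpreted as the nearest term. The coordinates below are hypothetical nine-point values chosen for illustration, not the study's ratings.

```python
import math

# Hypothetical (pleasure, arousal) coordinates on a 1-9 scale for a few
# emotion terms, in the spirit of the two-dimensional model of emotion.
EMOTION_COORDS = {
    "sadness":    (2.0, 3.5),
    "sleepiness": (4.5, 1.5),
    "anger":      (1.5, 7.5),
    "surprise":   (5.0, 8.0),
}

def nearest_emotion(pleasure, arousal):
    # A photograph rated on both dimensions is interpreted as the emotion
    # term closest to it in the 2-D space (Euclidean distance).
    return min(EMOTION_COORDS,
               key=lambda e: math.dist(EMOTION_COORDS[e], (pleasure, arousal)))

print(nearest_emotion(1.8, 7.0))       # → anger
```

Instability on one dimension, as reported for fear and boredom, corresponds to ratings scattering along that axis so that the nearest-term lookup becomes unreliable in that direction.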


Sex differences of children's facial expression discrimination based on two-dimensional model of emotion (정서의 이차원모델에서 아동의 얼굴표정 변별에서 성 차이)

  • Shin, Young-Suk
    • Korean Journal of Cognitive Science / v.21 no.1 / pp.127-143 / 2010
  • This study explores sex differences in children's discrimination of emotion from facial expressions based on the two-dimensional model of emotion. The study group consisted of 92 children of 40, 52, and 64 months of age, evenly split between boys (50%) and girls (50%). The children were asked to choose facial expressions matching twelve emotion terms. The facial expressions used in the experiment were photographs rated by 54 university students on a nine-point scale along each of the two dimensions (pleasure-displeasure and arousal-sleep). The findings showed that sex differences were more distinct on the arousal-sleep dimension than on the pleasure-displeasure dimension. On the arousal-sleep dimension, the emotions 'sleepiness', 'anger', 'comfort', and 'loneliness' showed large sex differences exceeding a value of 1. In particular, while boys perceived higher arousal than girls for 'sleepiness', 'anger', and 'loneliness', girls perceived higher arousal than boys for 'comfort'.


Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services / v.13 no.5 / pp.9-19 / 2012
  • A virtual human used as an HCI element in digital contents expresses various emotions across modalities such as facial expression and body posture. However, few studies have considered combinations of such nonverbal multimodal cues in emotion perception. Computational engine models must consider how a combination of nonverbal modalities, such as facial expression and body posture, will be perceived by users in order to implement an emotionally expressive virtual human. This paper analyzes the impact of nonverbal multimodal cues on the design of emotion-expressing virtual humans. First, the relative impacts of the different modalities are analyzed by exploring emotion recognition for the virtual human. An experiment then evaluates the contribution of congruent facial and postural expressions to recognizing basic emotion categories, as well as the valence and activation dimensions. Measurements also assess the impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life. The experimental results show that congruence of the virtual human's facial and postural expressions facilitates perception of emotion categories, that categorical recognition is influenced by the facial expression modality, and that the postural modality is preferred for judging the level of the activation dimension. These results will be used in implementing an animation engine system and behavior synchronization for emotion-expressing virtual humans.

Research about the Abstraction of Area Typicality of Emotions for Systematization of Human's Sensitivity Symbol (인간의 감성기호 체계화를 위한 감정영역범주화에 관한 연구)

  • Yun Bong-Shik
    • The Journal of the Korea Contents Association / v.5 no.2 / pp.137-145 / 2005
  • This study is a model of research for developing 3D character contents that express emotion through facial expression, a sort of non-linguistic sign, focusing on the emotional factors of a person. It contributes a framework for the symbolic analysis of human emotions along with a general review of expression. The human face is the most complex and versatile of any species. For humans, the face is a rich and versatile instrument serving many different functions. It serves as a window displaying one's motivational state, which makes one's behavior more predictable and understandable to others and improves communication. The face can be used to supplement verbal communication: a prompt facial display can reveal the speaker's attitude toward the information being conveyed. Alternatively, the face can complement verbal communication, such as a lifting of the eyebrows to lend additional emphasis to a stressed word. Facial expression plays an important role in the digital visual context. This study presents a frame of facial expression categories for the effective production of cartoons and animation that appeal to human visual emotion.


The Effects of Chatbot Anthropomorphism and Self-disclosure on Mobile Fashion Consumers' Intention to Use Chatbot Services

  • Kim, Minji;Park, Jiyeon;Lee, MiYoung
    • Journal of Fashion Business / v.25 no.6 / pp.119-130 / 2021
  • This study investigated the effects of a chatbot's level of anthropomorphism (closeness to the human form) and its self-disclosure (delivery of emotional exchange through facial expressions and chat messages) on users' intention to accept the service. A 2 (anthropomorphism: high vs. low) × 2 (self-disclosure through facial expressions: high vs. low) × 2 (self-disclosure through conversation: high vs. low) between-subjects factorial design was employed. An online survey was conducted, and a total of 234 questionnaires were used in the analysis. The results showed that consumers intended to use the chatbot service more when emotions were disclosed through facial expressions than when fewer facial expressions were disclosed. There was a statistically significant interaction effect, indicating that the relationship between the chatbot's self-disclosure through facial expression and consumers' intention to use the service differs depending on the extent of anthropomorphism. For 'robot chatbots' with low anthropomorphism, intention to use the service did not differ by the level of self-disclosure through facial expression. When a 'human-like chatbot' with high anthropomorphism disclosed itself more through facial expressions, consumers' intention to use the service increased much more than when it disclosed fewer facial expressions. The findings suggest that a chatbot's self-disclosure plays an important role in the formation of consumer perception.

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.754-759 / 2007
  • In this paper, we propose a Bi-Modal Sensor Fusion Algorithm, an emotion recognition method able to classify four emotions (happy, sad, angry, surprise) by using the facial image and the speech signal together. We extract feature vectors from the speech signal using acoustic features without linguistic features, and classify the emotional pattern using a neural network. We also select features of the mouth, eyes, and eyebrows from the facial image, and apply Principal Component Analysis (PCA) to the extracted feature vectors to produce low-dimensional feature vectors. Finally, we propose a method to fuse the emotion recognition results obtained from the facial image and the speech.
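The two building blocks described here, PCA-based dimensionality reduction of the facial features and fusion of the two modalities' scores, can be sketched as follows. The fusion weight and the score vectors are assumptions for illustration; the paper's actual fusion rule and classifiers are not reproduced.

```python
import numpy as np

def pca_reduce(X, k):
    # Project feature vectors onto their top-k principal components,
    # reducing the facial feature vectors to a low dimension before fusion.
    Xc = X - X.mean(axis=0)                      # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # rows of Vt = principal axes

def fuse(face_scores, speech_scores, w=0.5):
    # Weighted sum of the two modalities' emotion scores
    # (happy, sad, angry, surprise); the equal weight is an assumption.
    return w * face_scores + (1 - w) * speech_scores

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 12))                    # 20 samples, 12 raw features
Z = pca_reduce(X, 3)
print(Z.shape)                                   # → (20, 3)

face   = np.array([0.6, 0.1, 0.2, 0.1])          # hypothetical image scores
speech = np.array([0.2, 0.5, 0.2, 0.1])          # hypothetical speech scores
print(int(np.argmax(fuse(face, speech))))        # → 0  (happy)
```

Fusing at the score level, as sketched here, lets each modality keep its own feature pipeline, which matches the paper's separate acoustic and facial feature extraction.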

A Review of Facial Expression Recognition Issues, Challenges, and Future Research Direction

  • Yan, Bowen;Azween, Abdullah;Lorita, Angeline;S.H., Kok
    • International Journal of Computer Science & Network Security / v.23 no.1 / pp.125-139 / 2023
  • Facial expression recognition, a topical problem in the fields of computer vision and pattern recognition, is a direct means of recognizing human emotions and behaviors. This paper first summarizes the datasets commonly used for expression recognition and their associated characteristics, then presents traditional machine learning algorithms with their benefits and drawbacks across the three key stages of facial expression recognition: image pre-processing, feature extraction, and expression classification. Deep learning-oriented expression recognition methods and the performance of various algorithmic frameworks are also analyzed and compared. Finally, the current barriers to facial expression recognition and potential developments are highlighted.
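The three-stage traditional pipeline the review surveys can be sketched end to end. Every stage below is a toy placeholder (intensity normalization, a crude row/column-mean descriptor standing in for HOG/LBP-style features, and a linear classifier with hypothetical weights), not a real FER implementation.

```python
import numpy as np

def preprocess(img):
    # Stage 1: normalize pixel intensities to [0, 1].
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def extract_features(img):
    # Stage 2: a crude hand-crafted descriptor (row and column means),
    # standing in for the HOG/LBP-style features of classic methods.
    return np.concatenate([img.mean(axis=0), img.mean(axis=1)])

def classify(features, weights, labels):
    # Stage 3: a linear classifier over the feature vector.
    return labels[int(np.argmax(weights @ features))]

labels = ["happy", "sad", "angry"]
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8))   # stand-in for a face image
W = rng.normal(size=(3, 16))              # hypothetical trained weights
print(classify(extract_features(preprocess(img)), W, labels))
```

Deep learning methods, by contrast, collapse stages 2 and 3 into a single learned network, which is the main axis of comparison the review draws.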