• Title/Summary/Keyword: Facial Emotions

Emotion Recognition of Korean and Japanese using Facial Images (얼굴영상을 이용한 한국인과 일본인의 감정 인식 비교)

  • Lee, Dae-Jong;Ahn, Ui-Sook;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.2 / pp.197-203 / 2005
  • In this paper, we propose an emotion recognition method using facial images for the effective design of human interfaces. The facial database consists of the six basic human emotions (happiness, sadness, anger, surprise, fear, and dislike), which are known to be common across nations and cultures. Emotion recognition is performed after applying the discrete wavelet transform to the facial images, and feature vectors are then extracted with PCA and LDA. Experimental results show that happiness, sadness, and anger are recognized better than surprise, fear, and dislike. In particular, the Japanese subjects show lower performance for the dislike emotion. Overall, the recognition rates for the Korean subjects are higher than those for the Japanese subjects.
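
A minimal sketch of that pipeline, assuming grayscale face arrays and the PyWavelets/scikit-learn libraries; the wavelet, component counts, and the nearest-neighbor classifier are illustrative choices rather than the paper's settings:

```python
# Sketch of the DWT -> PCA -> LDA feature pipeline described above.
# Wavelet choice and component counts are illustrative; PCA with 50
# components assumes at least 50 training images.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def dwt_features(img, wavelet="haar", level=2):
    # Keep only the low-frequency approximation subband as a compact
    # representation of the face image.
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    return coeffs[0].ravel()

def build_pipeline(images, labels):
    X = np.stack([dwt_features(im) for im in images])
    pca = PCA(n_components=50).fit(X)                       # decorrelate / compress
    X_pca = pca.transform(X)
    lda = LinearDiscriminantAnalysis().fit(X_pca, labels)   # class-separating projection
    X_lda = lda.transform(X_pca)
    clf = KNeighborsClassifier(n_neighbors=1).fit(X_lda, labels)
    return pca, lda, clf
```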

Facial EMG pattern evoked by pleasant and unpleasant odor stimulus

  • Yamada, Hiroshi;Kaneki, Noriaki;Shimada, Koji;Okii, Hironori
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2002.05a / pp.11-15 / 2002
  • Activities of the venter frontalis, corrugator, levator labii superioris, and greater zygomatic muscles were measured in five male subjects while they made pleasant, unpleasant, and neutral facial expressions, and while they were presented with pleasant, disgusting, and neutral odors. The pleasant expression and odor activated the zygomatic muscles, while the unpleasant expression and odor increased corrugator muscle activity.
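
For illustration, facial EMG comparisons of this kind are typically quantified by band-pass filtering, rectifying, and averaging each channel. The sketch below assumes SciPy and typical surface-EMG settings (1 kHz sampling, 20-450 Hz band), not the study's actual protocol:

```python
# Illustrative EMG amplitude computation: band-pass, rectify, average.
# Sampling rate and band edges are typical values, not the study's settings.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # Hz, assumed sampling rate

def emg_amplitude(signal, low=20.0, high=450.0):
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.mean(np.abs(filtered))  # mean rectified amplitude

# e.g., compare corrugator activity under unpleasant vs. neutral odor:
# ratio = emg_amplitude(corrugator_unpleasant) / emg_amplitude(corrugator_neutral)
```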

A Generation Methodology of Facial Expressions for Avatar Communications (아바타 통신에서의 얼굴 표정의 생성 방법)

  • Kim Jin-Yong;Yoo Jae-Hwi
    • Journal of the Korea Society of Computer and Information / v.10 no.3 s.35 / pp.55-64 / 2005
  • The avatar can be used as an auxiliary means of text and image communication in cyberspace. An intelligent communication method can also be utilized to achieve real-time communication, where intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real or compressed pictures. In this paper, to complement arm and leg gestures, a method of generating facial expressions that represent the sender's emotions is provided. Facial expressions are represented by Action Units (AUs), and we suggest a methodology for finding appropriate AUs in avatar models of various shapes and structures. To maximize the efficiency of emotional expression, a comic-style facial model having only eyebrows, eyes, a nose, and a mouth is employed. The generation of facial emotion animations from these parameters is also investigated.
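
The AU-driven idea can be sketched as a table from emotions to AU intensities applied to a simplified face model. The AU numbers follow FACS conventions, but the intensity values and the `set_au` interface are hypothetical:

```python
# Hypothetical emotion -> Action Unit mapping for a comic-style face
# that has only eyebrows, eyes, a nose, and a mouth.
# AU numbers follow FACS; intensity values are illustrative.
EMOTION_TO_AUS = {
    "happiness": {6: 0.8, 12: 1.0},                   # cheek raiser, lip corner puller
    "sadness":   {1: 0.7, 4: 0.5, 15: 0.8},           # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1: 1.0, 2: 1.0, 5: 0.8, 26: 0.9},   # brows up, upper lid raiser, jaw drop
    "anger":     {4: 1.0, 5: 0.6, 23: 0.7},           # brow lowerer, lid raiser, lip tightener
}

def apply_emotion(face_model, emotion, strength=1.0):
    """Scale each AU's intensity and hand it to the avatar's face model.

    `face_model.set_au` is a stand-in for whatever parameter interface
    a given avatar exposes (e.g., vertex offsets or bone rotations).
    """
    for au, intensity in EMOTION_TO_AUS[emotion].items():
        face_model.set_au(au, intensity * strength)
```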

Korean Facial Expression Emotion Recognition based on Image Meta Information (이미지 메타 정보 기반 한국인 표정 감정 인식)

  • Hyeong Ju Moon;Myung Jin Lim;Eun Hee Kim;Ju Hyun Shin
    • Smart Media Journal / v.13 no.3 / pp.9-17 / 2024
  • Due to the recent pandemic and the development of ICT, the use of non-face-to-face and unmanned systems is expanding, and it is very important to understand emotions when communicating in non-face-to-face situations. Since emotion recognition methods for various facial expressions are required to understand emotions, artificial-intelligence-based research is being conducted to improve facial expression emotion recognition on image data. However, existing research on facial expression emotion recognition requires high computing power and long training times because it uses large amounts of data to improve accuracy. To overcome these limitations, this paper proposes a method of recognizing facial expressions that uses age and gender, available as image meta information, so that recognition works even with a small amount of data. For facial expression emotion recognition, faces were detected in the original image data with the YOLO Face model, age and gender were classified with a VGG model based on the image meta information, and seven emotions were then recognized with an EfficientNet model. Comparing the meta-information-based data classification model with a model trained on all of the data, the proposed model achieved higher accuracy.
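
The described three-stage pipeline (YOLO face detection, VGG-based age/gender classification, EfficientNet emotion recognition) might be wired together roughly as below. This is a sketch under assumptions: the checkpoint name, meta buckets, emotion list, and preprocessing are placeholders, ultralytics/timm stand in for whatever implementations the authors used, and the paper uses the meta information to partition training data, which this sketch compresses into a single inference chain:

```python
# Sketch of the detect -> meta-classify -> recognize pipeline.
# "yolo-face.pt", the 8 meta buckets, and the emotion list are
# placeholders, not the paper's actual checkpoints or classes.
import torch
import timm                   # supplies VGG and EfficientNet backbones
from ultralytics import YOLO  # YOLO detector family

EMOTIONS = ["joy", "sadness", "anger", "surprise", "fear", "disgust", "neutral"]

detector = YOLO("yolo-face.pt")                                   # hypothetical face weights
meta_net = timm.create_model("vgg16", num_classes=8).eval()       # assumed age/gender buckets
emo_net = timm.create_model("efficientnet_b0", num_classes=7).eval()

def to_tensor(crop, size=224):
    # Minimal preprocessing: HxWx3 uint8 -> 1x3xSxS float in [0, 1].
    t = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    return torch.nn.functional.interpolate(t, size=(size, size))

@torch.no_grad()
def recognize(image):  # image: HxWx3 uint8 array
    results = []
    for x1, y1, x2, y2 in detector(image)[0].boxes.xyxy.int().tolist():
        t = to_tensor(image[y1:y2, x1:x2])
        bucket = meta_net(t).argmax(1).item()            # age/gender group
        emotion = EMOTIONS[emo_net(t).argmax(1).item()]  # one of seven emotions
        results.append((bucket, emotion))
    return results
```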

Facial Expression Recognition with Fuzzy C-Means Clustering Algorithm and Neural Network Based on Gabor Wavelets

  • Youngsuk Shin;Chansup Chung;Lee, Yillbyung
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.126-132 / 2000
  • This paper presents a facial expression recognition method based on Gabor wavelets that uses a fuzzy C-means (FCM) clustering algorithm and a neural network. Features of facial expressions are extracted in two steps. In the first step, the Gabor wavelet representation provides edge extraction of the major face components, using the average value of the image's 2-D Gabor wavelet coefficient histogram. In the next step, we extract sparse features of facial expressions from the extracted edge information using the FCM clustering algorithm. The facial expression recognition results are compared with dimensional values of internal states derived from semantic ratings of emotion-related words. The dimensional model can recognize not only the six facial expressions related to Ekman's basic emotions, but also expressions of various internal states.
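
A rough sketch of the two-step feature extraction, assuming scikit-image for the Gabor filtering; the filter bank, thresholding rule, and cluster count are illustrative, and the FCM update is a plain textbook implementation rather than the authors' code:

```python
# Sketch: Gabor edge extraction followed by fuzzy C-means clustering.
# Frequencies/orientations and the cluster count are illustrative.
import numpy as np
from skimage.filters import gabor

def gabor_edges(img):
    # Accumulate magnitude responses over a small filter bank, then
    # keep pixels above the mean response as "edge" points.
    img = np.asarray(img, dtype=float)
    mag = np.zeros_like(img)
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, imag = gabor(img, frequency=0.25, theta=theta)
        mag += np.hypot(real, imag)
    ys, xs = np.nonzero(mag > mag.mean())
    return np.column_stack([xs, ys]).astype(float)

def fcm(points, c=10, m=2.0, iters=50):
    # Plain fuzzy C-means on the edge coordinates; the cluster centers
    # become the sparse facial-expression features.
    rng = np.random.default_rng(0)
    u = rng.random((c, len(points)))
    u /= u.sum(axis=0)
    for _ in range(iters):
        w = u ** m
        centers = (w @ points) / w.sum(axis=1, keepdims=True)
        d = np.linalg.norm(points[None] - centers[:, None], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))   # membership ~ inverse distance
        u /= u.sum(axis=0)
    return centers
```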

The Implementation and Analysis of Facial Expression Customization for a Social Robot (소셜 로봇의 표정 커스터마이징 구현 및 분석)

  • Jiyeon Lee;Haeun Park;Temirlan Dzhoroev;Byounghern Kim;Hui Sung Lee
    • The Journal of Korea Robotics Society / v.18 no.2 / pp.203-215 / 2023
  • Social robots, which are mainly used by individuals, place more emphasis on human-robot relationships (HRR) than other types of robots do. Emotional expression in robots is one of the key factors that imbue HRR with value, and emotions are mainly expressed through the face. However, because of cultural and preference differences, the desired robot facial expressions differ subtly from user to user. We expected that a robot facial expression customization tool might mitigate such difficulties and consequently improve HRR. To test this, we created a robot facial expression customization tool and a prototype robot, and we implemented a suitable emotion engine for generating robot facial expressions in a dynamic human-robot interaction setting. In our experiments, users agreed that the availability of a customized version of the robot has a more positive effect on HRR than a predefined version. We also suggest recommendations for future improvements to the customization process for robot facial expressions.
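
As a minimal illustration of such an emotion engine, one could blend between user-customized expression presets as the robot's internal valence changes; the parameter vector and blend rule below are assumptions, not the paper's implementation:

```python
# Minimal sketch of an emotion engine that blends user-customized
# expression presets by valence. Parameter names and the blend rule
# are assumptions for illustration.
import numpy as np

# Each preset is a vector of facial parameters the user tuned in the
# customization tool, e.g. (brow_height, eye_openness, mouth_curve).
PRESETS = {
    "joy":     np.array([0.3, 0.8, 1.0]),
    "sadness": np.array([-0.5, 0.4, -0.8]),
    "neutral": np.array([0.0, 0.6, 0.0]),
}

def blend(valence):
    """Interpolate between the sad, neutral, and joyful presets as
    valence moves through [-1, 1]."""
    if valence >= 0:
        return (1 - valence) * PRESETS["neutral"] + valence * PRESETS["joy"]
    return (1 + valence) * PRESETS["neutral"] - valence * PRESETS["sadness"]

print(blend(0.5))  # halfway between neutral and the user's "joy" preset
```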

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.1076-1094 / 2022
  • Technology for emotion recognition is an essential part of human personality analysis. Existing methods have used surveys to characterize human personality, but much communication cannot be understood without considering emotions. Hence, emotion recognition technology is an essential element of communication and has also been adopted in many other fields. A person's emotions are revealed in various ways, typically including facial, speech, and biometric responses, so emotions can be recognized from images, voice signals, and physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions. This study employed two sensor types. The existing binary arousal-valence scheme was subdivided into four levels per dimension to classify emotions in more detail, extending the current high/low classification to multiple levels. Signal characteristics were then extracted using a 1-D convolutional neural network (CNN) and classified into sixteen emotions. Although CNNs are commonly used to learn 2-D images, 1-D sensor data was used as the input in this paper. Finally, the proposed emotion recognition system was evaluated with measurements from actual sensors.
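
The classifier described (1-D convolutions over multi-channel sensor windows, sixteen output classes from the 4x4 arousal-valence grid) could look roughly like this in PyTorch; the channel count and layer sizes are illustrative:

```python
# Illustrative 1-D CNN over multi-channel physiological signals,
# classifying a 4-level arousal x 4-level valence grid (16 classes).
# Channel count and layer sizes are assumptions, not the paper's.
import torch
import torch.nn as nn

class Emotion1DCNN(nn.Module):
    def __init__(self, in_channels=2, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

model = Emotion1DCNN()
logits = model(torch.randn(8, 2, 1024))  # 8 short windows of 2-channel data
```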

Generation of Robot Facial Gestures based on Facial Actions and Animation Principles (Facial Actions 과 애니메이션 원리에 기반한 로봇의 얼굴 제스처 생성)

  • Park, Jeong Woo;Kim, Woo Hyun;Lee, Won Hyong;Lee, Hui Sung;Chung, Myung Jin
    • Journal of Institute of Control, Robotics and Systems / v.20 no.5 / pp.495-502 / 2014
  • This paper proposes a method to generate diverse robot facial expressions and facial gestures in order to support long-term HRI. First, nine basic dynamics for diverse robot facial expressions are determined based on the dynamics of human facial expressions and on principles of animation, so that even identical emotions can be expressed in different ways. In the second stage, facial actions are added to express facial gestures, such as sniffling or wailing loudly for sadness, or laughing aloud or smiling for happiness. To evaluate the effectiveness of our approach, we compared the facial expressions of the developed robot with and without the proposed method. The survey results showed that the proposed method helps robots generate more realistic facial expressions.
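
One of the animation principles involved, slow in and slow out, can be applied to facial motion by easing the interpolation between expression keyframes, roughly as below; the smoothstep easing and pose encoding are assumptions:

```python
# Sketch of "slow in and slow out" applied to a facial motor trajectory:
# ease the interpolation between two expression keyframes instead of
# moving linearly. Smoothstep is one common easing choice.
def smoothstep(t):
    # 0 -> 0, 1 -> 1, with zero velocity at both ends.
    return t * t * (3 - 2 * t)

def trajectory(start_pose, end_pose, steps):
    """Yield intermediate poses (lists of joint/feature values)."""
    for i in range(steps + 1):
        a = smoothstep(i / steps)
        yield [(1 - a) * s + a * e for s, e in zip(start_pose, end_pose)]

# e.g., move eyebrows and mouth corners from neutral to a sad keyframe:
for pose in trajectory([0.0, 0.0], [-0.6, -0.8], steps=10):
    pass  # send `pose` to the robot's facial actuators
```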

Facial Expression Training Digital Therapeutics for Autistic Children (자폐아를 위한 표정 훈련 디지털 치료제)

  • Jiyeon Park;Kyoung Won Lee;Seong Yong Ohm
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.581-586 / 2023
  • Recently, a drama featuring a lawyer with autism spectrum disorder has attracted a great deal of attention, raising interest in the difficulties faced by people with autism spectrum disorders. If the autism spectrum is detected early and proper education and treatment are provided, the prognosis can be improved, so the development of treatments is urgently needed. Drugs currently used to treat the autism spectrum often have side effects, so digital therapeutics, which have no side effects and can be supplied at scale, are drawing attention. In this paper, we introduce 'AEmotion', an application and digital therapeutic that provides emotion and facial expression learning for toddlers with autism spectrum disorder. The system is developed as a smartphone application to increase autistic children's interest in the training and to make testing easy. Using machine learning, the system consists of three main stages: an 'emotion learning' step that teaches emotions with facial expression cards, an 'emotion identification' step that checks whether the user has understood the emotions and facial expressions properly, and an 'expression training' step in which the user makes the appropriate facial expressions. This system is expected to help autistic toddlers who have difficulty with social interaction because of problems recognizing facial expressions and emotions.
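
The 'expression training' step, checking whether the child's expression matches the target emotion, might be structured like this; `classify` is a stand-in for the app's actual machine-learning model:

```python
# Sketch of the expression-training stage. `classify` stands in for
# whatever expression model the app runs on camera frames.
def expression_training(target_emotion, camera_frames, classify, needed=3):
    """Count how many frames match the target before declaring success."""
    hits = 0
    for frame in camera_frames:
        if classify(frame) == target_emotion:
            hits += 1
            if hits >= needed:
                return True   # play the success animation, advance to the next card
    return False              # fall back to the 'emotion learning' step

# e.g., with a dummy classifier that always answers "happy":
passed = expression_training("happy", range(5), classify=lambda f: "happy")
```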

Comic Emotional Expression for Effective Sign-Language Communications (효율적인 수화 통신을 위한 코믹한 감정 표현)

  • ;;Shin Tanahashi;Yoshinao Aoki
    • Proceedings of the IEEK Conference / 1999.06a / pp.651-654 / 1999
  • In this paper, we propose an emotional expression method using a comic model and special marks for effective sign-language communication. Until now, we have worked toward producing more realistic facial and emotional expressions; when representing only emotions, however, a comic expression can be better than a realistic picture of a face. The comic face is a comic-style expression model in which almost all components are discarded except the necessary parts, such as the eyebrows, eyes, nose, and mouth. In the comic model, we can use special marks to emphasize various emotions. We represent emotional expressions using Action Units (AUs) of the Facial Action Coding System (FACS) and define Special Units (SUs) for emphasizing emotions. Experimental results show that the proposed method could be used efficiently for sign-language image communication.
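
The AU-plus-SU combination can be illustrated as below; the AU numbers follow FACS, while the SU names and the face-model interface are invented for illustration:

```python
# Hypothetical illustration of combining FACS Action Units with
# "Special Units" (comic emphasis marks) for a sign-language avatar.
# SU names are invented; AU numbers follow FACS.
EXPRESSIONS = {
    "anger": {
        "aus": {4: 1.0, 5: 0.7, 23: 0.8},   # brow lowerer, lid raiser, lip tightener
        "sus": ["anger_vein"],               # comic mark drawn near the temple
    },
    "surprise": {
        "aus": {1: 1.0, 2: 1.0, 26: 0.9},    # brows up, jaw drop
        "sus": ["sweat_drop", "radiating_lines"],
    },
}

def render(face, emotion):
    spec = EXPRESSIONS[emotion]
    for au, intensity in spec["aus"].items():
        face.set_au(au, intensity)           # deform eyebrows/eyes/nose/mouth
    for mark in spec["sus"]:
        face.draw_mark(mark)                 # overlay the comic emphasis mark
```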
