• Title/Summary/Keyword: Face Expressions

A facial expressions recognition algorithm using image area segmentation and face element (영역 분할과 판단 요소를 이용한 표정 인식 알고리즘)

  • Lee, Gye-Jeong; Jeong, Ji-Yong; Hwang, Bo-Hyun; Choi, Myung-Ryul
    • Journal of Digital Convergence, v.12 no.12, pp.243-248, 2014
  • In this paper, we propose a method to recognize facial expressions by selecting face elements and determining their status. The face elements are selected using an image area segmentation method, and the facial expression is decided using the normal distribution of the change rate of the face elements. In order to recognize expressions properly, we built a database of the facial expressions of 90 people and propose a method to decide among four expressions (happiness, anger, stress, and sadness). The proposed method has been simulated and verified in terms of its face element detection rate and facial expression recognition rate.
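
The decision step described above amounts to scoring each expression class by how well the per-element change rates fit that class's normal distributions. Below is a minimal Python sketch of that idea; the element names and all distribution parameters are illustrative assumptions, not the paper's learned values.

```python
# Minimal sketch: each expression class keeps a normal distribution over the
# change rate of each face element relative to a neutral face.
# All (mean, std) values below are hypothetical placeholders.
import numpy as np
from scipy.stats import norm

ELEMENTS = ["eyebrow_height", "eye_opening", "mouth_width", "mouth_opening"]

CLASS_PARAMS = {
    "happy":  {"eyebrow_height": (0.05, 0.04), "eye_opening": (-0.10, 0.05),
               "mouth_width": (0.25, 0.08),  "mouth_opening": (0.15, 0.07)},
    "anger":  {"eyebrow_height": (-0.20, 0.06), "eye_opening": (0.05, 0.05),
               "mouth_width": (-0.05, 0.06), "mouth_opening": (0.02, 0.05)},
    "stress": {"eyebrow_height": (-0.10, 0.05), "eye_opening": (-0.05, 0.05),
               "mouth_width": (-0.10, 0.06), "mouth_opening": (-0.05, 0.05)},
    "sad":    {"eyebrow_height": (-0.05, 0.05), "eye_opening": (-0.15, 0.05),
               "mouth_width": (-0.15, 0.06), "mouth_opening": (-0.02, 0.04)},
}

def classify(change_rates: dict) -> str:
    """Pick the expression whose normal distributions best explain the rates."""
    def log_likelihood(cls):
        return sum(norm.logpdf(change_rates[e], *CLASS_PARAMS[cls][e])
                   for e in ELEMENTS)
    return max(CLASS_PARAMS, key=log_likelihood)

print(classify({"eyebrow_height": 0.04, "eye_opening": -0.08,
                "mouth_width": 0.22, "mouth_opening": 0.12}))  # -> "happy"
```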

Japanese Political Interviews: The Integration of Conversation Analysis and Facial Expression Analysis

  • Kinoshita, Ken
    • Asian Journal for Public Opinion Research, v.8 no.3, pp.180-196, 2020
  • This paper considers Japanese political interviews in order to integrate conversation analysis and facial expression analysis. The behavior of political leaders is disclosed by analyzing questions and responses using the turn-taking system of conversation analysis. Additionally, audiences who cannot fully interpret verbal expressions alone can understand the psychology of political leaders through analysis of their facial expressions. Integrated analyses promote understanding of the types of facial and verbal expressions politicians use and their effect on public opinion. Politicians have unique techniques for convincing people; if people do not recognize these techniques and the ways various expressions are used, they can become confused, and politics may fall into populism as a result. To avoid this, a complete understanding of verbal and non-verbal behavior is needed. This paper presents two analyses. The first is a qualitative analysis of Prime Minister Shinzō Abe, showing that discrepancies arise between his words and his happy facial expressions; in particular, Abe displays disgusted facial expressions when faced with the same question repeatedly by an interviewer. The second is a quantitative multiple regression analysis in which the dependent variables are six facial expressions (happy, sad, angry, surprised, scared, and disgusted) and the independent variable is whether the politician faces a face-threatening situation. Political interviews that directly inform audiences are used as a tool by politicians and play an important role in shaping public opinion: watching political interviews molds support for a party and contributes to the decision to support it in a coming election.
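
The quantitative design above can be read as one regression per expression score against a face-threat indicator. The following sketch reproduces that setup on synthetic data; the sample size, effect size, and scores are fabricated stand-ins, not the paper's measurements.

```python
# Minimal sketch: one OLS regression per facial-expression score, with a
# binary face-threat indicator as the predictor. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                   # hypothetical number of utterances
face_threat = rng.integers(0, 2, n)       # 1 = question threatens face

EXPRESSIONS = ["happy", "sad", "angry", "surprised", "scared", "disgusted"]
for expr in EXPRESSIONS:
    # Fabricated scores just to make the sketch run end to end.
    score = 0.3 * face_threat + rng.normal(0, 1, n)
    model = sm.OLS(score, sm.add_constant(face_threat)).fit()
    print(f"{expr:>9}: beta={model.params[1]:+.3f}, p={model.pvalues[1]:.3f}")
```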

Transfer Learning for Face Emotions Recognition in Different Crowd Density Situations

  • Amirah Alharbi
    • International Journal of Computer Science & Network Security, v.24 no.4, pp.26-34, 2024
  • Most human emotions are conveyed through facial expressions, which represent the predominant source of emotional data. This research investigates the impact of crowds on human emotions by analysing facial expressions. It examines how crowd behaviour, face recognition technology, and deep learning algorithms contribute to understanding how emotions change at different levels of crowd density. The study identifies common emotions expressed during congestion, differences between crowded and less crowded areas, and changes in facial expressions over time. The findings can inform urban planning and crowd event management by providing insights for developing coping mechanisms for affected individuals. Limitations and challenges in obtaining reliable facial expression analysis, including age- and context-related differences, are also discussed.
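
Transfer learning of the kind named in the title typically means reusing a pretrained backbone and retraining only a new classification head on face crops. A minimal PyTorch sketch follows; the paper does not specify its architecture, so the ResNet-18 backbone and the emotion label set here are assumptions.

```python
# Minimal transfer-learning sketch: freeze a pretrained backbone, replace the
# head with an emotion classifier, and fine-tune only the head.
import torch
import torch.nn as nn
from torchvision import models

EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]  # assumed labels

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze pretrained features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(face_batch, labels):
    """One fine-tuning step on face crops from a given crowd-density setting."""
    optimizer.zero_grad()
    loss = criterion(model(face_batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```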

Redundant Parallel Hopfield Network Configurations: A New Approach to the Two-Dimensional Face Recognitions (병렬 다중 홉 필드 네트워크 구성으로 인한 2-차원적 얼굴인식 기법에 대한 새로운 제안)

  • Kim, Yong Taek; Deo, Kiatama
    • KIPS Transactions on Software and Data Engineering, v.7 no.2, pp.63-68, 2018
  • Interest in face recognition has been increasing due to diverse emerging applications. Face recognition from a two-dimensional source can be challenging under circumstances such as varying face orientation, illumination levels, face details (e.g., with or without glasses), and various expressions such as smiling or crying. Hopfield networks have been used especially for recalling patterns, generalization, familiarity recognition, and error correction. Based on those abilities, a specific experiment is conducted in this paper to apply the Redundant Parallel Hopfield Network to a face recognition problem. The new design was experimentally confirmed and tested to be robust in a variety of practical situations.
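
The recall ability the abstract relies on is the classic Hopfield dynamic: store patterns with Hebbian weights, then iterate until a corrupted probe settles on a stored pattern. Below is a minimal sketch of that primitive (the redundant-parallel configuration itself is not reproduced here); the toy bipolar patterns stand in for encoded face features.

```python
# Minimal Hopfield network sketch: Hebbian storage plus asynchronous recall.
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for bipolar (+1/-1) patterns, zero diagonal."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / patterns.shape[0]

def recall(W, probe, steps=10):
    """Asynchronous updates until the state settles on a stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Toy usage: store two orthogonal "face codes", recall from a corrupted probe.
patterns = np.array([[1, -1, 1, -1, 1, 1], [1, 1, -1, -1, -1, 1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, -1, 1])   # first pattern with one flipped bit
print(recall(W, noisy))                    # -> recovers the first pattern
```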

A Face Robot Actuated with Artificial Muscle (인공근육을 이용한 얼굴로봇)

  • 곽종원; 지호준; 정광목; 남재도; 전재욱; 최혁렬
    • Journal of Institute of Control, Robotics and Systems, v.10 no.11, pp.991-999, 2004
  • Face robots capable of expressing their emotional status can be adopted as an efficient tool for friendly communication between human and machine. In this paper, we present a face robot actuated with artificial muscle based on a dielectric elastomer. By exploiting the properties of polymers, it is possible to actuate the covering skin and eyes and to provide human-like expressivity without employing complicated mechanisms. The robot is driven by seven types of actuator modules, namely eye, eyebrow, eyelid, brow, cheek, jaw, and neck modules, corresponding to movements of the facial muscles. Although these cover only part of the whole set of facial motions, our approach is sufficient to generate six fundamental facial expressions: surprise, fear, anger, disgust, sadness, and happiness. Each module communicates with the others via the CAN communication protocol. For a desired emotional expression, the facial motion is generated by combining the motions of each actuator module. A prototype of the robot has been developed, and several experiments have been conducted to validate its feasibility.
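
One way to picture the module coordination described above is a per-module activation table for each expression, framed as one CAN message per actuator. The sketch below uses the python-can library; the arbitration IDs, payload encoding, and activation values are all hypothetical, since the paper does not publish its protocol.

```python
# Minimal sketch: compose per-module commands for a target expression and
# frame them as CAN messages. IDs and scalings are assumptions.
import can  # python-can

MODULES = ["eye", "eyebrow", "eyelid", "brow", "cheek", "jaw", "neck"]
MODULE_ID = {m: 0x100 + i for i, m in enumerate(MODULES)}  # hypothetical IDs

# Hypothetical normalized activations (0..1) per module for one expression.
SURPRISE = {"eye": 0.9, "eyebrow": 1.0, "eyelid": 0.8, "brow": 0.7,
            "cheek": 0.2, "jaw": 0.6, "neck": 0.1}

def expression_frames(activations):
    """One CAN frame per actuator module; payload is a 0-255 command byte."""
    return [can.Message(arbitration_id=MODULE_ID[m],
                        data=[int(255 * a)], is_extended_id=False)
            for m, a in activations.items()]

for msg in expression_frames(SURPRISE):
    print(msg)  # with real hardware: bus.send(msg) on a configured can.Bus
```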

Face recognition invariant to partial occlusions

  • Aisha, Azeem; Muhammad, Sharif; Hussain, Shah Jamal; Mudassar, Raza
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.7, pp.2496-2511, 2014
  • Face recognition is considered a complex biometric task in the field of image processing, mainly due to the constraints imposed by variation in the appearance of facial images. These variations in appearance are caused by differences in expressions and/or occlusions (sunglasses, scarves, etc.). This paper discusses incremental Kernel Fisher Discriminant Analysis on sub-classes for dealing with partial occlusions and varying expressions. The framework focuses on the division of classes into fixed-size sub-classes for effective feature extraction. For this purpose, it modifies traditional Linear Discriminant Analysis into an incremental approach in kernel space. Experiments are performed on the AR, ORL, Yale B, and MIT-CBCL face databases. The results show a significant improvement in face recognition.
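
The core structural idea above, relabeling each subject's images into fixed-size sub-classes before discriminating in a kernel feature space, can be sketched as follows. The KernelPCA-plus-LDA pipeline is only a simple stand-in for the paper's incremental kernel Fisher analysis, and the toy data are random placeholders for face features.

```python
# Minimal sketch: split each subject's images into fixed-size sub-classes,
# then discriminate in a kernel feature space (KernelPCA + LDA stand-in).
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def to_subclasses(labels, size=3):
    """Relabel samples so every `size` images of a subject form a sub-class."""
    counters, sublabels = {}, []
    for y in labels:
        k = counters.get(y, 0)
        sublabels.append(f"{y}_{k // size}")
        counters[y] = k + 1
    return np.array(sublabels)

# Toy data: 4 subjects x 6 images of 100-D features (stand-in for face crops).
rng = np.random.default_rng(1)
X = rng.normal(size=(24, 100))
y = np.repeat(["s1", "s2", "s3", "s4"], 6)

clf = make_pipeline(KernelPCA(n_components=10, kernel="rbf"),
                    LinearDiscriminantAnalysis())
clf.fit(X, to_subclasses(y))
pred = clf.predict(X)                       # sub-class label, e.g. "s2_1"
print([p.split("_")[0] for p in pred[:6]])  # recover subject identity
```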

Song Player by Distance Measurement from Face (얼굴에서 거리 측정에 의한 노래 플레이어)

  • Shin, Seong-Yoon; Lee, Min-Hye; Shin, Kwang-Seong; Lee, Hyun-Chang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.05a, pp.667-669, 2022
  • In this paper, Face Song Player, a system that recognizes an individual's facial expression and plays music appropriate for that person, is presented. It learns information on facial contour lines, extracts an average, and acquires facial shape information. The MUCT DB was used as the training DB. For facial expression recognition, an algorithm was designed using the differences in the characteristics of each expression relative to expressionless images.
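
A simple reading of the pipeline above is nearest-template matching: compare landmark-derived features against per-expression averages and queue a track for the closest one. The sketch below follows that reading; the feature names, template values, and playlist are placeholders, not statistics learned from MUCT.

```python
# Minimal sketch: nearest-template expression recognition driving song choice.
# Feature vectors are [mouth_curve, eye_opening, brow_height] placeholders.
import numpy as np

TEMPLATES = {
    "happy":   np.array([0.8, 0.5, 0.5]),
    "sad":     np.array([0.2, 0.4, 0.3]),
    "neutral": np.array([0.5, 0.5, 0.5]),
}
PLAYLIST = {"happy": "upbeat.mp3", "sad": "ballad.mp3", "neutral": "ambient.mp3"}

def pick_song(features):
    """Nearest-template expression decides which track to play."""
    expr = min(TEMPLATES, key=lambda e: np.linalg.norm(features - TEMPLATES[e]))
    return expr, PLAYLIST[expr]

print(pick_song(np.array([0.75, 0.48, 0.52])))  # -> ('happy', 'upbeat.mp3')
```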

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong; Lee, Seul-Gi; Kim, Dong-Woo; Ryu, Sung-Pil; Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association, v.14 no.11, pp.7-15, 2014
  • This paper proposes a recognition algorithm for human facial expressions using PCA and template matching. First, the face image is acquired from an input image using the Haar-like feature mask. The face image is divided into two images: an upper image including the eyes and eyebrows, and a lower image including the mouth and jaw. Extraction of the facial components, such as the eyes and mouth, begins by obtaining an eye image and a mouth image. An eigenface is then produced by the PCA training process with learning images, and an eigeneye and an eigenmouth are produced from the eigenface. The eye image is obtained by template-matching the upper image with the eigeneye, and the mouth image is obtained by template-matching the lower image with the eigenmouth. Facial expression recognition then uses geometrical properties of the extracted eyes and mouth. Simulation results show that the proposed method has a higher extraction rate than previous methods; the extraction rate for the mouth image in particular reaches 99%. A recognition system using the proposed method achieves a recognition rate greater than 80% for three facial expressions: fright, anger, and happiness.
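
The extraction pipeline above maps naturally onto OpenCV primitives: a Haar cascade for the face region, a split into upper and lower halves, and normalized template matching against a PCA-derived template. A minimal sketch follows; the input file names and the eigen-eye template image are hypothetical.

```python
# Minimal sketch: Haar face detection, upper-half crop, template matching.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def locate_eye(image_bgr, eye_template_gray):
    """Find the best eye-template match inside the upper half of the face."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    x, y, w, h = face_cascade.detectMultiScale(gray, 1.1, 5)[0]
    upper = gray[y:y + h // 2, x:x + w]        # eye/eyebrow half of the face
    res = cv2.matchTemplate(upper, eye_template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, (ex, ey) = cv2.minMaxLoc(res)  # best-matching location
    return (x + ex, y + ey), score

img = cv2.imread("face.jpg")                    # hypothetical input image
eye = cv2.imread("eigeneye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical template
print(locate_eye(img, eye))
```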

A Study on Improvement of Face Recognition Rate with Transformation of Various Facial Poses and Expressions (얼굴의 다양한 포즈 및 표정의 변환에 따른 얼굴 인식률 향상에 관한 연구)

  • Choi Jae-Young; Whangbo Taeg-Keun; Kim Nak-Bin
    • Journal of Internet Computing and Services, v.5 no.6, pp.79-91, 2004
  • Detecting and recognizing faces in various poses has been a difficult problem, because the distribution of varied poses in a feature space is more dispersed and more complicated than that of frontal faces. This thesis proposes a robust pose- and expression-invariant face recognition method in order to overcome the insufficiency of existing face recognition systems. First, we apply the TSL color model to detect the facial region and estimate the direction of the face using facial features. The estimated pose vector is decomposed along the X-Y-Z axes. Second, the input face is mapped by a deformable template using these vectors and the 3D CANDIDE face model. Finally, the mapped face is transformed by the estimated pose vector into a frontal face appropriate for face recognition. Through the experiments, we validate the face detection model and the method for estimating facial poses. Moreover, the tests show that the recognition rate is greatly boosted through the normalization of poses and expressions.
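
The pose-vector estimation step can be illustrated with a perspective-n-point fit of 2D landmarks against generic 3D reference points. In the sketch below, the six 3D points and the simple camera matrix are assumptions standing in for the paper's facial-feature geometry and the CANDIDE model.

```python
# Minimal sketch: estimate head pose from 2D landmarks via cv2.solvePnP.
import numpy as np
import cv2

# Generic 3D reference points: nose tip, chin, eye corners, mouth corners (mm).
MODEL_3D = np.array([(0, 0, 0), (0, -63, -12), (-43, 32, -26), (43, 32, -26),
                     (-28, -29, -24), (28, -29, -24)], dtype=np.float64)

def head_pose(landmarks_2d, frame_size):
    """Return rotation/translation vectors decomposing pose along X-Y-Z."""
    h, w = frame_size
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, landmarks_2d, cam, None)
    return rvec, tvec  # rvec would feed the frontal-face warp that follows

# Hypothetical landmark positions in a 640x480 frame.
pts = np.array([(320, 240), (318, 340), (250, 190), (390, 190),
                (270, 300), (370, 300)], dtype=np.float64)
print(head_pose(pts, (480, 640))[0].ravel())
```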

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok; Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.8, pp.3473-3487, 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two sets of example models, called key visemes and key expressions, are used for lip synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on the phonetic transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During the synthesis process, an importance-based scheme is introduced to combine both lip synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation with high accuracy (over 90%) in speech recognition.
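
The two synthesis steps above, interpolating between key visemes and blending key expressions from an emotion parameter, can be sketched in a few lines. The inverse-distance weighting below is a simple stand-in for the paper's scattered data interpolation, and all shape vectors are toy values.

```python
# Minimal sketch: linear interpolation between key visemes, plus an
# inverse-distance blend of key expressions (stand-in for scattered data
# interpolation). Shape vectors here are 2-D toy parameters.
import numpy as np

VISEMES = {"A": np.array([1.0, 0.2]), "M": np.array([0.0, 0.9])}  # toy shapes

def lip_sync(transcript, frames_per_phoneme=4):
    """Interpolate mouth parameters between consecutive key visemes."""
    out = []
    for a, b in zip(transcript, transcript[1:]):
        for t in np.linspace(0, 1, frames_per_phoneme, endpoint=False):
            out.append((1 - t) * VISEMES[a] + t * VISEMES[b])
    return np.array(out)

KEY_EXPRESSIONS = {  # emotion parameter -> face offsets (illustrative)
    (1.0, 0.0): np.array([0.3, 0.1]),   # happy
    (0.0, 1.0): np.array([-0.2, 0.0]),  # sad
}

def blend_expression(param, eps=1e-6):
    """Inverse-distance weighting over the key expressions."""
    w = {k: 1.0 / (np.linalg.norm(np.subtract(param, k)) + eps)
         for k in KEY_EXPRESSIONS}
    total = sum(w.values())
    return sum(wk / total * KEY_EXPRESSIONS[k] for k, wk in w.items())

frames = lip_sync("AMA") + blend_expression((0.7, 0.3))  # combined sequence
print(frames.round(2))
```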