• Title/Summary/Keyword: Facial Emotions


Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi;Joo, Jong-Tae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.1
    • /
    • pp.20-26
    • /
    • 2008
  • As intelligent robots and computers become more common, human-machine interaction grows in importance, and emotion recognition and expression are indispensable for that interaction. In this paper, we first extract emotional features from speech signals and facial images. We then apply BL (Bayesian Learning) and PCA (Principal Component Analysis) to classify five emotion patterns (normal, happy, anger, surprise, and sad), and we experiment with decision fusion and feature fusion to enhance the emotion recognition rate. In the decision fusion method, the output values of each recognition system are combined through a fuzzy membership function. In the feature fusion method, discriminative features are selected through SFS (Sequential Forward Selection) and fed to a neural network based on an MLP (Multi-Layer Perceptron) that classifies the five emotion patterns. The recognized result is then applied to a 2D facial model to express the emotion.
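
A minimal sketch of the feature-fusion path this abstract describes (SFS selecting features that feed an MLP), using scikit-learn as a stand-in for the authors' implementation; the random arrays are assumptions replacing the paper's real speech and facial features:

```python
# Feature-fusion sketch: SFS picks a subset of the fused speech+face features,
# then an MLP classifies five emotions. Synthetic data replaces real features.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))           # 120 samples, 16 fused features (toy)
y = rng.integers(0, 5, size=120)         # 5 emotion classes: normal..sad

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=400, random_state=0)
sfs = SequentialFeatureSelector(mlp, n_features_to_select=5,
                                direction="forward", cv=2)
model = make_pipeline(StandardScaler(), sfs, mlp)
model.fit(X, y)
kept = model.named_steps["sequentialfeatureselector"].get_support().nonzero()[0]
print("selected feature indices:", kept)
```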

Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation (대화 영상 생성을 위한 한국어 감정음성 및 얼굴 표정 데이터베이스)

  • Baek, Ji-Young;Kim, Sera;Lee, Seok-Pil
    • Journal of Internet Computing and Services
    • /
    • v.23 no.2
    • /
    • pp.71-77
    • /
    • 2022
  • In this paper, a database is collected for extending a speech synthesis model into one that synthesizes speech according to emotion and generates facial expressions. The database is divided into male and female data and consists of emotional speech and facial expressions. Two professional actors of different genders speak sentences in Korean, divided into four emotions: happiness, sadness, anger, and neutrality. Each actor performs about 3,300 sentences per emotion. The 26,468 sentences collected by filming do not overlap and contain expressions matching the corresponding emotion. Since building a high-quality database is important for the performance of future research, the database is assessed on emotional category, intensity, and genuineness. To determine recognition accuracy according to data modality, the database is divided into audio-video, audio-only, and video-only data.

Real-time Recognition System of Facial Expressions Using Principal Component of Gabor-wavelet Features (표정별 가버 웨이블릿 주성분특징을 이용한 실시간 표정 인식 시스템)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.6
    • /
    • pp.821-827
    • /
    • 2009
  • Human emotion is reflected in facial expressions, so recognizing facial expressions is a good way to understand people's emotions. Conventional facial expression recognition systems select interest points and then extract features without analyzing their physical meaning; finding these points takes a long time, and their positions are hard to estimate accurately. Moreover, implementing facial expression recognition on a real-time embedded system requires a simplified algorithm and reduced resource usage. In this paper, we propose a real-time facial expression recognition algorithm that projects grid points onto an expression space based on Gabor wavelet features. A facial expression is described compactly by feature vectors in this expression space and classified by a neural network whose resource requirements are dramatically reduced. The proposed system handles five expressions: anger, happiness, neutral, sadness, and surprise. In experiments, the average execution time is 10.251 ms and the recognition rate is measured at 87~93%.
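
The Gabor-plus-PCA front end can be illustrated in a few lines; this is a toy sketch under assumed parameters (kernel size, grid spacing, component count), not the paper's tuned pipeline:

```python
# Sketch: Gabor-wavelet responses sampled at grid points, reduced by PCA.
# The face image here is random noise standing in for a real face crop.
import numpy as np
import cv2
from sklearn.decomposition import PCA

img = np.random.rand(64, 64).astype(np.float32)   # placeholder face crop

# Bank of Gabor kernels at 4 orientations (ksize, sigma, theta, lambd, gamma).
kernels = [cv2.getGaborKernel((9, 9), 2.0, t, 6.0, 0.5)
           for t in np.linspace(0, np.pi, 4, endpoint=False)]
responses = [cv2.filter2D(img, cv2.CV_32F, k) for k in kernels]

# Sample each response on an 8x8 grid and concatenate into one vector.
ys, xs = np.mgrid[4:64:8, 4:64:8]
feat = np.concatenate([r[ys, xs].ravel() for r in responses])  # 4*64 = 256 dims

# PCA would be learned over many training vectors; a random batch stands in.
train = np.random.rand(100, feat.size)
pca = PCA(n_components=16).fit(train)
print("projected feature shape:", pca.transform(feat[None, :]).shape)  # (1, 16)
```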

Emotion Recognition Based on Facial Expression by using Context-Sensitive Bayesian Classifier (상황에 민감한 베이지안 분류기를 이용한 얼굴 표정 기반의 감정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB
    • /
    • v.13B no.7 s.110
    • /
    • pp.653-662
    • /
    • 2006
  • In ubiquitous computing, which builds computing environments that provide appropriate services according to the user's context, emotion recognition based on facial expressions serves as an essential means of HCI, making human-machine interaction more efficient and enabling user context awareness. This paper addresses the problem of basic emotion recognition from context-sensitive facial expressions through a new Bayesian classifier. The task consists of two steps: a facial feature extraction step based on a color-histogram method, and a classification step employing a new Bayesian learning algorithm for efficient training and testing. A context-sensitive Bayesian learning algorithm, EADF (Extended Assumed-Density Filtering), is proposed to recognize emotions more accurately by using different classifier complexities for different contexts. Experimental results show an expression classification accuracy of over 91% on the test database and an error rate of 10.6% when modeling facial expression as a hidden context.
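
EADF itself is specific to the paper, but the surrounding idea, a separate Bayesian classifier per context, can be sketched with a plain Gaussian naive Bayes; the contexts and features below are invented:

```python
# Much-simplified stand-in for a context-sensitive Bayesian classifier:
# one Gaussian naive-Bayes model per context. The paper's EADF additionally
# adapts classifier complexity per context, which this sketch does not do.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
contexts = ["bright", "dim"]                 # hypothetical contexts
models = {}
for c in contexts:
    X = rng.normal(size=(150, 12))           # color-histogram features (toy)
    y = rng.integers(0, 6, size=150)          # six basic emotions
    models[c] = GaussianNB().fit(X, y)

x_new, ctx = rng.normal(size=(1, 12)), "dim"  # route by observed context
print("predicted emotion id:", models[ctx].predict(x_new)[0])
```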

Analysis of Visual Attention in Negative Emotional Expression Emoticons using Eye-Tracking Device (시선추적 장치를 활용한 부정적 감정표현 이모티콘의 시각적 주의집중도 분석)

  • Park, Minhee;Kwon, Mahnwoo;Hwang, Mikyung
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.11
    • /
    • pp.1580-1587
    • /
    • 2021
  • The development and sale of diverse emoticons has given users a wider range of choices, but a systematic, specific understanding of how actual users perceive and use emoticons is lacking. This study therefore investigated actual users' subjective perception of, and visual attention to, negative emotional expression emoticons through a survey and an eye-tracking experiment. First, the subjective perception analysis showed that emoticons are used frequently because their appearance matters and because they can express various emotions in a fun and interesting way. In particular, emoticons expressing negative emotions are often used because they can convey negative feelings indirectly through varied, concretely rendered visual elements. Next, the eye-tracking experiment showed that for negative emotional expression emoticons, attention concentrated on large elements that visually enlarge or emphasize the emotional expression, and not only on the facial expression but also on physical behavioral responses and verbal expressions of emotion. These results can serve as basic data for understanding users' perception and use of increasingly diverse emoticons. For the long-term growth and activation of the emoticon industry, continuous research should follow to understand the varied emotions of real users and to develop differentiated emoticons that maximize the empathy effect appropriate to each situation.
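
The dwell-time analysis underlying such eye-tracking results reduces to counting fixation durations inside areas of interest (AOIs); a minimal sketch with invented coordinates:

```python
# Sketch of AOI (area-of-interest) analysis: total fixation duration per
# labeled region of an emoticon. Coordinates and AOIs are invented.
fixations = [(120, 80, 210), (130, 90, 340), (300, 200, 150)]  # (x, y, dur_ms)
aois = {"face": (100, 60, 200, 160),        # x0, y0, x1, y1
        "gesture": (250, 150, 380, 260)}

dwell = {name: 0 for name in aois}
for x, y, dur in fixations:
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            dwell[name] += dur

print(dwell)  # total dwell time per AOI: {'face': 550, 'gesture': 150}
```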

Representation of Facial Expressions of Different Ages: A Multidimensional Scaling Study (다양한 연령의 얼굴 정서 표상: 다차원척도법 연구)

  • Kim, Jongwan
    • Science of Emotion and Sensibility
    • /
    • v.24 no.3
    • /
    • pp.71-80
    • /
    • 2021
  • Previous studies using facial expressions have revealed valence and arousal as the two core dimensions of affective space. However, it remains unknown whether this two-dimensional structure is consistent across ages. This study investigated affective dimensions using six facial expressions (angry, disgusted, fearful, happy, neutral, and sad) at three ages (young, middle-aged, and old). Earlier studies typically required participants to directly rate the subjective similarity between pairs of facial expressions. In this study, we instead collected indirect measures by asking participants to decide whether a pair of stimuli conveyed the same emotion. Multidimensional scaling showed that the "angry-disgusted" and "sad-disgusted" pairs were similar at all three ages. In addition, the "angry-sad," "angry-neutral," "neutral-sad," and "disgusted-fearful" pairs were similar at old age. When the two faces in a pair reflected the same emotion, "sad" was recognized least accurately in old age, suggesting that the ability to recognize "sad" declines with age. This study suggests that the general two-dimensional core structure is robust across age groups, with exceptions for specific emotions.
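
The MDS step can be reproduced in outline: embed the six expressions in 2D from a dissimilarity matrix. The matrix below is random; the study would derive it from same/different judgments (e.g., one minus the proportion of "same" responses):

```python
# Sketch of the multidimensional-scaling step on a 6x6 dissimilarity matrix.
import numpy as np
from sklearn.manifold import MDS

labels = ["angry", "disgusted", "fearful", "happy", "neutral", "sad"]
rng = np.random.default_rng(2)
d = rng.uniform(0.2, 1.0, size=(6, 6))
D = (d + d.T) / 2                    # symmetrize
np.fill_diagonal(D, 0.0)             # zero self-dissimilarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
for name, (x, y) in zip(labels, coords):
    print(f"{name:10s} -> ({x:+.2f}, {y:+.2f})")
```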

Facial Expression Research according to Arbitrary Changes in Emotions through Visual Analytic Method (영상분석법에 의한 자의적 정서변화에 따른 표정연구)

  • Byun, In-Kyung;Lee, Jae-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.10
    • /
    • pp.71-81
    • /
    • 2013
  • Facial expressions shape an individual's image, and the ability to interpret emotion from facial expressions is at the core of human relations; recognizing emotion through facial expression is important enough to change attitudes and decisions within social relationships. Children with unstable attachment development, seniors, people with autism, children with ADHD, and people with depression show low performance on facial expression recognition tasks, and active intervention with these groups suggests potential preventive and therapeutic effects for psychological disorders. Quantified measurements of detailed positional changes of the lips, eyes, and cheeks suggest possible applications in diverse fields, such as human sensibility ergonomics, Korean culture and art content, therapeutic and educational interventions for psychological disorders, and non-verbal communication to overcome cultural differences in a globalizing multicultural society.

A Study on the Applicability of Facial Action Coding System for Product Design Process (제품 디자인 프로세스를 위한 표정 부호화 시스템(FACS) 적용성에 대한 연구)

  • Huang, Chao;Go, Jung-Wook
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.3
    • /
    • pp.80-88
    • /
    • 2019
  • With growing emphasis on emotional communication with users in the product design field, a designer's clear and prompt grasp of user emotion has become a core activity in product design research. To increase the flexibility of applying emotion measurement in the product design process, this study uses the Facial Action Coding System (FACS), a behavioral emotion measurement method, in product design evaluation. Specimens were selected using an emotional product image map. The study then selected six product stimuli inducing positive, negative, and neutral emotions, conducted a FACS experiment with ordinary product users in their 20s as subjects, and analyzed the users' emotional states in response to the stimuli through their facial expressions. It also analyzes the advantages and disadvantages of FACS in the product design process, such as "recording users' unconscious facial expressions," and puts forward applicable schemes, such as "choosing a product stimulus with a high user response." This paper is expected to support the flexible use of FACS as a method of predicting user emotion at the trial stage of product design, before products are launched to market.
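
FACS coding itself is manual, but the downstream lookup from coded action units (AUs) to coarse emotions can be sketched; the AU combinations below follow common FACS-based conventions rather than this paper's specific scheme:

```python
# Illustrative mapping from FACS action units (AUs) to coarse emotion labels,
# the kind of lookup an evaluator might apply to coded expressions.
AU_PATTERNS = {
    frozenset({6, 12}): "happiness",     # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",    # inner brow raiser + brow lowerer
                                         # + lip corner depressor
    frozenset({4, 5, 7, 23}): "anger",   # brow lowerer + upper lid raiser
                                         # + lid tightener + lip tightener
}

def classify(observed_aus):
    """Return the first emotion whose AU pattern is contained in the observation."""
    for pattern, emotion in AU_PATTERNS.items():
        if pattern <= observed_aus:
            return emotion
    return "neutral/other"

print(classify({6, 12, 25}))   # -> happiness
```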

Hybrid-Feature Extraction for the Facial Emotion Recognition

  • Byun, Kwang-Sub;Park, Chang-Hyun;Sim, Kwee-Bo;Jeong, In-Cheol;Ham, Ho-Sang
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1281-1285
    • /
    • 2004
  • There are numerous emotions in the human world, and humans express and recognize emotion through various channels, for example the eyes, nose, and mouth. In particular, emotion recognition from facial expressions can be very flexible and robust precisely because it draws on these multiple channels. The hybrid-feature extraction algorithm is based on this human process: it combines geometrical feature extraction with a color-distribution histogram, and the input emotion is then classified through independent, parallel learning of neural networks. In addition, an advancing two-dimensional emotion space is introduced for natural classification of emotion, enabling flexible and smooth classification.
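
A minimal sketch of the hybrid scheme, assuming toy data: geometric features and a color histogram are learned by independent parallel networks whose outputs are then fused:

```python
# Hybrid-feature sketch: a geometric feature vector and a color histogram are
# computed independently, each feeds its own network ("independently parallel
# learning"), and class probabilities are averaged. Toy data replaces images.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n = 120
geo = rng.normal(size=(n, 10))                    # e.g. eye/mouth distances
img = rng.integers(0, 256, size=(n, 32, 32, 3))   # toy face crops
hist = np.stack([np.histogram(im[..., 0], bins=16, range=(0, 256))[0]
                 for im in img]).astype(float)    # per-image color histogram
y = rng.integers(0, 4, size=n)                    # emotion classes (toy)

net_geo = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800,
                        random_state=0).fit(geo, y)
net_hist = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800,
                         random_state=0).fit(hist, y)

# Simple fusion: average the two networks' class probabilities.
proba = (net_geo.predict_proba(geo[:1]) + net_hist.predict_proba(hist[:1])) / 2
print("fused class probabilities:", np.round(proba, 3))
```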


A Gesture-Emotion Keyframe Editor for sign-Language Communication between Avatars of Korean and Japanese on the Internet

  • Kim, Sang-Woon;Lee, Yung-Who;Lee, Jong-Woo;Aoki, Yoshinao
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.831-834
    • /
    • 2000
  • Sign language can be used as an auxiliary means of communication between avatars of different languages. An intelligent communication method can also be utilized to achieve real-time communication, where intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real pictures. In this paper we design a gesture-emotion keyframe editor that makes it easy to obtain these parameter values. To calculate the joint angles of the arms and hands and to generate the in-between keyframes realistically, an inverse kinematics transformation matrix and several kinds of constraints are applied. To edit emotional expressions efficiently, a comic-style facial model having only eyebrows, eyes, nose, and mouth is employed. Experimental results show that the editor could be used for intelligent sign-language image communication between different languages.
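
The arm-angle computation such an editor needs can be illustrated with the textbook closed-form inverse kinematics of a planar two-link arm (an assumption standing in for the paper's transformation-matrix formulation):

```python
# Closed-form inverse kinematics for a planar 2-link arm: recover shoulder and
# elbow angles from a desired hand position. Link lengths are invented.
import math

def two_link_ik(x, y, l1=0.30, l2=0.25):
    """Return (shoulder, elbow) angles in radians reaching point (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))          # clamp for numerical safety
    elbow = math.acos(c2)                  # elbow-down solution
    k1, k2 = l1 + l2 * c2, l2 * math.sin(elbow)
    shoulder = math.atan2(y, x) - math.atan2(k2, k1)
    return shoulder, elbow

s, e = two_link_ik(0.35, 0.20)
print(f"shoulder={math.degrees(s):.1f} deg, elbow={math.degrees(e):.1f} deg")
```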
