• Title/Summary/Keyword: Recognition of Facial Expressions

Facial Expression Recognition with Fuzzy C-Means Clustering Algorithm and Neural Network Based on Gabor Wavelets

  • Youngsuk Shin;Chansup Chung;Lee, Yillbyung
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2000.04a
    • /
    • pp.126-132
    • /
    • 2000
  • This paper presents a facial expression recognition method based on Gabor wavelets that uses a fuzzy C-means (FCM) clustering algorithm and a neural network. Features of facial expressions are extracted in two steps. In the first step, the Gabor wavelet representation provides edge extraction of the major face components, using the average value of the image's 2-D Gabor wavelet coefficient histogram as a threshold. In the next step, sparse features of facial expressions are extracted from the resulting edge information with the FCM clustering algorithm. The recognition results are compared with dimensional values of internal states derived from semantic ratings of emotion-related words. The dimensional model can recognize not only the six facial expressions related to Ekman's basic emotions but also expressions of various other internal states.

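The two-step feature extraction described above can be illustrated with a minimal sketch (not the authors' implementation): Gabor magnitude responses are thresholded by the mean of the coefficient histogram to obtain an edge map, and a small hand-rolled fuzzy C-means (FCM) pass clusters the surviving edge pixels into sparse feature points. The placeholder image, filter parameters, and cluster count are illustrative assumptions.

```python
import cv2
import numpy as np

def gabor_edge_map(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Sum Gabor magnitude responses over a few orientations, keep pixels above the mean."""
    acc = np.zeros_like(gray, dtype=np.float64)
    for theta in thetas:
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0.0)
        acc += np.abs(cv2.filter2D(gray.astype(np.float64), cv2.CV_64F, kernel))
    return acc > acc.mean()  # mean of the coefficient histogram used as the edge threshold

def fuzzy_c_means(points, c=8, m=2.0, iters=50, seed=0):
    """Plain FCM over 2-D edge-pixel coordinates; returns the cluster centres."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), c))
    u /= u.sum(axis=1, keepdims=True)  # fuzzy memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centres = (um.T @ points) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        u = 1.0 / dist ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centres

# Hypothetical face crop; a real image would be loaded with cv2.imread(path, cv2.IMREAD_GRAYSCALE).
gray = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
edges = gabor_edge_map(gray)
coords = np.column_stack(np.nonzero(edges)).astype(np.float64)
sparse_features = fuzzy_c_means(coords, c=8)  # sparse expression features for the neural network
print(sparse_features)
```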

An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia Pacific Journal of Information Systems
    • /
    • v.27 no.1
    • /
    • pp.38-53
    • /
    • 2017
  • As sensor and image processing technologies make it easy to collect information on users' behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. In the multimodal case combining facial and body expressions, most studies have used ordinary cameras, and thus relied on limited information because such cameras generally produce only two-dimensional images. In the present research, we propose an artificial neural network-based model using a high-definition webcam and a Kinect sensor to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research will be helpful for the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.
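
The fusion idea can be sketched briefly, under assumptions that are not the authors' setup: a facial feature vector (e.g. flattened webcam landmarks) and a body feature vector (e.g. Kinect joint coordinates) are concatenated and fed to a multilayer perceptron. The random arrays below only stand in for real recordings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples = 200
face_feats = rng.random((n_samples, 68 * 2))   # e.g. flattened facial landmark coordinates
body_feats = rng.random((n_samples, 25 * 3))   # e.g. Kinect skeleton joints (x, y, z)
labels = rng.integers(0, 4, size=n_samples)    # e.g. 4 emotion classes while watching a trailer

x = np.hstack([face_feats, body_feats])        # early fusion of the two modalities
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(x, labels)
print("training accuracy:", clf.score(x, labels))
```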

Real-time Recognition System of Facial Expressions Using Principal Component of Gabor-wavelet Features (표정별 가버 웨이블릿 주성분특징을 이용한 실시간 표정 인식 시스템)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.6
    • /
    • pp.821-827
    • /
    • 2009
  • Human emotions are reflected in facial expressions, so recognizing facial expressions is a good way to understand people's emotions. Conventional facial expression recognition systems select interesting points and then extract features without analyzing their physical meaning; finding the interesting points takes a long time, and it is hard to estimate their positions accurately. Moreover, implementing a facial expression recognition system on a real-time embedded system requires simplifying the algorithm and reducing resource usage. In this paper, we propose a real-time facial expression recognition algorithm that projects grid points onto an expression space based on Gabor wavelet features. A facial expression is described simply by a feature vector in the expression space and is classified by a neural network whose resource requirements are dramatically reduced. The proposed system handles five expressions: anger, happiness, neutral, sadness, and surprise. In experiments, the average execution time was 10.251 ms and the recognition rate was measured at 87~93%.
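
A rough sketch of this idea, not the paper's implementation: Gabor responses sampled at grid points are projected onto a low-dimensional expression space with PCA, and the projected vectors feed a small neural-network classifier. Grid spacing, PCA dimension, and the synthetic images are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def grid_gabor_features(gray, step=16):
    """Stack Gabor magnitudes sampled at grid points over a few orientations."""
    feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        k = cv2.getGaborKernel((15, 15), 3.0, theta, 8.0, 0.5, 0.0)
        resp = np.abs(cv2.filter2D(gray.astype(np.float64), cv2.CV_64F, k))
        feats.append(resp[::step, ::step].ravel())
    return np.concatenate(feats)

# Hypothetical pre-cropped face images and labels (anger, happiness, neutral, sadness, surprise).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 5, size=100)

x = np.array([grid_gabor_features(img) for img in images])
pca = PCA(n_components=20).fit(x)              # principal components span the expression space
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(pca.transform(x), labels)
print("training accuracy:", clf.score(pca.transform(x), labels))
```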

Personalized Facial Expression Recognition System using Fuzzy Neural Networks and robust Image Processing (퍼지 신경망과 강인한 영상 처리를 이용한 개인화 얼굴 표정 인식 시스템)

  • 김대진;김종성;변증남
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.25-28
    • /
    • 2002
  • This paper introduces a personalized facial expression recognition system. Many previous works on facial expression recognition focus on the formal six universal facial expressions; however, it is very difficult for an ordinary person to produce such expressions without considerable effort and training. In addition, personalized services have recently become a major focus for researchers in various fields. We therefore propose a novel facial expression recognition system based on fuzzy neural networks and robust image processing.

A Study on the Facial Expression Recognition using Deep Learning Technique

  • Jeong, Bong Jae;Kang, Min Soo;Jung, Yong Gyu
    • International Journal of Advanced Culture Technology
    • /
    • v.6 no.1
    • /
    • pp.60-67
    • /
    • 2018
  • In this paper, a method for extracting the symbol that matches a facial expression is proposed, using an Android intelligent device to identify the expression. Understanding and expressing facial expressions are very important for human-computer interaction, and technologies for identifying human expressions are very popular. Instead of searching for the symbols they use most often, users can have their facial expressions identified with a camera, which is a useful technique that can be applied today. This thesis uses a third-party dataset available on the web to improve facial expression recognition accuracy and refines a neural network algorithm to build a facial expression recognition model that matches the user's facial expression to similar expressions, reaching 66% accuracy. There is no need to search for symbols: when the camera recognizes the expression, the corresponding symbol appears immediately. This service therefore makes sending symbols in messages much more convenient, since the user no longer has to look for a symbol among countless candidates. As deep learning is an increasing trend, more suitable algorithms for expression recognition should be used to further improve accuracy.
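
A minimal sketch of the kind of model such a system might use, assuming a small convolutional network over 48x48 grayscale face crops; the random arrays stand in for the third-party dataset the thesis mentions, and this is not the author's actual model.

```python
import numpy as np
import tensorflow as tf

num_classes = 7                                   # e.g. seven expression categories
x_train = np.random.rand(256, 48, 48, 1).astype("float32")
y_train = np.random.randint(0, num_classes, size=256)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
print(model.predict(x_train[:1]).argmax())        # predicted expression index for one frame
```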

Learning Directional LBP Features and Discriminative Feature Regions for Facial Expression Recognition (얼굴 표정 인식을 위한 방향성 LBP 특징과 분별 영역 학습)

  • Kang, Hyunwoo;Lim, Kil-Taek;Won, Chulho
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.5
    • /
    • pp.748-757
    • /
    • 2017
  • Good features that can express facial expressions are essential for recognizing them, and it is equally important to find the characteristic areas in which facial expressions appear most discriminatively. In this study, we propose a directional LBP feature for facial expression recognition and a method of finding the directional LBP operation and the feature regions for facial expression classification. The proposed directional LBP features, which characterize fine facial micro-patterns, are defined by LBP operation factors (the direction and size of the operation mask) and by feature regions selected through AdaBoost learning. The facial expression classifier is implemented as an SVM classifier based on the learned discriminative regions and directional LBP operation factors. To verify the validity of the proposed method, facial expression recognition performance was measured in terms of accuracy, sensitivity, and specificity. Experimental results show that the proposed directional LBP and its learning method are useful for facial expression recognition.
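
A hedged sketch of the pipeline, with standard uniform LBP histograms standing in for the paper's directional LBP operator: an AdaBoost step ranks candidate face regions by how well each separates the classes, and an SVM classifies the concatenated histograms of the selected regions. The region grid, parameters, and synthetic data are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def region_lbp_hist(gray, r0, c0, size=16, P=8, R=1):
    """Uniform LBP histogram of one square face region."""
    patch = gray[r0:r0 + size, c0:c0 + size]
    lbp = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Hypothetical 64x64 face crops and expression labels.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(120, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 6, size=120)
regions = [(r, c) for r in range(0, 64, 16) for c in range(0, 64, 16)]  # 4x4 region grid

# Score each candidate region by how well its LBP histogram alone separates the classes.
scores = []
for (r, c) in regions:
    feats = np.array([region_lbp_hist(img, r, c) for img in images])
    booster = AdaBoostClassifier(n_estimators=20, random_state=0).fit(feats, labels)
    scores.append(booster.score(feats, labels))
best = [regions[i] for i in np.argsort(scores)[-4:]]                    # keep 4 discriminative regions

x = np.array([np.concatenate([region_lbp_hist(img, r, c) for (r, c) in best]) for img in images])
clf = SVC(kernel="rbf").fit(x, labels)
print("training accuracy:", clf.score(x, labels))
```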

A facial expressions recognition algorithm using image area segmentation and face element (영역 분할과 판단 요소를 이용한 표정 인식 알고리즘)

  • Lee, Gye-Jeong;Jeong, Ji-Yong;Hwang, Bo-Hyun;Choi, Myung-Ryul
    • Journal of Digital Convergence
    • /
    • v.12 no.12
    • /
    • pp.243-248
    • /
    • 2014
  • In this paper, we propose a method to recognize facial expressions by selecting face elements and determining their status. The face elements are selected using an image area segmentation method, and the facial expression is decided using the normal distribution of the change rate of the face elements. To recognize the proper facial expression, we built a database of facial expressions from 90 people and propose a method to decide among four expressions (happiness, anger, stress, and sadness). The proposed method has been simulated and verified in terms of face element detection rate and facial expression recognition rate.
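
One way to read the normal-distribution step, sketched under explicit assumptions: the change rate is taken as a vector of relative changes of a few face elements versus a neutral frame, and each expression class is modelled with per-feature normal distributions via Gaussian naive Bayes. The feature definitions and synthetic measurements are assumptions, not the authors' method.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
neutral = rng.random((200, 4)) + 1.0                          # element sizes in a neutral frame
current = neutral * (1.0 + 0.2 * rng.standard_normal((200, 4)))
change_rate = (current - neutral) / neutral                   # relative change of each face element
labels = rng.integers(0, 4, size=200)                         # happiness, anger, stress, sadness

model = GaussianNB().fit(change_rate, labels)                 # per-class normal distributions
print(model.predict(change_rate[:5]))                         # predicted expressions for 5 samples
```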

Discrimination of Emotional States in Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.2E
    • /
    • pp.98-104
    • /
    • 2002
  • The present study describes a combination method for recognizing human affective states such as anger, happiness, sadness, and surprise. We extracted emotional features from voice signals and facial expressions and trained models to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were modeled by the HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were fed to the NN for recognition. The recognition rates for the combined voice and facial expression parameters were better than those for either set of parameters in isolation. The simulation results were also compared with human questionnaire results.
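
A minimal late-fusion sketch, assuming the hmmlearn and scikit-learn packages: one Gaussian HMM per emotion scores a prosodic frame sequence, a small MLP scores a facial feature vector, and the two log-scores are added before picking the emotion. All data below are synthetic placeholders, not the paper's features.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.neural_network import MLPClassifier

emotions = ["anger", "happiness", "sadness", "surprise"]
rng = np.random.default_rng(0)

# Voice: one HMM per emotion over 4-dim prosodic frames (pitch, energy and their deltas).
voice_models = {}
for i, emo in enumerate(emotions):
    seqs = rng.standard_normal((300, 4)) + i          # stand-in training frames for this emotion
    voice_models[emo] = GaussianHMM(n_components=3, covariance_type="diag",
                                    random_state=0).fit(seqs)

# Face: an MLP over feature vectors (e.g. from thermal and visible images).
face_x = rng.standard_normal((200, 32))
face_y = rng.integers(0, len(emotions), size=200)
face_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(face_x, face_y)

# Fusion for one test sample: per-frame HMM log-likelihood plus MLP log-probability.
test_voice = rng.standard_normal((50, 4))
test_face = rng.standard_normal((1, 32))
voice_scores = np.array([voice_models[e].score(test_voice) for e in emotions])
face_scores = np.log(face_net.predict_proba(test_face)[0] + 1e-12)
combined = voice_scores / len(test_voice) + face_scores
print("predicted emotion:", emotions[int(np.argmax(combined))])
```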

A Study on the Emoticon Extraction based on Facial Expression Recognition using Deep Learning Technique (딥 러닝 기술 이용한 얼굴 표정 인식에 따른 이모티콘 추출 연구)

  • Jeong, Bong-Jae;Zhang, Fan
    • Korean Journal of Artificial Intelligence
    • /
    • v.5 no.2
    • /
    • pp.43-53
    • /
    • 2017
  • In this paper, a method for extracting the emoticon that matches a facial expression is proposed, using an Android intelligent device to identify the expression. Understanding and expressing facial expressions are very important for human-computer interaction, and technologies for identifying human expressions are very popular. Instead of searching for the emoticons they use most often, users can have their facial expressions identified with a camera, which is a useful technique that can be applied today. This thesis uses a third-party dataset available on the web to improve facial expression recognition accuracy and refines a neural network algorithm to build a facial expression recognition model that matches the user's facial expression to similar expressions, reaching 66% accuracy. There is no need to search for emoticons: when the camera recognizes the expression, the corresponding emoticon appears immediately. This service therefore makes sending emoticons in messages much more convenient, since the user no longer has to look for an emoticon among countless candidates. As deep learning is an increasing trend, more suitable algorithms for expression recognition should be used to further improve accuracy.
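
Because the recognition model mirrors the previous entry, only the emoticon step is sketched here: a recognized expression label maps directly to an emoticon, so nothing has to be searched for when composing a message. The label set, table, and helper function are illustrative assumptions.

```python
EMOTICONS = {
    "happiness": "😊",
    "sadness": "😢",
    "anger": "😠",
    "surprise": "😮",
    "neutral": "🙂",
}

def emoticon_for(label: str) -> str:
    """Return the emoticon to insert into the message for a recognized expression."""
    return EMOTICONS.get(label, EMOTICONS["neutral"])

# Usage: the camera frame is classified, and the matching emoticon appears immediately.
predicted_label = "happiness"            # stand-in for a classifier output
print(emoticon_for(predicted_label))     # 😊
```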

Effects of the facial expression presenting types and facial areas on the emotional recognition (얼굴 표정의 제시 유형과 제시 영역에 따른 정서 인식 효과)

  • Lee, Jung-Hun;Park, Soo-Jin;Han, Kwang-Hee;Ghim, Hei-Rhee;Cho, Kyung-Ja
    • Science of Emotion and Sensibility
    • /
    • v.10 no.1
    • /
    • pp.113-125
    • /
    • 2007
  • The experimental studies described in this paper investigate how the face, eye, and mouth areas of dynamic and static facial expressions affect emotion recognition. Using seven-second displays, Experiment 1 examined basic emotions and Experiment 2 examined complex emotions. The results of both experiments showed that dynamic facial expressions support emotion recognition better than static ones and that, for dynamic images, the eye area produces a stronger recognition effect than the mouth area. These results suggest that dynamic properties should be considered in facial expression studies of emotion, for complex as well as basic emotions. However, the properties of each emotion must also be considered, because the dynamic-image advantage was not equal across emotions. Furthermore, the study indicates that which facial area conveys an emotional state most accurately depends on the particular emotion.
