• Title/Abstract/Keyword: facial information processing

Search Results: 219 (Processing Time: 0.029 seconds)

The analysis of relationships between facial impressions and physical features (얼굴 인상과 물리적 특징의 관계 구조 분석)

  • 김효선;한재현
    • Korean Journal of Cognitive Science
    • /
    • v.14 no.4
    • /
    • pp.53-63
    • /
    • 2003
  • We analyzed the relationships between facial impressions and physical features, and investigated the effects of impressions on facial similarity judgments. Using 79 faces extracted from a face database, we collected impression ratings along four dimensions (mild-fierce, bright-dull, feminine-manly, and youthful-mature) and measurements of 41 physical features. Multiple regression analyses showed that the impression ratings and the feature measurements are closely related. Our experiments on facial similarity judgments confirmed the possibility that facial impressions are used in the processing of facial information: people tend to perceive faces sharing the same impression as more similar than faces with neutral impressions, even when all of the faces are physically alike. These results imply that facial impressions serve as a psychological structure representing facial appearance, and that facial processing incorporates impression information.
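
The multiple-regression step described above can be sketched as follows. This is a toy illustration with synthetic data (the feature values, ratings, and noise level are invented; only the sample and feature counts come from the abstract). It shows the shape of the analysis: regress one impression rating on the 41 physical feature measures and report R².

```python
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_features = 79, 41                  # sizes taken from the abstract
X = rng.normal(size=(n_faces, n_features))    # synthetic physical feature measures
true_w = rng.normal(size=n_features)
y = X @ true_w + rng.normal(scale=0.1, size=n_faces)  # synthetic impression ratings

# Ordinary least squares with an intercept column
X1 = np.hstack([X, np.ones((n_faces, 1))])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# R^2 measures how closely the ratings and the features are connected
y_hat = X1 @ coef
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(float(r2), 3))
```

In the paper one such regression is fit per impression dimension; here a single synthetic dimension stands in for all four.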


Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems
    • /
    • v.9 no.1
    • /
    • pp.173-188
    • /
    • 2013
  • In facial expression recognition systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to these areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. A one-vs.-rest SVM, a popular multi-class classification method, is employed with a Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
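
The region-based LBP feature extraction described above can be sketched as follows. The region images here are random stand-ins for the cropped eye, nose, and mouth areas (in the paper these come from Haar-feature-based cascade detectors). The sketch computes a basic 8-neighbour LBP code per region and concatenates the per-region histograms into one feature vector, which would then be fed to the one-vs.-rest RBF SVM.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for each interior pixel."""
    c = gray[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def region_histogram(gray):
    """Normalized 256-bin histogram of a region's LBP codes."""
    hist, _ = np.histogram(lbp_image(gray), bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(1)
# random stand-ins for the cropped eye, nose, and mouth regions
regions = [rng.integers(0, 256, size=(24, 24)).astype(np.uint8) for _ in range(3)]
feature_vector = np.concatenate([region_histogram(r) for r in regions])
print(feature_vector.shape)   # three concatenated 256-bin histograms
```
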

Personalized Facial Expression Recognition System using Fuzzy Neural Networks and robust Image Processing (퍼지 신경망과 강인한 영상 처리를 이용한 개인화 얼굴 표정 인식 시스템)

  • 김대진;김종성;변증남
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.25-28
    • /
    • 2002
  • This paper introduces a personalized facial expression recognition system. Much previous work on facial expression recognition has focused on the six formal universal facial expressions. However, it is very difficult for an ordinary person to produce such expressions without considerable effort and training. Moreover, personalized services have recently become a major focus of researchers in various fields. Thus, we propose a novel facial expression recognition system based on fuzzy neural networks and robust image processing.


Recognition of Human Facial Expression in a Video Image using the Active Appearance Model

  • Jo, Gyeong-Sic;Kim, Yong-Guk
    • Journal of Information Processing Systems
    • /
    • v.6 no.2
    • /
    • pp.261-268
    • /
    • 2010
  • Tracking human facial expression within a video image has many useful applications, such as surveillance and teleconferencing. Initially, the Active Appearance Model (AAM) was proposed for face recognition; however, it turns out that the AAM has many advantages for continuous facial expression recognition. We have implemented a continuous facial expression recognition system using the AAM. In this study, we adopt an independent AAM using the Inverse Compositional Image Alignment method. The system was evaluated using the standard Cohn-Kanade facial expression database, and the results show that it could have numerous potential applications.
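
The linear shape model underlying an AAM can be sketched as follows. This toy example with synthetic landmark data (the landmark count and mode count are assumptions) only builds the PCA model and projects a shape onto it; real AAM fitting, including the Inverse Compositional alignment used in the paper, iteratively solves for these parameters and is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)
n_shapes, n_landmarks = 40, 68          # 68 is a common landmark count (assumption)
mean_shape = rng.normal(size=2 * n_landmarks)
true_modes = rng.normal(size=(3, 2 * n_landmarks))
# synthetic training shapes lying in a 3-mode subspace around the mean
shapes = mean_shape + rng.normal(size=(n_shapes, 3)) @ true_modes

# PCA of the centred shape matrix via SVD
centred = shapes - shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
basis = Vt[:3]                          # keep 3 shape modes

# project a shape into model space and reconstruct it
p = (shapes[0] - shapes.mean(axis=0)) @ basis.T   # shape parameters
recon = shapes.mean(axis=0) + p @ basis
print(float(np.abs(recon - shapes[0]).max()) < 1e-6)
```

Since the synthetic shapes lie exactly in a 3-mode subspace, the reconstruction is exact up to floating-point error.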

Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems
    • /
    • v.19 no.3
    • /
    • pp.323-333
    • /
    • 2023
  • Facial expression recognition can aid in the development of driver fatigue detection, teaching quality evaluation, and other fields. In this study, a facial expression recognition method with a residual masking reconstruction network as its backbone was proposed to achieve more efficient expression recognition and classification. The residual layer is used to acquire and capture the information features of the input image, and the masking layer supplies the weight coefficients corresponding to different information features, enabling accurate and effective analysis of images of different sizes. To further improve expression analysis, the loss function of the model is optimized in two respects, the feature dimension and the data dimension, to strengthen the mapping between facial features and emotion labels. The simulation results show that the ROC of the proposed method remained above 0.9995, meaning different expressions can be accurately distinguished, and the precision was 75.98%, indicating excellent performance of the facial expression recognition model.
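
The residual-plus-masking idea can be sketched as follows. This is a minimal, hypothetical numpy version of one block: the residual branch extracts features and a sigmoid mask reweights them before the skip connection adds the input back. The actual network's convolutional layers, reconstruction stage, and loss terms are not reproduced.

```python
import numpy as np

def residual_masking_block(x, feat_w, mask_w):
    """One toy block: the residual branch extracts features (ReLU), the
    masking branch produces per-feature weights in (0, 1), and the
    skip connection adds the input back."""
    feat = np.maximum(x @ feat_w, 0.0)            # residual features
    mask = 1.0 / (1.0 + np.exp(-(x @ mask_w)))    # sigmoid mask weights
    return x + mask * feat

rng = np.random.default_rng(3)
x = rng.normal(size=(4, 8))                       # a small batch of feature rows
out = residual_masking_block(x, 0.1 * rng.normal(size=(8, 8)),
                             0.1 * rng.normal(size=(8, 8)))
print(out.shape)
```
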

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed. Then, a double-layer cascade structure is used to detect the face in each video image, and two deep convolutional neural networks are used to extract the temporal-domain and spatial-domain facial features from the video. The spatial convolutional neural network extracts spatial information features from each frame of the static expression images in the video, while the temporal convolutional neural network extracts dynamic information features from the optical flow across multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method are as high as 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
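
The multiplicative fusion step can be sketched as follows. The feature vectors here are random stand-ins for the outputs of the spatial and temporal CNNs (their dimensionality is an assumption); the sketch shows only the element-wise product fusion, after which the fused vectors would go to the SVM classifier.

```python
import numpy as np

def fuse_multiplicative(spatial_feat, temporal_feat):
    """Element-wise product of the two streams' features (the paper's
    multiplicative fusion step)."""
    return spatial_feat * temporal_feat

rng = np.random.default_rng(4)
spatial = rng.normal(size=(5, 128))    # stand-in spatial CNN features per clip
temporal = rng.normal(size=(5, 128))   # stand-in optical-flow CNN features per clip
fused = fuse_multiplicative(spatial, temporal)
print(fused.shape)                     # fused vectors would then go to the SVM
```
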

Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.2373-2378
    • /
    • 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: image processing, facial feature extraction, and emotion detection. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and histogram analysis. Features for emotion detection are then extracted from the facial components in the facial feature extraction stage. In the emotion detection stage, a fuzzy classifier is adopted to recognize emotion from the extracted features. Experimental results show that the proposed algorithm detects emotion well.
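
A fuzzy classifier of the kind used in the emotion detection stage can be sketched as follows. The two measurements, the rule set, and all membership-function breakpoints are invented for illustration; the paper's actual features and rules are not specified here.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def classify_emotion(mouth_open, brow_raise):
    """Toy fuzzy rules over two hypothetical measurements in [0, 1]."""
    happy    = min(tri(mouth_open, 0.4, 0.8, 1.0), tri(brow_raise, 0.0, 0.3, 0.7))
    surprise = min(tri(mouth_open, 0.5, 0.9, 1.0), tri(brow_raise, 0.5, 0.9, 1.0))
    neutral  = min(tri(mouth_open, 0.0, 0.1, 0.5), tri(brow_raise, 0.0, 0.1, 0.5))
    scores = {"happy": happy, "surprise": surprise, "neutral": neutral}
    return max(scores, key=scores.get)

print(classify_emotion(0.05, 0.05))   # low measurements fall under "neutral"
```

Each rule takes the minimum of its antecedent memberships (fuzzy AND), and the emotion with the strongest firing rule wins.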


Facial Feature Extraction in Reduced Image using Generalized Symmetry Transform (일반화 대칭 변환을 이용한 축소 영상에서의 얼굴특징추출)

  • Paeng, Young-Hye;Jung, Sung-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.2
    • /
    • pp.569-576
    • /
    • 2000
  • The Generalized Symmetry Transform (GST) can extract the positions of facial features without prior information about an image. However, this method requires a great deal of processing time, because the mask used in the GST must be larger than objects such as the eyes, mouth, and nose, and the computation of the middle line used to decide facial features is complex. In this paper, we propose two remedies for these disadvantages of the conventional method. First, we use a reduced image that retains enough information instead of the original image, which decreases the processing time. Second, we use extracted peak positions instead of complex statistical processing to obtain the middle lines. To analyze the performance of the proposed method, we tested 200 images, including frontal, rotated, spectacled, and mustached facial images. As a result, the proposed method achieves 85% feature-extraction performance and reduces the processing time by a factor of more than 53 compared with the existing method.
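
A simplified symmetry measure in the spirit of the GST can be sketched as follows. For each pixel it accumulates gradient-magnitude products over point pairs placed symmetrically around it; the full GST also weights pairs by gradient phase and distance, which is omitted here. Since the cost grows with the number of pixels times the mask area, running it on a reduced image with a smaller mask, as the paper proposes, cuts the processing time sharply.

```python
import numpy as np

def symmetry_map(gray, radius=3):
    """Accumulate, at each pixel, gradient-magnitude products over point
    pairs placed symmetrically around it (the phase and distance
    weighting of the full GST is omitted)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    H, W = gray.shape
    out = np.zeros((H, W))
    pad = np.pad(mag, radius)            # zero-pad so shifts stay in bounds
    for dy in range(-radius, radius + 1):
        # enumerate one offset of each symmetric pair (half-plane only)
        for dx in range(0 if dy > 0 else 1, radius + 1):
            p = pad[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
            q = pad[radius - dy:radius - dy + H, radius - dx:radius - dx + W]
            out += p * q
    return out

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0             # a bright square with strong local symmetry
smap = symmetry_map(img, radius=4)
print(smap.shape)
```
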


Local and Global Attention Fusion Network For Facial Emotion Recognition (얼굴 감정 인식을 위한 로컬 및 글로벌 어텐션 퓨전 네트워크)

  • Minh-Hai Tran;Tram-Tran Nguyen Quynh;Nhu-Tai Do;Soo-Hyung Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.493-495
    • /
    • 2023
  • Facial emotion recognition has recently attracted much attention, and deep learning methods and attention mechanisms have been incorporated to improve it. Fusion approaches have improved accuracy by combining various types of information. This research proposes a fusion network with self-attention and local attention mechanisms that uses a multi-layer perceptron network. The network extracts distinguishing characteristics from facial images using models pre-trained on the RAF-DB dataset. We outperform the other fusion methods on the RAF-DB dataset with impressive results.
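
One way a local/global attention fusion could look is sketched below. This is a hypothetical numpy toy: local patch features are pooled with softmax attention scored against a global face feature, and the pooled vector is concatenated with the global one. The paper's pre-trained backbones and MLP head are not reproduced.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def attention_fusion(local_feats, global_feat):
    """Score each local patch feature against the global face feature,
    pool the locals with softmax attention, then concatenate."""
    scores = local_feats @ global_feat       # one score per patch
    weights = softmax(scores)                # attention weights, sum to 1
    pooled = weights @ local_feats           # attention-weighted local summary
    return np.concatenate([pooled, global_feat])

rng = np.random.default_rng(5)
local_feats = rng.normal(size=(6, 32))   # features of 6 hypothetical face patches
global_feat = rng.normal(size=32)        # whole-face feature
fused = attention_fusion(local_feats, global_feat)
print(fused.shape)
```
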

Detection of Facial Direction for Automatic Image Arrangement (이미지 자동배치를 위한 얼굴 방향성 검출)

  • 동지연;박지숙;이환용
    • Journal of Information Technology Applications and Management
    • /
    • v.10 no.4
    • /
    • pp.135-147
    • /
    • 2003
  • With the development of multimedia and optical technologies, application systems using facial features have recently attracted increasing interest from researchers. Previous research efforts in face processing mainly use frontal images in order to recognize human faces visually and to extract facial expressions. However, applications such as image database systems that support queries based on facial direction, and image arrangement systems that automatically place facial images in digital albums, must deal with the directional characteristics of a face. In this paper, we propose a method to detect facial direction using facial features. In the proposed method, a facial trapezoid is defined by detecting points for the eyes and the lower lip. A facial direction formula, which calculates the rightward or leftward facial direction, is then derived from statistical data on the ratio of the right and left areas of facial trapezoids. The proposed method estimates the horizontal rotation of a face within an error tolerance of ±1.31 degrees and takes an average execution time of 3.16 seconds.
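
The left/right-area idea behind the direction formula can be sketched as follows. This is a simplified stand-in, not the paper's actual formula: a trapezoid is built from hypothetical eye and mouth points, split at the midpoints of its top and bottom edges, and the left-to-right area ratio (via the shoelace formula) is compared, with a ratio far from 1 indicating horizontal rotation. The paper's trapezoid is instead defined from the eyes and lower lip, with a statistically derived formula.

```python
import numpy as np

def polygon_area(pts):
    """Shoelace formula; pts is an (n, 2) array of (x, y) vertices."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def direction_ratio(left_eye, right_eye, right_mouth, left_mouth):
    """Split the eye/mouth trapezoid down the middle and compare the
    left and right areas; a ratio far from 1 suggests rotation."""
    top_mid = (left_eye + right_eye) / 2
    bot_mid = (left_mouth + right_mouth) / 2
    left = polygon_area(np.array([left_eye, top_mid, bot_mid, left_mouth]))
    right = polygon_area(np.array([top_mid, right_eye, right_mouth, bot_mid]))
    return left / right

# a symmetric (frontal) configuration gives a ratio of 1
left_eye, right_eye = np.array([-2.0, 2.0]), np.array([2.0, 2.0])
left_mouth, right_mouth = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
ratio = direction_ratio(left_eye, right_eye, right_mouth, left_mouth)
print(round(float(ratio), 3))
```
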
