• Title/Summary/Keyword: Facial Information

Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems / v.9 no.1 / pp.173-188 / 2013
  • In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to such areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. Then, LBP is applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs.-rest SVM, which is a popular multi-class classification method, is employed with the Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
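
To make the described pipeline concrete, the following is a minimal sketch of a region-based LBP plus one-vs.-rest RBF-SVM flow in Python. The stock OpenCV eye and smile cascades, the uniform LBP parameters (P=8, R=1), and the SVM settings are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a region-based LBP + one-vs.-rest RBF-SVM pipeline (illustrative only).
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# Haar cascades shipped with OpenCV; the cascade choices are assumptions, not the paper's setup.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
mouth_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def region_lbp_histogram(gray_region, points=8, radius=1):
    """Uniform LBP histogram of one facial region."""
    lbp = local_binary_pattern(gray_region, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def face_descriptor(gray_face):
    """Detect region boxes with Haar cascades and concatenate their LBP histograms."""
    feats = []
    for cascade in (eye_cascade, mouth_cascade):
        boxes = cascade.detectMultiScale(gray_face, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in boxes[:1]:          # keep the first detection per region type
            feats.append(region_lbp_histogram(gray_face[y:y + h, x:x + w]))
    return np.concatenate(feats) if feats else None

# One-vs.-rest multi-class SVM with an RBF kernel for the classification stage.
clf = SVC(kernel="rbf", decision_function_shape="ovr", C=10.0, gamma="scale")
# clf.fit(np.vstack(train_descriptors), train_labels)   # descriptors would come from CK+/JAFFE
```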

A Factor Analysis for the Success of Commercialization of the Facial Extraction and Recognition Image Information System (얼굴추출 및 인식 영상정보 시스템 상용화 성공요인 분석)

  • Kim, Shin-Pyo;Oh, Se-Dong
    • Journal of Industrial Convergence / v.13 no.2 / pp.45-54 / 2015
  • This study aims to analyze the factors for the success of commercialization of the facial extraction and recognition image security information system of domestic companies in Korea. As the results of the analysis, the internal factors for the success of commercialization were found to include (1) possession of technology for close-range facial recognition, (2) possession of several facial-recognition-related patents, (3) preference for the facial recognition security system over fingerprint recognition, and (4) strong volition of the CEO of the corresponding company. On the other hand, the external environmental factors for the success were found to include (1) extensiveness of the market, (2) rapid growth of the global facial recognition market, (3) increased demand for image security systems, (4) competition in securing an engine for facial extraction and recognition, and (5) selection by the government as one of the 100 major strategic products.

Detection of Facial Features Using Color and Facial Geometry (색 정보와 기하학적 위치관계를 이용한 얼굴 특징점 검출)

  • 정상현;문인혁
    • Proceedings of the IEEK Conference / 2002.06d / pp.57-60 / 2002
  • Facial features are often used for human-computer interfaces (HCI). This paper proposes a method to detect facial features using color and facial geometry information. The face region is first extracted using color information, and then the pupils are detected by applying a separability filter and facial geometry constraints. The mouth is also extracted from the Cr (red chrominance) component. Experimental results show that the proposed detection method is robust to a wide range of facial variations in position, scale, color, and gaze.
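
The color-based stage can be sketched roughly as skin segmentation in the YCrCb space plus a Cr-channel mouth emphasis map. The threshold values below are common rule-of-thumb assumptions rather than the paper's calibrated ones, and the separability filter for pupil detection is omitted.

```python
import cv2
import numpy as np

def detect_face_and_mouth_by_color(bgr_image):
    """Rough skin-color face mask plus a Cr-channel mouth candidate map (illustrative)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Commonly used skin range in YCrCb; the exact bounds are an assumption.
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Lips are strongly red: inside the skin mask, high Cr values highlight the mouth area.
    cr = ycrcb[:, :, 1]
    mouth_map = cv2.bitwise_and(cr, cr, mask=skin_mask)
    _, mouth_candidates = cv2.threshold(mouth_map, 160, 255, cv2.THRESH_BINARY)
    return skin_mask, mouth_candidates
```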

Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

  • He, Jun;Li, Dongliang;Bo, Sun;Yu, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5546-5559 / 2019
  • Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few have been able to handle multi-view face images. In this paper, we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network completes two tasks: AU detection and facial view detection. AU detection is a multi-label problem, whereas facial view detection is a single-label problem. A residual network and multilayer fusion are applied to obtain more representative features. Our method is effective and performs well: the F1 score on FERA 2017 is 13.1% higher than the baseline, and the facial view recognition accuracy is 0.991. This shows that our multi-task, multi-label model can achieve good performance on both tasks.
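
The multi-task, multi-label structure can be sketched as a shared backbone with two heads: a sigmoid/BCE head for the AU labels (multi-label) and a cross-entropy head for the facial view (single-label). The ResNet-18 backbone, the numbers of AUs and views, and the loss weighting below are assumptions, not the paper's exact architecture.

```python
import torch.nn as nn
from torchvision.models import resnet18

class MultiTaskAUNet(nn.Module):
    """Shared residual backbone with a multi-label AU head and a single-label view head."""
    def __init__(self, num_aus=10, num_views=9):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                 # keep the 512-d pooled feature
        self.backbone = backbone
        self.au_head = nn.Linear(512, num_aus)      # multi-label: one logit per AU
        self.view_head = nn.Linear(512, num_views)  # single-label: one logit per view

    def forward(self, x):
        feat = self.backbone(x)
        return self.au_head(feat), self.view_head(feat)

model = MultiTaskAUNet()
au_loss = nn.BCEWithLogitsLoss()     # multi-label AU detection
view_loss = nn.CrossEntropyLoss()    # single-label view classification

def total_loss(au_logits, view_logits, au_targets, view_targets, alpha=1.0):
    # Joint objective for the two tasks; the weight alpha is an assumed value.
    return au_loss(au_logits, au_targets) + alpha * view_loss(view_logits, view_targets)
```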

Multiscale Adaptive Local Directional Texture Pattern for Facial Expression Recognition

  • Zhang, Zhengyan;Yan, Jingjie;Lu, Guanming;Li, Haibo;Sun, Ning;Ge, Qi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4549-4566 / 2017
  • This work presents a novel facial descriptor, named the multiscale adaptive local directional texture pattern (MALDTP), which is employed for expression recognition. We apply an adaptive threshold value to encode the facial image at different scales and concatenate a series of MALDTP-based histograms to generate the facial descriptor in terms of Gabor filters. In addition, dedicated experiments were conducted to evaluate the performance of the MALDTP method in a person-independent way. The experimental results demonstrate that our proposed method achieves a higher recognition rate than the local directional texture pattern (LDTP). Moreover, the MALDTP method has lower computational complexity, requires less storage space, and achieves higher classification accuracy than the local Gabor binary pattern histogram sequence (LGBPHS) method. In a nutshell, the proposed MALDTP method not only avoids choosing the threshold by experience but also captures much more structural and contrast information of the facial image than LDTP.
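
MALDTP itself is not fully specified in the abstract, but local directional texture patterns are generally built from compass (e.g., Kirsch) edge responses. The sketch below shows that generic idea with a per-pixel adaptive threshold and a simple multiscale histogram, purely as an illustration of this family of descriptors rather than the authors' MALDTP.

```python
import cv2
import numpy as np

# Eight Kirsch compass masks, commonly used in directional pattern descriptors.
KIRSCH = [np.array(m, dtype=np.float32) for m in (
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
)]

def directional_pattern(gray, scale=1):
    """Generic directional code with an adaptive threshold (not the authors' exact MALDTP)."""
    img = cv2.resize(gray, None, fx=1.0 / scale, fy=1.0 / scale) if scale > 1 else gray
    responses = np.stack([cv2.filter2D(img.astype(np.float32), -1, k) for k in KIRSCH])
    threshold = responses.mean(axis=0)                 # adaptive, per-pixel threshold
    bits = (responses >= threshold).astype(np.uint8)   # 8 directional bits per pixel
    weights = (1 << np.arange(8)).reshape(8, 1, 1)
    return (bits * weights).sum(axis=0).astype(np.uint8)

def maldtp_like_descriptor(gray, scales=(1, 2, 4)):
    """Concatenate 256-bin histograms of the directional codes over several scales."""
    hists = [np.histogram(directional_pattern(gray, s), bins=256, range=(0, 256),
                          density=True)[0] for s in scales]
    return np.concatenate(hists)
```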

Fake News Detection on Social Media using Video Information: Focused on YouTube (영상정보를 활용한 소셜 미디어상에서의 가짜 뉴스 탐지: 유튜브를 중심으로)

  • Chang, Yoon Ho;Choi, Byoung Gu
    • The Journal of Information Systems / v.32 no.2 / pp.87-108 / 2023
  • Purpose The main purpose of this study is to improve fake news detection performance by using video information to overcome the limitations of extant text- and image-oriented studies that do not reflect the latest news consumption trend. Design/methodology/approach This study collected video clips and related information, including news scripts, speakers' facial expressions, and video metadata, from YouTube to develop a fake news detection model. Based on the collected data, seven combinations of related information (e.g., scripts; video metadata; facial expression; scripts and video metadata; scripts and facial expression; and scripts, video metadata, and facial expression) were used as inputs for training and evaluation. The input data was analyzed using six models, including support vector machine and deep neural network. The area under the curve (AUC) was used to evaluate the performance of the classification models. Findings The results showed that the AUC and accuracy values of the three-feature combination (scripts, video metadata, and facial expression) were the highest for the logistic regression, naïve Bayes, and deep neural network models. This result implies that fake news detection can be improved by using video information (video metadata and facial expression). The sample size of this study was relatively small; the generalizability of the results would be enhanced with a larger sample size.
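
The evaluation protocol (training several classifiers on different combinations of feature blocks and comparing AUC) can be sketched with scikit-learn. The feature arrays, the model list, and the train/test split below are placeholders and assumptions, not the study's exact setup.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def evaluate_feature_combinations(feature_blocks, labels):
    """Train classifiers on every non-empty combination of feature blocks and report AUC."""
    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "naive_bayes": GaussianNB(),
        "svm": SVC(probability=True),
    }
    results = {}
    names = list(feature_blocks)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            X = np.hstack([feature_blocks[name] for name in combo])
            X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
            for model_name, model in models.items():
                model.fit(X_tr, y_tr)
                auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
                results[(combo, model_name)] = auc
    return results

# feature_blocks = {"script": ..., "metadata": ..., "expression": ...}  # placeholder arrays
# results = evaluate_feature_combinations(feature_blocks, labels)
```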

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications / v.27 no.6 / pp.618-631 / 2000
  • In this paper, we propose a method for 3D facial synthesis using the motion of 2D facial images. We use an optical flow-based method for motion estimation. We extract parameterized motion vectors using optical flow between adjacent frames of the image sequence in order to estimate the facial features and the facial motion in 2D image sequences. Then, we combine the parameters of the parameterized motion vectors and estimate the facial motion information. We use parameterized vector models according to the facial features: our motion vector models cover the eye area, the lip-eyebrow area, and the whole face area. By combining the 2D facial motion information with the action units of the 3D facial model, we synthesize and animate the 3D facial model.
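
The motion-estimation step can be sketched with dense Farnebäck optical flow between adjacent frames, averaged over per-region boxes for the eye, lip-eyebrow, and face areas. The flow parameters and the region boxes are illustrative assumptions, not the paper's parameterized vector models.

```python
import cv2
import numpy as np

def region_motion_vectors(prev_gray, next_gray, regions):
    """Dense optical flow between two adjacent frames, averaged per facial region."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    vectors = {}
    for name, (x, y, w, h) in regions.items():
        patch = flow[y:y + h, x:x + w]              # (h, w, 2): dx, dy per pixel
        vectors[name] = patch.reshape(-1, 2).mean(axis=0)
    return vectors

# Region boxes are placeholders; a real system would locate them per frame.
# regions = {"eyes": (60, 80, 120, 40), "lip_eyebrow": (70, 150, 100, 60), "face": (40, 40, 160, 200)}
# motion = region_motion_vectors(prev_gray, next_gray, regions)
```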

The analysis of relationships between facial impressions and physical features (얼굴 인상과 물리적 특징의 관계 구조 분석)

  • 김효선;한재현
    • Korean Journal of Cognitive Science / v.14 no.4 / pp.53-63 / 2003
  • We analyzed the relationships between facial impressions and physical features, and investigated the effects of impressions on facial similarity judgments. Using 79 faces extracted from a face database, we collected ratings of impressions along four dimensions (mild-fierce, bright-dull, feminine-manly, and youthful-mature) and measures of 41 physical features. Multiple regression analyses showed that the impression ratings and the feature measures are closely connected with each other. Our experiments using facial similarity judgments confirmed the possibility that facial impressions are used in the processing of facial information. We found that people tend to perceive faces as more similar when they share the same impression than when the impressions are neutral, even though the faces are physically alike. These results imply that facial impressions serve as a psychological structure representing facial appearance, and that facial processing includes impression information.
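
The regression step (relating each impression dimension to the physical measurements) can be sketched with ordinary least squares; the variable names and data shapes below are assumptions.

```python
from sklearn.linear_model import LinearRegression

def fit_impression_models(physical_features, impression_ratings, dimensions):
    """Fit one multiple-regression model per impression dimension (e.g., mild-fierce)."""
    models = {}
    for i, dim in enumerate(dimensions):
        model = LinearRegression()
        model.fit(physical_features, impression_ratings[:, i])
        models[dim] = {
            "model": model,
            "r_squared": model.score(physical_features, impression_ratings[:, i]),
        }
    return models

# physical_features: (79, 41) array of measurements; impression_ratings: (79, 4) mean ratings.
# models = fit_impression_models(physical_features, impression_ratings,
#                                ["mild-fierce", "bright-dull", "feminine-manly", "youthful-mature"])
```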

Facial Behavior Recognition for Driver's Fatigue Detection (운전자 피로 감지를 위한 얼굴 동작 인식)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.9C / pp.756-760 / 2010
  • This paper proposes a novel facial behavior recognition system for driver fatigue detection. Facial behavior is manifested in various facial features such as expression, head pose, gaze, and wrinkles, but it is very difficult to clearly discriminate a particular behavior from any single facial feature, because human behavior is complicated and the face alone rarely provides enough information about it. The proposed system first performs facial feature detection, including eye tracking, facial feature tracking, furrow detection, head orientation estimation, and head motion detection, and represents the obtained features as action units (AUs) of the Facial Action Coding System (FACS). On the basis of the obtained AUs, it infers the probability of each fatigue-related state through a Bayesian network.
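
The inference step (estimating the probability of a fatigue state from observed AUs through a Bayesian network) can be illustrated with a tiny hand-coded example that treats the AUs as conditionally independent given the state. The priors and likelihoods are made-up numbers for illustration, not values from the paper.

```python
import numpy as np

# Prior over the hidden state and per-AU likelihoods P(AU observed | state).
# All numbers below are illustrative assumptions, not estimates from the paper.
PRIOR = {"fatigued": 0.3, "alert": 0.7}
LIKELIHOOD = {
    "fatigued": {"AU43_eye_closure": 0.8, "AU26_jaw_drop": 0.6, "head_nod": 0.7},
    "alert":    {"AU43_eye_closure": 0.1, "AU26_jaw_drop": 0.2, "head_nod": 0.1},
}

def fatigue_posterior(observed_aus):
    """Posterior P(state | observed AUs), assuming conditional independence of the AUs."""
    scores = {}
    for state, prior in PRIOR.items():
        likelihood = np.prod([LIKELIHOOD[state][au] for au in observed_aus])
        scores[state] = prior * likelihood
    total = sum(scores.values())
    return {state: score / total for state, score in scores.items()}

print(fatigue_posterior(["AU43_eye_closure", "head_nod"]))
# -> posterior probability of "fatigued" vs. "alert" given the observed action units
```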

Reconstruction of High-Resolution Facial Image Based on A Recursive Error Back-Projection

  • Park, Joeng-Seon;Lee, Seong-Whan
    • Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.715-717 / 2004
  • This paper proposes a new method for reconstructing a high-resolution facial image from a low-resolution facial image, based on recursive error back-projection and top-down machine learning. A face is represented by a linear combination of prototypes of shape and texture. From the shape and texture information of the pixels in a given low-resolution facial image, we can estimate the optimal coefficients for a linear combination of shape prototypes and texture prototypes by solving a least-squares minimization. A high-resolution facial image can then be obtained by applying the optimal coefficients to a linear combination of the high-resolution prototypes. In addition, a recursive error back-projection is applied to improve the accuracy of the synthesized high-resolution facial image. The encouraging results show that the proposed method can be used to improve the performance of face recognition by reconstructing high-resolution facial images from low-resolution ones captured at a distance.
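
The core reconstruction steps (least-squares fitting of prototype coefficients on the low-resolution input, synthesis from the high-resolution prototypes, and a recursive error back-projection loop) can be sketched as follows. The flattened-prototype representation, the 4x magnification factor, the iteration count, and the step size are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def reconstruct_high_res(low_res, lr_prototypes, hr_prototypes, iterations=5, step=1.0):
    """Prototype-combination reconstruction with a simple recursive error back-projection."""
    # 1) Least-squares coefficients: low_res ~ lr_prototypes @ c (images flattened to vectors).
    c, *_ = np.linalg.lstsq(lr_prototypes, low_res.ravel(), rcond=None)

    # 2) Initial high-resolution estimate from the same coefficients (assumed 4x magnification).
    hr_shape = (low_res.shape[0] * 4, low_res.shape[1] * 4)
    high_res = (hr_prototypes @ c).reshape(hr_shape).astype(np.float32)

    # 3) Recursive error back-projection: push the low-resolution residual back into the estimate.
    for _ in range(iterations):
        simulated_lr = cv2.resize(high_res, (low_res.shape[1], low_res.shape[0]),
                                  interpolation=cv2.INTER_AREA)
        error = low_res.astype(np.float32) - simulated_lr
        upsampled_error = cv2.resize(error, (hr_shape[1], hr_shape[0]),
                                     interpolation=cv2.INTER_CUBIC)
        high_res = high_res + step * upsampled_error
    return high_res

# lr_prototypes: (lr_pixels, K) matrix of flattened low-res prototypes;
# hr_prototypes: (hr_pixels, K) matrix of the corresponding high-res prototypes.
```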
