• Title/Summary/Keyword: facial image

Search results: 823, processing time: 0.023 seconds

Facial Region Tracking in YCbCr Color Coordinates (Automatic Detection of Facial Regions via YCbCr Color Image Conversion)

  • Han, M.H.;Kim, K.S.;Yoon, T.H.;Shin, S.W.;Kim, I.Y.
    • Proceedings of the KIEE Conference / 2005.05a / pp.63-65 / 2005
  • In this study, an automatic face-tracking algorithm is proposed that uses the color and edge information of a color image. To reduce the effect of varying illumination conditions, the acquired CCD color image is first transformed into YCbCr color coordinates; morphological image-processing operations and elliptical geometric measures are then applied to extract the refined facial area. (The color-conversion and morphology steps are sketched after the PDF link below.)

  • PDF
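
A minimal sketch of the YCbCr conversion and morphological clean-up described above, using OpenCV. The Cb/Cr bounds are common illustrative skin-segmentation values, not the thresholds reported in the paper, and the elliptical geometric measures are omitted.

```python
import cv2

def face_region_mask(bgr_image):
    """Rough facial-region mask via chrominance thresholding and morphology."""
    # OpenCV stores the converted channels in Y, Cr, Cb order.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Threshold on Cr/Cb only, so luminance (illumination) changes matter less.
    # These bounds are widely used illustrative values, not the paper's.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Morphological opening/closing removes speckle and fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```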

Facial Feature Extraction Based on Private Energy Map in DCT Domain

  • Kim, Ki-Hyun;Chung, Yun-Su;Yoo, Jang-Hee;Ro, Yong-Man
    • ETRI Journal / v.29 no.2 / pp.243-245 / 2007
  • This letter presents a new feature extraction method based on the private energy map (PEM) technique, which exploits the energy characteristics of a facial image. Compared with a non-facial image, a facial image shows strong energy congestion in specific regions of its discrete cosine transform (DCT) coefficients; the PEM is generated from the energy probability of the DCT coefficients of facial images. In experiments, face recognition rates of 100% on the ORL database and 98.8% on the ETRI database were achieved. (The energy-concentration idea is sketched after the PDF link below.)

  • PDF
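
As a hedged illustration of the energy-concentration property the letter builds on, the sketch below measures how much of an image's DCT energy falls in the low-frequency corner. The block size `k` is an assumption, and the actual PEM is a probability map learned over many facial images, which this single ratio only hints at.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    # 2-D type-II DCT, applied along rows and then columns.
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def low_frequency_energy_ratio(gray, k=8):
    """Fraction of total DCT energy in the top-left k x k (low-frequency) block."""
    coeffs = dct2(gray.astype(np.float64))
    return float(np.sum(coeffs[:k, :k] ** 2) / np.sum(coeffs ** 2))
```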

A Study on Detecting Glasses in Facial Image

  • Jung, Sung-Gi;Paik, Doo-Won;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.20 no.12 / pp.21-28 / 2015
  • In this paper, we propose a method for detecting glasses in facial images. Glasses are detected from a weighted sum of two results: one obtained by facial-element detection and one from a glasses-frame candidate region. The facial-element method defines the detection probability of glasses according to whether face components are detected, while the candidate-region method defines features of the glasses frame within that region. The final result is obtained by combining both methods with weights (a sketch follows below). By raising the detection performance for glasses and sunglasses, the proposed method is expected to improve a security system's recognition of facial accessories, for example at ATMs.
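
Purely as a hedged illustration of the weighted two-cue combination, the sketch below scores one cue from eye (facial-element) detection and one from edge evidence in an assumed frame candidate region. Both cue definitions and the weight are placeholders, not the paper's.

```python
import cv2
import numpy as np

# Cascade tuned for eyes behind glasses, shipped with OpenCV.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye_tree_eyeglasses.xml")

def glasses_score(gray_face, weight=0.5):
    eyes = eye_cascade.detectMultiScale(gray_face, 1.1, 5)
    p_component = min(len(eyes) / 2.0, 1.0)                  # facial-element cue
    h, w = gray_face.shape
    bridge = gray_face[h // 4: h // 2, w // 4: 3 * w // 4]   # frame candidate region
    edges = cv2.Canny(bridge, 50, 150)
    p_frame = float(np.count_nonzero(edges)) / edges.size    # frame-edge cue
    return weight * p_component + (1.0 - weight) * p_frame   # weighted sum of cues
```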

Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems / v.19 no.3 / pp.323-333 / 2023
  • Facial expression recognition can aid the development of fatigue-driving detection, teaching-quality evaluation, and other fields. In this study, a facial expression recognition method is proposed with a residual masking reconstruction network as its backbone, to achieve more efficient expression recognition and classification. The residual layer acquires and captures the information features of the input image, and the masking layer supplies the weight coefficients for the different information features, enabling accurate and effective analysis of images of different sizes (a sketch of such a block follows below). To further improve expression analysis, the loss function of the model is optimized along two dimensions, the feature dimension and the data dimension, to strengthen the mapping between facial features and emotion labels. Simulation results show that the ROC of the proposed method stayed above 0.9995, accurately distinguishing different expressions, and the precision was 75.98%, indicating good performance of the facial expression recognition model.
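
The PyTorch sketch below shows one plausible reading of a residual block followed by a learned mask; the layer sizes and the 1x1-convolution mask branch are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ResidualMaskingBlock(nn.Module):
    """Residual feature extraction reweighted by a learned sigmoid mask.

    An illustrative block in the spirit of the abstract, not the paper's
    exact design; channel counts and kernel sizes are placeholders.
    """
    def __init__(self, channels):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # Masking branch: per-pixel weight coefficients in [0, 1].
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.residual(x) + x      # residual feature extraction
        return feat * self.mask(feat)    # reweight features by the mask
```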

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (Emotion Recognition and Expression Method Using a Multi-Sensor Fusion Algorithm)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.754-759 / 2007
  • In this paper, we propose a bi-modal sensor fusion algorithm: an emotion recognition method that classifies four emotions (happy, sad, angry, surprise) by using a facial image and a speech signal together. Feature vectors are extracted from the speech signal using acoustic features, without language features, and the emotional pattern is classified with a neural network. From the facial image, features of the mouth, eyes, and eyebrows are selected, and the extracted feature vectors are reduced to low-dimensional feature vectors by principal component analysis (PCA). The recognition results obtained from the facial image and the speech are then fused into a single emotion decision (a sketch follows below).
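
A minimal decision-level fusion sketch with scikit-learn, assuming pre-computed facial and acoustic feature vectors; the PCA dimension, classifier settings, and fusion weight are illustrative, not the paper's tuned values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def train_bimodal(face_feats, speech_feats, labels):
    # 20 components is a placeholder for the paper's PCA dimension.
    pca = PCA(n_components=20).fit(face_feats)
    face_clf = MLPClassifier(max_iter=1000).fit(pca.transform(face_feats), labels)
    speech_clf = MLPClassifier(max_iter=1000).fit(speech_feats, labels)
    return pca, face_clf, speech_clf

def predict_emotion(pca, face_clf, speech_clf, face_x, speech_x, w=0.5):
    p_face = face_clf.predict_proba(pca.transform(face_x.reshape(1, -1)))[0]
    p_speech = speech_clf.predict_proba(speech_x.reshape(1, -1))[0]
    fused = w * p_face + (1.0 - w) * p_speech       # late fusion of modalities
    return face_clf.classes_[int(np.argmax(fused))]  # e.g. "happy", "sad", ...
```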

Full face recognition using features extracted by shape analysis and the back-propagation algorithm (Frontal Face Recognition Using Feature Extraction by Shape Analysis and the BP Algorithm)

  • 최동선;이주신
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.10 / pp.63-71 / 1996
  • This paper proposes a method that analyzes facial shape and extracts the positions of the eyes regardless of the tilt and size of the input image. Using the feature parameters of the facial elements extracted by this method, frontal human faces are recognized by a neural network trained with the back-propagation (BP) algorithm. The input image is converted to binary form and then labelled. The area, circumference, and circular degree of each labelled binary region are obtained using chain codes and defined as feature parameters of the face image (see the sketch after the PDF link below). The two eyes are first extracted from the similarity and distance of the feature parameters of each facial element, and the input face image is then normalized with respect to the two extracted eyes. After a mask is generated, a line histogram is applied to find the feature points of the facial elements. Distances and angles between the feature points are used as parameters to recognize the full face. The proposed algorithm achieved a 100% recognition rate on both learned and non-learned data for 20 persons.

  • PDF
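
As a hedged sketch of the shape-parameter step, the code below computes area, circumference (perimeter), and circular degree for each labelled region; OpenCV's contour tracing stands in for an explicit chain-code implementation.

```python
import cv2
import numpy as np

def shape_features(binary_image):
    """Area, perimeter, and circular degree of each labelled region.

    binary_image: uint8 image with foreground pixels set to 255.
    Circular degree = 4*pi*area / perimeter**2 (1.0 for a perfect circle).
    """
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    feats = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, closed=True)
        if perimeter > 0:
            feats.append((area, perimeter, 4.0 * np.pi * area / perimeter ** 2))
    return feats
```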

Facial Expression Recognition using 1D Transform Features and Hidden Markov Model

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Daijin
    • Journal of Electrical Engineering and Technology / v.12 no.4 / pp.1657-1662 / 2017
  • Facial expression recognition systems using video devices have emerged as an important component of natural human-machine interfaces, contributing to practical applications such as security systems, behavioral science, and clinical practice. In this work, we present a new method to analyze, represent, and recognize human facial expressions from a sequence of facial images. Under the proposed framework, the overall procedure includes accurate face detection, to remove background and noise effects from the raw image sequences, and alignment of each image using vertex mask generation; 1D transform features are then extracted and reduced by principal component analysis. Finally, the reduced features are trained and tested using a Hidden Markov Model (HMM; a classification sketch follows below). Experimental evaluation on two public facial expression video datasets, Cohn-Kanade and AT&T, achieved recognition rates of 96.75% and 96.92%, respectively, showing the superiority of the proposed approach over state-of-the-art methods.
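
A minimal HMM-classification sketch using the hmmlearn package (an assumption; the paper does not name its implementation): one Gaussian HMM per expression, with the label chosen by maximum log-likelihood over frame-feature sequences.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_hmms(sequences_by_label, n_states=4):
    """Fit one Gaussian HMM per expression label.

    sequences_by_label: dict mapping a label to a list of
    (n_frames, n_features) arrays of per-frame feature vectors
    (assumed already PCA-reduced).
    """
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.vstack(seqs)               # stack all frames of this class
        lengths = [len(s) for s in seqs]  # per-sequence frame counts
        models[label] = GaussianHMM(n_components=n_states).fit(X, lengths)
    return models

def classify(models, seq):
    # Choose the expression whose HMM gives the highest log-likelihood.
    return max(models, key=lambda label: models[label].score(seq))
```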

Facial Image Analysis Algorithm for Emotion Recognition

  • Joo, Y.H.;Jeong, K.H.;Kim, M.H.;Park, J.B.;Lee, J.;Cho, Y.J.
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.7 / pp.801-806 / 2004
  • Although emotion recognition is an important technology demanded in various fields, it remains an unsolved problem, and algorithms based on human facial images in particular still need to be developed. In this paper, we propose a facial image analysis algorithm for emotion recognition. The proposed algorithm is composed of a facial image extraction algorithm and a facial component extraction algorithm. To obtain robust performance under various illumination conditions, a fuzzy color filter is proposed for facial image extraction (a sketch follows below). In the facial component extraction algorithm, a virtual face model is used to provide information for high-accuracy analysis. Finally, simulations are given to check and evaluate the performance.
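
A hedged sketch of a fuzzy color filter in the sense described above: instead of a hard skin/non-skin threshold, each pixel receives a graded membership value. The Gaussian membership function and the Cb/Cr centers below are assumptions, not the paper's definitions.

```python
import cv2
import numpy as np

def skin_membership(bgr_image, cb0=102.0, cr0=153.0, sigma=15.0):
    """Per-pixel fuzzy skin-color membership in [0, 1].

    cb0/cr0 are assumed chrominance centers for skin; sigma sets how
    quickly membership falls off. All three are illustrative values.
    """
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    d2 = ((cb - cb0) ** 2 + (cr - cr0) ** 2) / (2.0 * sigma ** 2)
    return np.exp(-d2)  # graded "skin-ness" rather than a hard binary mask
```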

3D Facial Synthesis and Animation for Facial Motion Estimation (3D Facial Synthesis and Animation Based on Facial Motion Tracking)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.27 no.6 / pp.618-631 / 2000
  • In this paper, we suggest a method for 3D facial synthesis using the motion of 2D facial images, with an optical-flow-based method for motion estimation. Parameterized motion vectors are extracted via optical flow between adjacent images in the sequence in order to estimate the facial features and the facial motion in the 2D image sequence; the parameters of these motion vectors are then combined into facial motion information. Parameterized vector models are defined per facial feature: an eye-area model, a lip-eyebrow-area model, and a face-area model. By combining the 2D facial motion information with the action units of a 3D facial model, we synthesize the animated 3D facial model. (The optical-flow step is sketched after the PDF link below.)

  • PDF
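
A minimal sketch of the per-region motion-estimation step using OpenCV's Farneback dense optical flow (the paper does not specify this particular flow algorithm); the region box and the mean-flow summary are illustrative.

```python
import cv2

def region_motion(prev_gray, next_gray, region):
    """Mean optical-flow vector inside a facial region of interest.

    region: an (x, y, w, h) box such as an eye or lip-eyebrow area.
    Returns the average (dx, dy) motion of that region between frames.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y, w, h = region
    patch = flow[y:y + h, x:x + w]
    return float(patch[..., 0].mean()), float(patch[..., 1].mean())
```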

Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems / v.9 no.1 / pp.173-188 / 2013
  • In facial expression recognition systems (FERS), only particular regions of the face are utilized for discrimination; the areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS, and applying facial feature descriptors such as the local binary pattern (LBP) to these areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. A one-vs.-rest SVM, a popular multi-class classification method, is employed with the radial basis function (RBF) kernel for facial expression classification (the pipeline is sketched below). Experimental results on the benchmark Cohn-Kanade and JAFFE facial expression datasets show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity.
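
A compact sketch of the described pipeline with scikit-image and scikit-learn; the LBP parameters (P=8, R=1) and the pre-cropped grayscale region inputs are assumptions, and the Haar-cascade cropping step is omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

P, R = 8, 1  # LBP neighbours and radius (typical values, assumed here)

def lbp_histogram(gray_region):
    # Uniform LBP codes, summarized as a normalized histogram.
    codes = local_binary_pattern(gray_region, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def features(eyes, nose, mouth):
    # Concatenate per-region LBP histograms into one feature vector.
    return np.concatenate([lbp_histogram(r) for r in (eyes, nose, mouth)])

# One-vs-rest SVM with an RBF kernel, as named in the abstract:
clf = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale"))
# clf.fit(X_train, y_train); clf.predict(X_test)
```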