• Title/Summary/Keyword: Facial Feature Extraction

Search Results: 160

Automatic Extraction of the Facial Feature Points Using Moving Color (색상 움직임을 이용한 얼굴 특징점 자동 추출)

  • Kim, Nam-Ho;Kim, Hyoung-Gon;Ko, Sung-Jea
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.8
    • /
    • pp.55-67
    • /
    • 1998
  • This paper presents an automatic facial feature point extraction algorithm for sequential color images. To extract the facial region from the video sequence, a moving color detection technique is proposed that emphasizes the moving skin-color region by applying a motion detection algorithm to the skin-color-transformed images. The threshold value for pixel-difference detection is also decided according to the transformed pixel value, which represents the probability of the desired color information. Eye candidate regions are selected using both the black/white color information inside the skin-color region and the valley information of the moving skin region detected using morphological operators. The eye region is finally decided by the geometrical relationship of the eyes and the color histogram. To locate the exact feature points, PCA (Principal Component Analysis) is applied to the eye and mouth regions. Experimental results show that the feature points of the eyes and mouth can be obtained correctly irrespective of the background and the direction and size of the face.

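The moving-color idea in the abstract above can be sketched as follows: transform each frame to a skin-color probability image, frame-difference those images, and threshold adaptively so that high-probability skin pixels need a smaller difference to count as motion. This is a minimal illustration only — the Gaussian skin model, its parameters, and the exact adaptive-threshold rule are assumed stand-ins, not the authors' transform.

```python
import numpy as np

def skin_probability(frame_rgb):
    """Map each pixel to a rough skin-color likelihood in [0, 1].

    Illustrative stand-in for a skin-color transform: a Gaussian
    bump around a crude skin cluster in normalized (r, g) space.
    """
    rgb = frame_rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-8
    r, g = rgb[..., 0] / total, rgb[..., 1] / total
    return np.exp(-(((r - 0.45) / 0.08) ** 2 + ((g - 0.33) / 0.06) ** 2))

def moving_skin_mask(prev_rgb, curr_rgb, base_thresh=0.10):
    """Emphasize *moving* skin pixels: difference the skin-probability
    images of consecutive frames and threshold per pixel, shrinking the
    threshold where the current skin probability is high."""
    p_prev, p_curr = skin_probability(prev_rgb), skin_probability(curr_rgb)
    diff = np.abs(p_curr - p_prev)
    thresh = base_thresh * (1.0 - 0.5 * p_curr)  # adaptive per-pixel threshold
    return (diff > thresh) & (p_curr > 0.5)
```

On a static background, only pixels where skin-colored content appears or moves between frames survive the mask; the paper then searches this moving skin region for eye candidates.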

Automatic Face Identification System Using Adaptive Face Region Detection and Facial Feature Vector Classification

  • Kim, Jung-Hoon;Do, Kyeong-Hoon;Lee, Eung-Joo
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.1252-1255
    • /
    • 2002
  • In this paper, a face recognition algorithm is proposed that uses skin color information in the HSI color coordinate system collected from face images, an elliptical mask, facial features including the eyes, nose, and mouth, and geometrical feature vectors of the face and facial angles. The proposed algorithm improves face region extraction by using HSI information, which is relatively similar to the human visual system, together with color tone information about facial skin colors, the elliptical mask, and intensity information. Moreover, it improves face recognition by using feature information of the eyes, nose, and mouth, along with Θ1 (ACRED), Θ2 (AMRED), and Θ3 (ANRED), which are geometrical face angles. The proposed algorithm enables exact face reading by using color tone information, the elliptical mask, brightness information, and structural characteristic angles together, unlike existing algorithms that use only brightness information. Moreover, the recognition method uses the structurally related values of the characteristics together with the feature vectors.

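The first stage described above — skin-tone detection in HSI intersected with an elliptical face mask — can be sketched like this. The RGB→HSI conversion uses the standard textbook formulas; the hue/saturation thresholds, ellipse parameters, and function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rgb_to_hsi(img):
    """Convert an RGB image (uint8) to HSI: H in degrees [0, 360),
    S and I in [0, 1], using the standard HSI formulas."""
    rgb = img.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / (i + 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)
    return h, s, i

def face_candidate_mask(img, center, axes, h_max=50.0, s_min=0.1, s_max=0.7):
    """Skin-tone pixels (hue/saturation box -- thresholds are illustrative)
    intersected with an elliptical face-region mask."""
    h, s, _ = rgb_to_hsi(img)
    skin = ((h < h_max) | (h > 340.0)) & (s > s_min) & (s < s_max)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    cy, cx = center
    ay, ax = axes
    ellipse = ((yy - cy) / ay) ** 2 + ((xx - cx) / ax) ** 2 <= 1.0
    return skin & ellipse
```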

Face Extraction using Genetic Algorithm, Stochastic Variable and Geometrical Model (유전 알고리즘, 통계적 변수, 기하학적 모델에 의한 얼굴 영역 추출)

  • 이상진;홍준표;이종실;홍승홍
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.891-894
    • /
    • 1998
  • This paper introduces an automatic face region extraction method. The method consists of two parts: face recognition and extraction of the facial organs, namely the eyes, eyebrows, nose, and mouth. In the first stage, we use genetic algorithms (GAs) to find the face region against a complex background. In the second stage, we use a geometrical face model to extract the eyes, eyebrows, nose, and mouth. In both stages, a stochastic component is used to deal with the problems caused by bad lighting conditions; the number of blurring operations is determined according to this value. The average computation time is less than 1 second, and using this method we can extract facial features efficiently from images with different lighting conditions.

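A GA search of the kind mentioned above can be sketched with a small real-coded genetic algorithm: individuals encode candidate region parameters, and a fitness function scores how face-like each candidate is. The operators below (tournament selection, uniform crossover, Gaussian mutation, elitism) are a generic textbook GA, assumed for illustration — the paper's encoding and fitness are not reproduced.

```python
import random

def genetic_search(fitness, bounds, pop_size=30, generations=60,
                   mutation=0.2, seed=0):
    """Minimal real-coded GA. `bounds` is [(lo, hi), ...] per gene;
    `fitness` is maximized; returns the best individual found."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]  # elitism: carry the two best forward unchanged
        while len(nxt) < pop_size:
            # Tournament selection of two parents, then uniform crossover.
            a, b = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            # Gaussian mutation, clipped to the gene bounds.
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.1)))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

For face extraction, the genes would be, e.g., the window's center and size, and the fitness a skin-color or template score over that window.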

Cluster Headache-like Facial Pain following Dental Extraction: A Case Report

  • Byun, Jin-Seok;Jung, Jae-Kwang;Choi, Jae-Kap
    • Journal of Oral Medicine and Pain
    • /
    • v.39 no.3
    • /
    • pp.115-118
    • /
    • 2014
  • A 50-year-old female patient presented with severe unilateral pain in the right eye, head, and face, accompanied by lacrimation and drooping of the right eye and rhinorrhea from the right nostril, which developed immediately after extraction of the maxillary right first and second molars; she was successfully treated with oral administration of sumatriptan and prednisolone, or verapamil. Although the clinical characteristics are similar to those reported in cluster headache except for the temporal features, probable cluster headache, hemicrania continua, and acute migraine headache should be included in the list of differential diagnoses.

Face classification and analysis based on geometrical feature of face (얼굴의 기하학적 특징정보 기반의 얼굴 특징자 분류 및 해석 시스템)

  • Jeong, Kwang-Min;Kim, Jung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.7
    • /
    • pp.1495-1504
    • /
    • 2012
  • This paper proposes an algorithm to classify and analyze facial features such as the eyebrows, eyes, mouth, and chin based on the geometric features of the face. As a preprocessing step for classification and analysis, the algorithm extracts the facial features such as the eyebrows, eyes, nose, mouth, and chin. From the extracted facial features, it detects shape and form information and the ratios of distances between the features, and formulates these into evaluation functions that classify 12 eyebrow types, 3 eye types, 9 mouth types, and 4 chin types. Using these facial features, it then analyzes a face. The face analysis algorithm uses information about the pixel distribution and gradient of each feature; in other words, it analyzes a face by comparing such information across the features.
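Evaluation functions of the kind described above reduce to simple threshold rules over geometric ratios. The functions below are a toy illustration only: the labels and thresholds are invented, and the paper's actual 12/3/9/4-type rules are not reproduced.

```python
def classify_eye(eye_w, eye_h):
    """Classify an eye by its height/width aspect ratio.
    Labels and thresholds are illustrative, not the paper's."""
    ratio = eye_h / eye_w
    if ratio < 0.30:
        return "narrow"
    elif ratio < 0.45:
        return "average"
    return "round"

def classify_mouth(mouth_w, face_w):
    """Classify mouth size by its width relative to the face width
    (thresholds again illustrative)."""
    ratio = mouth_w / face_w
    if ratio < 0.30:
        return "small"
    elif ratio < 0.40:
        return "medium"
    return "wide"
```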

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems
    • /
    • v.17 no.3
    • /
    • pp.556-570
    • /
    • 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolution neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolution neural networks are used to extract spatiotemporal expression features: a spatial convolution neural network extracts the spatial information features of each static expression image, and dynamic information features are extracted from the optical flow of multiple expression images by a temporal convolution neural network. Then, the spatiotemporal features learned by the two deep convolution neural networks are fused by multiplication. Finally, the fused features are input into a support vector machine to perform the facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, which is better than that of the compared methods.
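The multiplicative fusion step above can be sketched in a few lines. The L2-normalization of each stream before the element-wise product is an assumption added here so that neither stream dominates; the abstract only specifies fusion by multiplication.

```python
import numpy as np

def multiplicative_fusion(spatial_feats, temporal_feats, eps=1e-8):
    """Fuse per-clip spatial and temporal CNN feature matrices
    (shape: n_clips x n_dims) by element-wise multiplication,
    L2-normalizing each stream row-wise first."""
    s = spatial_feats / (np.linalg.norm(spatial_feats, axis=1, keepdims=True) + eps)
    t = temporal_feats / (np.linalg.norm(temporal_feats, axis=1, keepdims=True) + eps)
    return s * t
```

The fused rows would then be fed to an SVM classifier (e.g., scikit-learn's `SVC`) for the final expression labels.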

Development of Character Input System using Facial Muscle Signal and Minimum List Keyboard (안면근 신호를 이용한 최소 자판 문자 입력 시스템의 개발)

  • Kim, Hong-Hyun;Park, Hyun-Seok;Kim, Eung-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.10a
    • /
    • pp.289-292
    • /
    • 2009
  • People communicate with each other using language, but a disabled person may be unable to communicate his or her ideas through writing or gesture. Therefore, in this paper, we implemented a communication system using facial muscle signals so that disabled persons can communicate. In particular, after feature extraction from the EEG signal containing facial muscle activity, the facial muscle signal is converted into a control signal, which is then used to select characters and communicate through a minimum-list keyboard.


A Noisy-Robust Approach for Facial Expression Recognition

  • Tong, Ying;Shen, Yuehong;Gao, Bin;Sun, Fenggang;Chen, Rui;Xu, Yefeng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.4
    • /
    • pp.2124-2148
    • /
    • 2017
  • Accurate facial expression recognition (FER) requires reliable signal filtering and effective feature extraction. Considering these requirements, this paper presents a novel approach to FER that is robust to noise. The main contributions of this work are as follows. First, to preserve texture details in facial expression images and remove image noise, we improved the anisotropic diffusion filter by adjusting the diffusion coefficient according to two factors, namely, the gray-value difference between the object and the background and the gradient magnitude of the object. The improved filter can effectively distinguish facial muscle deformation from facial noise in face images. Second, to further improve robustness, we propose a new feature descriptor based on a combination of the Histogram of Oriented Gradients with the Canny operator (Canny-HOG), which can represent the precise deformation of the eyes, eyebrows, and lips for FER. Third, Canny-HOG's block and cell sizes are adjusted to reduce feature dimensionality and make the classifier less prone to overfitting. Our method was tested on images from the JAFFE and CK databases. Experimental results in L-O-Sam-O and L-O-Sub-O modes demonstrate the effectiveness of the proposed method. Meanwhile, the recognition rate of the method is not significantly affected under Gaussian noise and salt-and-pepper noise conditions.
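For context, one step of the anisotropic diffusion filter the paper builds on looks like the sketch below. It uses the standard gradient-dependent coefficient g = exp(-(|∇I|/κ)²); the paper's modified coefficient, which also depends on the object/background gray difference, is not reproduced here.

```python
import numpy as np

def diffuse_step(img, kappa=15.0, lam=0.2):
    """One Perona-Malik-style anisotropic diffusion step on a grayscale
    image: smooth where neighbor differences are small (flat regions),
    diffuse little across large differences (edges)."""
    img = img.astype(np.float64)
    # Neighbor differences (north, south, east, west), zero at the borders.
    dn = np.zeros_like(img); dn[1:, :] = img[:-1, :] - img[1:, :]
    ds = np.zeros_like(img); ds[:-1, :] = img[1:, :] - img[:-1, :]
    de = np.zeros_like(img); de[:, :-1] = img[:, 1:] - img[:, :-1]
    dw = np.zeros_like(img); dw[:, 1:] = img[:, :-1] - img[:, 1:]
    g = lambda d: np.exp(-(np.abs(d) / kappa) ** 2)  # diffusion coefficient
    return img + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
```

Iterating this step suppresses small-amplitude noise while large edges (κ-scale and above) are preserved, which is what makes the filter attractive as FER preprocessing.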

Facial Features Extraction for Sasang Constitution Classification (사상채질 분류를 위한 안면부내 특징 요소 추출)

  • Bae, Na-Yeong;An, Taek-Won;Jo, Dong-Uk;Lee, Hwa-Seop
    • Journal of Sasang Constitutional Medicine
    • /
    • v.17 no.2
    • /
    • pp.46-51
    • /
    • 2005
  • 1. Objectives: The purpose of this study is to objectify the diagnosis of Sasang Constitution; the proposed methods will improve Sasang Constitution classification. 2. Methods: 1) automatic feature extraction from human frontal faces for Sasang Constitution classification; 2) color feature extraction from human frontal faces, using (1) erosion filtering (skin: white, other: black) and (2) median filtering. 3. Results and Conclusions: Observing a person's shape has been the major method of Sasang Constitution classification, and to this day it has usually depended on the doctor's intuition. We are developing an automatic system that provides objective basic data for Sasang Constitution classification. To this end, in this paper, signal processing techniques are first applied to automatic feature extraction from human frontal faces for Sasang Constitution classification. An experiment is conducted to verify the effectiveness of the proposed system.


Development of Facial Emotion Recognition System Based on Optimization of HMM Structure by using Harmony Search Algorithm (Harmony Search 알고리즘 기반 HMM 구조 최적화에 의한 얼굴 정서 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.3
    • /
    • pp.395-400
    • /
    • 2011
  • In this paper, we propose a study of facial emotion recognition that considers the dynamic variation of emotional state in facial image sequences. The proposed system consists of two main steps: facial-image-based emotional feature extraction and emotional state classification/recognition. First, we propose a method for extracting and analyzing the emotional feature region using a combination of the Active Shape Model (ASM) and Facial Action Units (FAUs). We then propose an emotional state classification and recognition method based on the Hidden Markov Model (HMM), a type of dynamic Bayesian network. We also adopt a Harmony Search (HS) algorithm-based heuristic optimization procedure for the parameter learning of the HMM in order to classify the emotional state more accurately. Using all these methods, we construct an emotion recognition system based on variations in the dynamic facial image sequence and attempt to improve the recognition performance.
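The recognition stage described above boils down to training one HMM per emotion and assigning a feature sequence to the highest-scoring model. The sketch below shows that scoring step with a generic discrete HMM and the scaled forward algorithm; the paper's Harmony-Search-optimized HMM structure and its ASM/FAU-derived observation symbols are not reproduced.

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Scaled forward-algorithm log-likelihood of a discrete observation
    sequence under one HMM. start: (S,) initial probabilities,
    trans: (S, S) transition matrix, emit: (S, V) emission matrix,
    obs: list of symbol indices."""
    alpha = start * emit[:, obs[0]]
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha = alpha / scale
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha = alpha / scale
    return loglik

def classify_emotion(obs, models):
    """Assign the sequence to the emotion whose HMM scores it highest.
    `models` maps emotion name -> (start, trans, emit)."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

In the paper's setup, Harmony Search would tune the HMM parameters (and structure) that this scoring function consumes.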