• Title/Summary/Keyword: Facial Feature Extraction


Feature Extraction of Face and Face Elements Using Projection and Correction of Incline (투영과 기울기 보정을 이용한 얼굴 및 얼굴 요소의 특징 추출)

  • 김진태;김동욱;오정수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.3
    • /
    • pp.499-505
    • /
    • 2003
  • This paper proposes methods to extract face elements and facial characteristic points for face recognition. Candidate regions for the face elements are selected using the geometrical information between them inside the face region extracted with skin color, and the elements are then extracted using their inherent features. The facial characteristics applied to face recognition are expressed as geometrical relations, such as distance and angle, between the extracted face elements. Experimental results show good performance in extracting face elements.
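
The title mentions projection-based extraction. As a minimal sketch of that idea (not the paper's exact algorithm), dark facial features such as the eyes and mouth can be localized as minima of a 1-D intensity projection of the face region; the function name and toy data below are illustrative assumptions:

```python
import numpy as np

def projection_minima(gray_face, axis=1, k=2):
    """Locate rows (axis=1) or columns (axis=0) of dark facial features.

    Projects intensity along one axis; local minima of the projection
    correspond to dark bands such as the eyes, eyebrows, or mouth.
    (Illustrative sketch only.)
    """
    proj = gray_face.mean(axis=axis)          # 1-D intensity profile
    # Interior local minima of the profile
    minima = [i for i in range(1, len(proj) - 1)
              if proj[i] < proj[i - 1] and proj[i] <= proj[i + 1]]
    # Keep the k darkest minima, sorted by position
    return sorted(sorted(minima, key=lambda i: proj[i])[:k])

# Toy face: bright background with two dark horizontal bands (eyes, mouth)
face = np.full((100, 100), 200.0)
face[30:35, :] = 50.0    # "eye" band
face[70:75, :] = 60.0    # "mouth" band
print(projection_minima(face, axis=1, k=2))   # [30, 70]
```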

A study of face detection using color component (색상요소를 고려한 얼굴검출에 대한 연구)

  • 이정하;강진석;최연성;김장형
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.11a
    • /
    • pp.240-243
    • /
    • 2002
  • In this paper, we propose a face region detection method based on skin-color distribution, together with a facial feature extraction algorithm, for color still images. To extract the face region, we transform the color space using a general skin-color distribution. Facial features are then extracted by edge transformation. The detection process reduces calculation time by a scaled-down scan of the segmented region. The method can detect face regions in images with various facial expressions, skin-color differences, and tilted faces.
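
Skin-color segmentation of the kind this abstract describes is commonly done by thresholding in a chroma space. A minimal sketch, assuming a YCbCr transform and the widely cited Cb/Cr skin ranges (the paper's exact distribution model may differ):

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of skin-colored pixels via a YCbCr chroma threshold.

    The Cb in [77, 127] and Cr in [133, 173] ranges are commonly cited
    for a general skin-color distribution (illustrative assumption).
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # BT.601 RGB -> Cb/Cr conversion
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

# A skin-toned pixel vs. a pure-blue pixel
patch = np.array([[[220, 170, 140], [0, 0, 255]]], dtype=np.uint8)
print(skin_mask(patch))   # [[ True False]]
```

Connected regions of the mask would then be the face candidates passed to the edge-based feature extraction step.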

A Study on the Feature Point Extraction and Image Synthesis in the 3-D Model Based Image Transmission System (3차원 모델 기반 영상전송 시스템에서의 특징점 추출과 영상합성 연구)

  • 배문관;김동호;정성환;김남철;배건성
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.7
    • /
    • pp.767-778
    • /
    • 1992
  • A method to extract feature points and to synthesize human facial images in a 3-D model-based coding system is discussed. Facial feature points are extracted automatically using image processing techniques and known knowledge of the human face. A wire frame model matched to the human face is transformed according to the motion of the extracted feature points. The synthesized image is produced by mapping the texture of the initial front-view image onto the transformed wire frame. Experimental results show that the synthesized image appears with little unnaturalness.

Face Recognition Using Feature Information and Neural Network

  • Chung, Jae-Mo;Bae, Hyeon;Kim, Sung-Shin
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.55.2-55
    • /
    • 2001
  • Statistical feature extraction and neural networks are proposed to recognize a human face. In the preprocessing step, a normalized skin color map with Gaussian functions is employed to extract the face candidate region. The feature information in the face candidate region is then used to detect a face region. In the recognition step, as a test, 360 images of 30 persons are trained by the backpropagation algorithm. The images of each person are obtained from various directions, poses, and facial expressions. The input variables of the neural networks are the feature information from the eigenface spaces. The simulation results for 30 persons show that the proposed method yields high recognition rates.
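
The eigenface features mentioned as the network inputs are PCA coefficients of the face image. A minimal sketch of that projection step (the component count and data here are assumptions; the backpropagation classifier is omitted):

```python
import numpy as np

def eigenface_features(train_imgs, test_img, n_components=4):
    """Project a face image into an eigenface space (PCA via SVD).

    The resulting coefficients would serve as input variables to a
    backpropagation network, as in the paper (illustrative sketch).
    """
    X = train_imgs.reshape(len(train_imgs), -1).astype(float)
    mean = X.mean(axis=0)
    # Rows of Vt are the eigenfaces (principal directions)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]
    return basis @ (test_img.ravel() - mean)

rng = np.random.default_rng(0)
train = rng.random((10, 8, 8))        # stand-in for training face images
feats = eigenface_features(train, train[0], n_components=4)
print(feats.shape)   # (4,)
```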

Facial Feature Extraction Using Energy Probability in Frequency Domain (주파수 영역에서 에너지 확률을 이용한 얼굴 특징 추출)

  • Choi Jean;Chung Yns-Su;Kim Ki-Hyun;Yoo Jang-Hee
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.4 s.310
    • /
    • pp.87-95
    • /
    • 2006
  • In this paper, we propose a novel feature extraction method for face recognition based on the Discrete Cosine Transform (DCT), Energy Probability (EP), and Linear Discriminant Analysis (LDA). We define the energy probability as the magnitude of effective information, and it is used to create a frequency mask in the DCT domain. The feature extraction method consists of three steps: i) the spatial domain of the face images is transformed into the frequency (DCT) domain; ii) the energy probability is applied in the DCT domain acquired from the face image, for dimension reduction of the data and optimization of the valid information; iii) to obtain the most significant and invariant features of the face images, LDA is applied to the data extracted using the frequency mask. In experiments, the recognition rate is 96.8% on the ETRI database and 100% on the ORL database. The proposed method shows improvements in the dimension reduction of the feature space and in face recognition over previously proposed methods.
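
Steps i) and ii) can be sketched as follows: compute the 2-D DCT of each training face, average the squared coefficients into an energy probability, and keep the highest-probability frequency bins as the mask. This is an illustrative reading of the EP step under assumed parameters (the LDA stage and the paper's exact normalization are omitted):

```python
import numpy as np

def dct2(x):
    """Orthonormal 2-D DCT-II of a square image, via a cosine basis matrix."""
    n = x.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C @ x @ C.T

def energy_mask(train_imgs, keep=16):
    """Frequency mask keeping the `keep` highest energy-probability bins.

    Energy probability = normalized average squared DCT magnitude over
    the training set (illustrative interpretation).
    """
    energy = np.mean([dct2(im) ** 2 for im in train_imgs], axis=0)
    prob = energy / energy.sum()
    thresh = np.sort(prob.ravel())[-keep]
    return prob >= thresh

rng = np.random.default_rng(1)
faces = rng.random((20, 8, 8))        # stand-in for normalized face images
mask = energy_mask(faces, keep=16)
# Masked DCT coefficients form the reduced feature vector fed to LDA
features = dct2(faces[0])[mask]
print(mask.sum(), features.shape)
```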

Vector-based Face Generation using Montage and Shading Method (몽타주 기법과 음영합성 기법을 이용한 벡터기반 얼굴 생성)

  • 박연출;오해석
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.6
    • /
    • pp.817-828
    • /
    • 2004
  • In this paper, we propose a vector-based face generation system that uses montage and shading methods and preserves the designer's (artist's) style. The proposed system automatically generates a character's face similar to the human face, using facial features extracted from a photograph. In addition, unlike previous face generation systems that use contours, the proposed system is based on color and composes the face from the facial features and shading extracted from a photograph. Thus, it can produce a more realistic face, closer to the human face. Since the system is vector-based, the generated character's face has no size limit or constraint, so its shape can be transformed freely and various facial expressions can be applied to the 2D face. Moreover, it is distinct from other approaches in that it preserves the artist's impression in the result.

Development of Character Input System using Facial Muscle Signal and Minimum List Keyboard (안면근 신호를 이용한 최소 자판 문자 입력 시스템의 개발)

  • Kim, Hong-Hyun;Kim, Eung-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.6
    • /
    • pp.1338-1344
    • /
    • 2010
  • People communicate with each other using language, but a disabled person may be unable to convey ideas through writing or gesture. Therefore, in this paper, we implemented a communication system using facial muscle signals so that disabled persons can communicate. In particular, after feature extraction from the EEG signals that include facial muscle activity, the facial muscle signals are converted into control signals, which are then used to select characters and communicate through a minimum list keyboard.
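
A "minimum list keyboard" driven by a single control signal is typically a scanning keyboard: one muscle-derived trigger steps through rows, another selects within a row. The layout and signal encoding below are assumptions for illustration, not the paper's design:

```python
# Illustrative row-column scanning over a minimal keyboard layout:
# a muscle-derived trigger first steps to a row, then to a character.
ROWS = ["ABCDE", "FGHIJ", "KLMNO", "PQRST", "UVWXYZ"]

def select(row_steps, col_steps):
    """Each 'step' models one facial-muscle trigger advancing the cursor."""
    row = ROWS[row_steps % len(ROWS)]
    return row[col_steps % len(row)]

# Two triggers reach row "KLMNO"; three more within the row reach 'N'
print(select(2, 3))   # N
```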

Person-Independent Facial Expression Recognition with Histograms of Prominent Edge Directions

  • Makhmudkhujaev, Farkhod;Iqbal, Md Tauhid Bin;Arefin, Md Rifat;Ryu, Byungyong;Chae, Oksam
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.12
    • /
    • pp.6000-6017
    • /
    • 2018
  • This paper presents a new descriptor, named Histograms of Prominent Edge Directions (HPED), for the recognition of facial expressions in a person-independent environment. In this paper, we raise the issue of sampling error in generating the code-histogram from spatial regions of the face image, as observed in the existing descriptors. HPED describes facial appearance changes based on the statistical distribution of the top two prominent edge directions (i.e., primary and secondary direction) captured over small spatial regions of the face. Compared to existing descriptors, HPED uses a smaller number of code-bins to describe the spatial regions, which helps avoid sampling error despite having fewer samples while preserving the valuable spatial information. In contrast to the existing Histogram of Oriented Gradients (HOG) that uses the histogram of the primary edge direction (i.e., gradient orientation) only, we additionally consider the histogram of the secondary edge direction, which provides more meaningful shape information related to the local texture. Experiments on popular facial expression datasets demonstrate the superior performance of the proposed HPED against existing descriptors in a person-independent environment.
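
A simplified reading of the HPED idea: within each small spatial region, build a weighted edge-direction histogram and retain only the two most prominent bins (primary and secondary direction), giving a sparse, few-bin code. The bin count, weighting, and secondary-direction handling below are assumptions, not the published descriptor:

```python
import numpy as np

def hped_cell(gray_cell, bins=8):
    """Code for one cell from its top-two prominent edge directions.

    Accumulates a magnitude-weighted direction histogram, then keeps
    only the two strongest bins (simplified sketch of HPED).
    """
    gy, gx = np.gradient(gray_cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned directions
    q = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.bincount(q.ravel(), weights=mag.ravel(), minlength=bins)
    top2 = np.argsort(hist)[-2:]                     # primary + secondary bins
    code = np.zeros(bins)
    code[top2] = hist[top2]
    return code / (code.sum() + 1e-9)

# A pure horizontal intensity ramp: all edge energy in direction bin 0
cell = np.tile(np.arange(8.0), (8, 1))
print(int(np.argmax(hped_cell(cell))))   # 0
```

Concatenating such per-cell codes over the face would give the final descriptor fed to a classifier.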

Implementation of Drowsiness Driving Warning System based on Improved Eyes Detection and Pupil Tracking Using Facial Feature Information (얼굴 특징 정보를 이용한 향상된 눈동자 추적을 통한 졸음운전 경보 시스템 구현)

  • Jeong, Do Yeong;Hong, KiCheon
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.5 no.2
    • /
    • pp.167-176
    • /
    • 2009
  • In this paper, a system that detects the driver's drowsiness has been implemented based on the automatic extraction and tracking of pupils. The research also focuses on compensating for illumination and reducing the background noise that naturally exists in driving conditions. The system, based on Haar-like features, automatically extracts the driver's face and eye regions from the complex background. It then decides on the driver's drowsiness by recognizing characteristics of the pupil area, detecting the pupils, and tracking their movements. The implemented system has been evaluated and verified for practical use in preventing drowsy driving.
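
The final decision step of such a system is often a PERCLOS-style rule: raise an alarm when the fraction of eye-closed frames in a sliding window exceeds a threshold. The window size and threshold below are assumptions; the abstract does not specify the paper's exact criterion:

```python
from collections import deque

class DrowsinessMonitor:
    """Flags drowsiness when the eye-closed ratio over a sliding window
    of frames exceeds a threshold (a PERCLOS-style decision rule;
    parameters here are illustrative assumptions)."""

    def __init__(self, window=30, closed_ratio=0.7):
        self.frames = deque(maxlen=window)
        self.closed_ratio = closed_ratio

    def update(self, eyes_closed: bool) -> bool:
        """Feed one frame's pupil-detection result; returns alarm state."""
        self.frames.append(eyes_closed)
        if len(self.frames) < self.frames.maxlen:
            return False                      # window not yet full
        return sum(self.frames) / len(self.frames) >= self.closed_ratio

mon = DrowsinessMonitor(window=10, closed_ratio=0.7)
alarms = [mon.update(closed) for closed in [False] * 3 + [True] * 12]
print(alarms[-1])   # True
```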

Normalized Region Extraction of Facial Features by Using Hue-Based Attention Operator (색상기반 주목연산자를 이용한 정규화된 얼굴요소영역 추출)

  • 정의정;김종화;전준형;최흥문
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.6C
    • /
    • pp.815-823
    • /
    • 2004
  • A hue-based attention operator and a combinational integral projection function (CIPF) are proposed to extract normalized regions of the face and facial features robustly against illumination variation. The face candidate regions are efficiently detected using a skin color filter, and the eyes are located accurately and robustly against illumination variation by applying the proposed hue- and symmetry-based attention operator to the face candidate regions. The faces are then confirmed by verifying the eyes with a color-based eye variance filter. The proposed CIPF, which combines weighted hue and intensity, is applied to detect the accurate vertical locations of the eyebrows and the mouth under illumination variations and in the presence of a mustache. The global face and its local feature regions are located exactly and normalized based on this accurate geometrical information. Experimental results on the AR face database [8] show that the proposed eye detection method yields a detection rate about 39.3% better than the conventional gray GST-based method. As a result, the normalized facial features can be extracted robustly and consistently based on the exact eye locations under illumination variations.
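
A combined hue-and-intensity projection of the kind the abstract describes can be sketched as a weighted mix of two 1-D row profiles, whose valleys mark the vertical positions of features such as the mouth. The weighting and normalization below are assumptions, not the published CIPF definition:

```python
import numpy as np

def cipf(hue, intensity, w=0.3, axis=1):
    """Combined projection of weighted hue and intensity row profiles.

    An illustrative reading of a combinational integral projection
    function; the mixing scheme here is an assumption.
    """
    h = hue.mean(axis=axis)
    v = intensity.mean(axis=axis)
    # Normalize each profile before mixing so neither channel dominates
    h = (h - h.min()) / (h.max() - h.min() + 1e-9)
    v = (v - v.min()) / (v.max() - v.min() + 1e-9)
    return w * h + (1 - w) * v

# Dark, strongly-hued "mouth" band inside a brighter face patch
hue = np.full((60, 60), 0.1); hue[40:45, :] = 0.9
val = np.full((60, 60), 0.8); val[40:45, :] = 0.2
profile = cipf(hue, val, w=0.3)
print(int(np.argmin(profile)))   # 40 (top row of the band)
```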