• Title/Summary/Keyword: Face Normalization (얼굴정규화)


Face Image Illumination Normalization based on Illumination-Separated Eigenface Subspace (조명분리 고유얼굴 부분공간 기반 얼굴 이미지 조명 정규화)

  • Seol, Tae-in;Chung, Sun-Tae;Ki, Sunho;Cho, Seongwon
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.179-184 / 2009
  • Robust face recognition under varying illumination is difficult to achieve. For face recognition robust to illumination changes, face images are usually normalized with respect to illumination as a preprocessing step. The anisotropic smoothing-based illumination normalization method, known as one of the best illumination normalization methods, cannot handle cast shadows. In this paper, we present an efficient illumination normalization method for face recognition. The proposed method separates the effect of illumination from the eigenfaces and constructs an illumination-separated eigenface subspace. An incoming face image is then projected into this subspace, and the projected face image is rendered so that illumination effects, including cast shadows, are reduced as much as possible. Experiments on real face images show the effectiveness of the proposed illumination normalization method.

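
The eigenface-subspace projection that the abstract builds on can be sketched in a few lines; this is a plain PCA eigenface construction on toy data, not the paper's illumination-separated subspace:

```python
import numpy as np

def build_eigenfaces(faces, k):
    """faces: (n_samples, n_pixels) array; returns the mean face and top-k eigenfaces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # The right singular vectors of the centered data are the principal axes (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project_and_reconstruct(face, mean, eigenfaces):
    """Project a face into the eigenface subspace and render it back."""
    coeffs = eigenfaces @ (face - mean)
    return mean + eigenfaces.T @ coeffs

rng = np.random.default_rng(0)
train = rng.normal(size=(20, 64))      # 20 toy "faces" of 64 pixels each
mean, ef = build_eigenfaces(train, k=5)
recon = project_and_reconstruct(train[0], mean, ef)
```

The paper's method would additionally remove illumination-related components from the subspace before reconstruction.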

Eye Detection Based on Texture Information (텍스처 기반의 눈 검출 기법)

  • Park, Chan-Woo;Park, Hyun;Moon, Young-Shik
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.315-318 / 2007
  • Research areas involving facial images, such as automatic face recognition and facial expression recognition, generally require normalization of the input face image. Because the human face varies widely in shape with expression, illumination, and other factors, finding accurate representative feature points in every input image is a difficult problem. Closed or small eyes in particular are hard to detect, and are a major cause of performance degradation in face-related research. To detect eyes robustly under such variations, this paper proposes an eye detection method that uses the texture information of the eyes. We define the texture characteristics of eyes within the face region and design two types of Eye filters. The proposed method consists of four stages: Adaboost-based face region detection, illumination normalization, eye candidate region detection using the Eye filters, and eye point localization. Experimental results show that the proposed method is robust to facial pose, expression, and illumination conditions, and remains robust even for images of closed eyes.

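
As a rough illustration of the texture-based eye filtering in the four-stage pipeline above, the toy sketch below scores dark, locally contrasting pixels; the Adaboost face detector and the paper's actual Eye filters are assumed external and are not reproduced here:

```python
import numpy as np

def normalize_illumination(img):
    """Stretch intensities to [0, 1] as a stand-in for the illumination step."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9)

def eye_filter_response(img, r=2):
    """Score each pixel by how much darker it is than its neighborhood mean."""
    h, w = img.shape
    resp = np.zeros_like(img)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            resp[y, x] = patch.mean() - img[y, x]   # dark center -> high score
    return resp

img = np.ones((16, 16))
img[5, 5] = 0.0                          # toy dark "pupil"
resp = eye_filter_response(normalize_illumination(img))
peak = np.unravel_index(resp.argmax(), resp.shape)   # candidate eye location
```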

Normalized Region Extraction of Facial Features by Using Hue-Based Attention Operator (색상기반 주목연산자를 이용한 정규화된 얼굴요소영역 추출)

  • 정의정;김종화;전준형;최흥문
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.6C / pp.815-823 / 2004
  • A hue-based attention operator and a combinational integral projection function (CIPF) are proposed to extract the normalized regions of the face and facial features robustly against illumination variation. Face candidate regions are efficiently detected using a skin color filter, and the eyes are located accurately and robustly against illumination variation by applying the proposed hue- and symmetry-based attention operator to the face candidate regions. The faces are then confirmed by verifying the eyes with a color-based eye variance filter. The proposed CIPF, which combines weighted hue and intensity, is applied to detect the accurate vertical locations of the eyebrows and the mouth under illumination variations and in the presence of a mustache. The global face and its local feature regions are precisely located and normalized based on this geometrical information. Experimental results on the AR face database [8] show that the proposed eye detection method yields a detection rate about 39.3% higher than the conventional gray GST-based method. As a result, the normalized facial features can be extracted robustly and consistently based on the exact eye locations under illumination variations.
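
An integral projection function of the kind the CIPF generalizes can be sketched as follows; the weighted combination of intensity and hue mirrors the abstract's description, but the weights and data here are illustrative, not the paper's:

```python
import numpy as np

def vertical_ipf(channel):
    """Mean of each row: dark horizontal features (eyebrows, mouth) produce dips."""
    return channel.mean(axis=1)

def combined_ipf(intensity, hue, w_int=0.6, w_hue=0.4):
    """Weighted combination of intensity and hue row projections (illustrative weights)."""
    return w_int * vertical_ipf(intensity) + w_hue * vertical_ipf(hue)

inten = np.ones((10, 8))
inten[3, :] = 0.0                     # dark row mimicking an eyebrow
hue = np.ones((10, 8)) * 0.5
profile = combined_ipf(inten, hue)
row = int(profile.argmin())           # minimum marks the feature's vertical location
```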

Facial Feature Extraction using Nasal Masks from 3D Face Image (코 형상 마스크를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.1-7 / 2004
  • This paper proposes a new method for facial feature extraction, which can be used to normalize face images for 3D face recognition. 3D images are much less sensitive to illumination sources than intensity images, so they make it possible to recognize individuals reliably. However, input face images may have variable poses such as rotating, panning, and tilting. If these variations are not considered, incorrect features may be extracted, and the face recognition system will then produce poor matches. It is therefore necessary to normalize an input image in size and orientation. Geometrical facial features such as the nose, eyes, and mouth are generally used in face image normalization. In particular, the nose is the most prominent feature in a 3D face image, so this paper describes a nose feature extraction method using 3D nasal masks that resemble the real nasal shape.
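
The nasal-mask idea can be illustrated with a toy depth map and a synthetic nose-shaped template matched by sum-of-squared differences; the paper's actual 3D masks and matching criterion are not reproduced here:

```python
import numpy as np

def match_template_ssd(depth, template):
    """Slide the template over the depth map; return the minimum-SSD position."""
    th, tw = template.shape
    h, w = depth.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            ssd = ((depth[y:y + th, x:x + tw] - template) ** 2).sum()
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

nose = np.array([[0, 1, 0],
                 [1, 3, 1],
                 [0, 1, 0]], float)    # protruding peak, loosely like a nose tip
depth = np.zeros((12, 12))
depth[6:9, 4:7] = nose                 # embed the "nose" at (6, 4)
pos = match_template_ssd(depth, nose)  # -> (6, 4)
```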

A Study on Appearance-Based Facial Expression Recognition Using Active Shape Model (Active Shape Model을 이용한 외형기반 얼굴표정인식에 관한 연구)

  • Kim, Dong-Ju;Shin, Jeong-Hoon
    • KIPS Transactions on Software and Data Engineering / v.5 no.1 / pp.43-50 / 2016
  • This paper introduces an appearance-based facial expression recognition method using ASM landmarks, which are used to acquire a detailed face region. In particular, an EHMM-based algorithm and an SVM classifier with histogram features are employed for appearance-based facial expression recognition, and the performance of the proposed method was evaluated on the CK and JAFFE facial expression databases. In addition, the method was compared with a distance-based face normalization method and with a geometric feature-based facial expression approach that employs the geometrical features of ASM landmarks and the SVM algorithm. The proposed method using ASM-based face normalization showed performance improvements of 6.39% and 7.98% over the previous distance-based face normalization method on the CK and JAFFE databases, respectively. It also outperformed the geometric feature-based facial expression approach, confirming its effectiveness.

Face recognition rate comparison using Principal Component Analysis in Wavelet compression image (Wavelet 압축 영상에서 PCA를 이용한 얼굴 인식률 비교)

  • 박장한;남궁재찬
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.5 / pp.33-40 / 2004
  • In this paper, we construct a face database using wavelet compression and compare face recognition rates using the principal component analysis (PCA) algorithm. The conventional face recognition method constructs a database and performs face recognition using images of normalized size. The proposed method applies one, two, and three levels of wavelet compression to images of normalized size (92×112) and constructs the database from the compressed images. Input images are compressed by the wavelet transform and recognized with the PCA algorithm. Experiments show that the proposed method reduces the amount of information in existing face images and improves processing speed. The original images showed a recognition rate of about 99.05%, with 99.05% at one compression level, 98.93% at two levels, and 98.54% at three levels, demonstrating that face recognition is feasible even when constructing a large-scale face database.
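
The compression-then-PCA pipeline can be sketched with a single Haar level (keeping only the LL approximation band) followed by PCA on toy data; the sizes here are illustrative, not the paper's 92×112 faces:

```python
import numpy as np

def haar_ll(img):
    """One Haar decomposition level: average each 2x2 block (the LL band)."""
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def pca_fit(X, k):
    """Return the mean and the top-k principal axes of the rows of X."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

rng = np.random.default_rng(1)
faces = rng.normal(size=(12, 8, 8))                          # 12 toy 8x8 "faces"
compressed = np.array([haar_ll(f).ravel() for f in faces])   # 12 x 16 after one level
mean, axes = pca_fit(compressed, k=4)
features = (compressed - mean) @ axes.T                      # PCA features for matching
```

Each additional wavelet level would quarter the data again before the PCA step, which is where the speedup reported in the abstract comes from.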

Face Recognition under Varying Pose using Local Area obtained by Side-view Pose Normalization (측면 포즈정규화를 통한 부분 영역을 이용한 포즈 변화에 강인한 얼굴 인식)

  • Ahn, Byeong-Doo;Ko, Han-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.4 s.304 / pp.59-68 / 2005
  • This paper proposes a face recognition method robust to varying poses, using local areas obtained by side-view pose normalization. General normalization methods for face recognition under varying pose have a problem with the invisible areas of the face. This problem is usually solved by compensation, but in many cases the image is distorted or features are lost in the process. To solve this problem, we normalize the face pose to a side view to reduce the distortion that occurs mainly in areas with large depth variation, and we use only the undistorted area, removing regions that have been distorted by normalization. We consider both yaw and pitch pose variations, and experiments confirm the improvement in recognition performance.

A Face Detection using Pupil-Template from Color Base Image (컬러 기반 영상에서 눈동자 템플릿을 이용한 얼굴영상 추출)

  • Choi, Ji-Young;Kim, Mi-Kyung;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.828-831 / 2005
  • In this paper, we propose a method to detect human faces in color images using pupil-template matching. Face detection is performed in three stages: (i) separating skin regions from non-skin regions; (ii) generating face regions by fitting a best-fit ellipse; and (iii) detecting the face by pupil-template matching. Skin region detection is based on a skin color model: we generate a gray-scale image from the original image using the skin model, and the gray-scale image is segmented to separate skin regions from non-skin regions. The face region is modeled by a best-fit ellipse computed from image moments. The generated face regions are then matched against the pupil template, and the face is detected.

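
The best-fit-ellipse stage can be sketched from the image moments of a binary skin mask: the centroid gives the ellipse center and the second-order central moments give its orientation. The skin-color model and pupil template are assumed external:

```python
import numpy as np

def ellipse_from_moments(mask):
    """Center and major-axis orientation of the best-fit ellipse of a binary region."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Standard orientation formula from second-order central moments.
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), theta

mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 4:16] = True                  # wide blob: major axis is horizontal
center, angle = ellipse_from_moments(mask)
```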

SVM Kernel Design Using Local Feature Analysis (지역특징분석을 이용한 SVM 커널 디자인)

  • Lee, Il-Yong;Ahn, Jung-Ho
    • Journal of Digital Contents Society / v.11 no.1 / pp.17-24 / 2010
  • The purpose of this study is to design and implement a kernel for the support vector machine (SVM) to improve the performance of face recognition. Local feature analysis (LFA) is well known for its good performance. A standard SVM kernel plays only the limited role of mapping low-dimensional face features into a high-dimensional feature space, whereas the proposed kernel using LFA is designed specifically for face recognition. Because local face information is extracted from the training set and combined into the kernel, this method is expected to apply to various object recognition and detection tasks. The experimental results show its improved performance.
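
A face-oriented kernel in the spirit of the abstract can be sketched as a sum of per-region RBF kernels, a loose stand-in for the LFA-derived kernel; the region split and gamma value are illustrative choices, not the paper's:

```python
import numpy as np

def local_region_kernel(X, Y, n_regions=4, gamma=0.5):
    """Sum of per-region RBF kernels; a sum of valid kernels is itself valid."""
    Xr = np.array_split(X, n_regions, axis=1)
    Yr = np.array_split(Y, n_regions, axis=1)
    K = np.zeros((X.shape[0], Y.shape[0]))
    for xr, yr in zip(Xr, Yr):
        # Pairwise squared distances within this local feature region.
        d2 = ((xr[:, None, :] - yr[None, :, :]) ** 2).sum(axis=2)
        K += np.exp(-gamma * d2)
    return K

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 8))              # 6 toy face-feature vectors
K = local_region_kernel(X, X)            # symmetric Gram matrix
```

A callable with this `(X, Y)` signature can be passed directly as the `kernel` argument of scikit-learn's `SVC`, which is one way such a custom kernel is typically plugged into an SVM.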

Development of Virtual Makeup Tool based on Mobile Augmented Reality

  • Song, Mi-Young;Kim, Young-Sun
    • Journal of the Korea Society of Computer and Information / v.26 no.1 / pp.127-133 / 2021
  • In this study, an augmented reality-based makeup tool was built to analyze the user's face shape against face-type reference model data and to provide virtual makeup suited to that face type. To analyze the face shape, the face is first recognized in the image captured by the camera, and the features of the face contour area are extracted and used as analysis properties. Next, the extracted contour feature points are normalized for comparison with the contour characteristics of each face-type reference model. The face shape is then predicted and analyzed using the distances between the normalized contour feature points and the feature points of each face-type reference model. In the augmented reality-based virtual makeup, the face is recognized in real time in the camera image and the features of each facial area are extracted; after the face-type analysis, the user can see the result of virtual makeup that matches the analyzed face shape. Through the proposed system, we expect cosmetics consumers to be able to check makeup designs that suit them, conveniently influencing their decisions to purchase cosmetics. It will also help users create an attractive self-image by applying facial makeup to their virtual selves.
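
The face-shape analysis step described above can be sketched as landmark normalization followed by nearest-reference matching; the reference models and landmark sets below are toy data, not the paper's:

```python
import numpy as np

def normalize_landmarks(pts):
    """Translate landmarks to their centroid and scale to unit size."""
    pts = pts - pts.mean(axis=0)
    scale = np.sqrt((pts ** 2).sum())
    return pts / (scale + 1e-12)

def classify_face_shape(contour, references):
    """references: dict of name -> (n_points, 2) landmark array; return closest name."""
    c = normalize_landmarks(contour)
    dists = {name: np.linalg.norm(c - normalize_landmarks(ref))
             for name, ref in references.items()}
    return min(dists, key=dists.get)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
wide = np.array([[0, 0], [2, 0], [2, 1], [0, 1]], float)
refs = {"round": square, "oval": wide}
probe = square * 3.0 + 5.0               # same shape, shifted and scaled
shape = classify_face_shape(probe, refs)  # -> "round"
```

Normalizing before comparison is what makes the match invariant to the position and size of the face in the camera image.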