• Title/Summary/Keyword: Eigenface (고유얼굴)

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.563-570
    • /
    • 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. 23 facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using the paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
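
As a rough illustration of the factorization step described above (not the authors' code), the sketch below applies a rank-3 SVD factorization to a 2F x P measurement matrix of tracked feature coordinates, in the spirit of Tomasi-Kanade-style methods; the paraperspective camera constraints and the metric-upgrade step are omitted, and all names and sizes are illustrative.

```python
import numpy as np

def factorize_measurements(W):
    """Rank-3 SVD factorization of a 2F x P measurement matrix W, where rows
    0..F-1 hold x-coordinates and rows F..2F-1 hold y-coordinates of P tracked
    features over F frames."""
    # Register measurements to their per-frame centroid (removes translation).
    W_centered = W - W.mean(axis=1, keepdims=True)

    # SVD and rank-3 truncation: W ~ M @ S (motion times shape, up to an affine ambiguity).
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])            # 2F x 3 affine motion
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # 3 x P affine shape
    return M, S

# Example with random data: 10 frames, 23 features (as many as the MPEG-4 FDP points used).
W = np.random.rand(2 * 10, 23)
M, S = factorize_measurements(W)
print(M.shape, S.shape)  # (20, 3) (3, 23)
```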

Face Recognition by Combining Linear Discriminant Analysis and Radial Basis Function Network Classifiers (선형판별법과 레이디얼 기저함수 신경망 결합에 의한 얼굴인식)

  • Oh Byung-Joo
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.6
    • /
    • pp.41-48
    • /
    • 2005
  • This paper presents a face recognition method based on the combination of the well-known statistical representations of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) with Radial Basis Function Networks. The original face image is first processed by PCA to reduce the dimension and thereby avoid the singularity of the within-class scatter matrix in the LDA calculation. The result of the PCA process is applied to the LDA classifier. In the second approach, the LDA process produces discriminant features of the face image, which are taken as the input of the Radial Basis Function Network (RBFN). The proposed approaches have been tested on the ORL face database. The experimental results demonstrate that a recognition rate of more than 93.5% is achieved.
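
A minimal sketch of the PCA → LDA → RBF-network pipeline, assuming scikit-learn and its Olivetti faces loader (the AT&T/ORL data); the RBF network here is a simple stand-in built from k-means centers, Gaussian hidden units, and least-squares output weights, not the authors' exact classifier.

```python
import numpy as np
from sklearn.datasets import fetch_olivetti_faces   # ORL/AT&T-style faces
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, y = fetch_olivetti_faces(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# PCA first, so the within-class scatter matrix in LDA is non-singular.
pca = PCA(n_components=60, whiten=True).fit(X_tr)
lda = LinearDiscriminantAnalysis().fit(pca.transform(X_tr), y_tr)
F_tr, F_te = lda.transform(pca.transform(X_tr)), lda.transform(pca.transform(X_te))

# Minimal RBF network: k-means centers, Gaussian hidden units, least-squares output weights.
centers = KMeans(n_clusters=80, n_init=10, random_state=0).fit(F_tr).cluster_centers_
sigma = np.mean(np.linalg.norm(F_tr[:, None] - centers[None], axis=2))

def hidden(F):
    d2 = ((F[:, None] - centers[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

T = np.eye(len(np.unique(y_tr)))[y_tr]                  # one-hot targets
W, *_ = np.linalg.lstsq(hidden(F_tr), T, rcond=None)    # output weights
acc = ((hidden(F_te) @ W).argmax(axis=1) == y_te).mean()
print("accuracy:", acc)
```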

Illumination Robust Face Recognition using Ridge Regressive Bilinear Models (Ridge Regressive Bilinear Model을 이용한 조명 변화에 강인한 얼굴 인식)

  • Shin, Dong-Su;Kim, Dai-Jin;Bang, Sung-Yang
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.1
    • /
    • pp.70-78
    • /
    • 2007
  • The performance of face recognition is greatly affected by illumination because intra-person variation under different lighting conditions can be much larger than inter-person variation. In this paper, we propose an illumination-robust face recognition method that separates the identity factor and the illumination factor using symmetric bilinear models. The translation procedure in the bilinear model requires a repetitive computation of matrix inverse operations to reach the identity and illumination factors. This computation may fail to converge when the observation contains noisy information. To alleviate this situation, we suggest a ridge regressive bilinear model that combines ridge regression with the bilinear model. This combination provides some advantages: it makes the bilinear model more stable by shrinking the range of the identity and illumination factors appropriately, and it improves recognition performance by reducing insignificant factors effectively. Experimental results show that the ridge regressive bilinear model significantly outperforms other existing methods such as the eigenface, the quotient image, and the plain bilinear model in terms of recognition rate under a variety of illuminations.
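
The ridge-regularized update at the heart of this idea can be sketched as below; this is a generic alternating bilinear fit with a ridge least-squares step on a toy tensor, written for illustration only, not the authors' symmetric bilinear translation procedure.

```python
import numpy as np

def ridge_solve(A, b, lam=1e-2):
    """Ridge-regularized least squares: argmin_x ||A x - b||^2 + lam * ||x||^2.
    The lam * I term keeps the inversion well conditioned even when A is
    nearly rank-deficient (e.g. noisy observations) and shrinks the factor."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy alternating fit of a bilinear observation y[p] = sum_ij W[p,i,j] a[i] b[j]:
# fix the illumination factor, solve for identity with ridge, then swap roles.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 5, 4))              # interaction tensor: pixels x identity x illumination
a_true, b_true = rng.normal(size=5), rng.normal(size=4)
y = np.einsum('pij,i,j->p', W, a_true, b_true) + 0.01 * rng.normal(size=64)

b_est = rng.normal(size=4)
for _ in range(20):
    A_id = np.einsum('pij,j->pi', W, b_est)  # design matrix for the identity factor
    a_est = ridge_solve(A_id, y)
    A_il = np.einsum('pij,i->pj', W, a_est)  # design matrix for the illumination factor
    b_est = ridge_solve(A_il, y)

# The factors are only determined up to a reciprocal scale between a_est and b_est,
# so compare their outer product with the true one instead.
print(np.round(np.outer(a_est, b_est) - np.outer(a_true, b_true), 2))
```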

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.159-170
    • /
    • 2009
  • Extracting expression data from a captured face image in a video is very important for online 3D face animation. Recently, there has been much research on vision-based approaches that capture the expression of an actor in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and tracks face and expression data from real-time video input. Our system consists of three steps: face detection, face feature extraction, and face tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression. We extract 10 feature points from the eye and lip areas, considering the FAPs defined in MPEG-4. Then, we track the displacement of the extracted features across consecutive frames using a color probability distribution model. The experiments showed that our system could track the expression data at about 8 fps.
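
A minimal OpenCV sketch of the first two steps (skin-color masking in YCbCr and Haar-based verification), assuming the cascade file bundled with opencv-python; the Cb/Cr thresholds are illustrative rather than the paper's values, and the feature-point extraction and color-probability tracking are not shown.

```python
import cv2
import numpy as np

# Illustrative skin thresholds in YCrCb; the exact ranges used in the paper may differ.
SKIN_LO = np.array([0, 133, 77], dtype=np.uint8)     # (Y, Cr, Cb) lower bound
SKIN_HI = np.array([255, 173, 127], dtype=np.uint8)  # (Y, Cr, Cb) upper bound

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame_bgr):
    """Step 1: skin-color mask in YCrCb; Step 2: verify with a Haar cascade."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, SKIN_LO, SKIN_HI)
    candidate = cv2.bitwise_and(frame_bgr, frame_bgr, mask=skin)
    gray = cv2.cvtColor(candidate, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces  # list of (x, y, w, h) face rectangles

cap = cv2.VideoCapture(0)       # real-time video input, if a camera is available
ok, frame = cap.read()
if ok:
    print(detect_face(frame))
cap.release()
```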

Pre-processing Method for Face Recognition Robust to Lightness Variation; Facial Symmetry (조명 변화에 강건한 얼굴 인식의 전처리 기법; 얼굴의 대칭성)

  • Kwon Heak-Bong;Kim Young-Gil;Chang Un-Dong;Song Young-Jun
    • The Journal of the Korea Contents Association
    • /
    • v.4 no.4
    • /
    • pp.163-169
    • /
    • 2004
  • In this paper, we propose a recognition method for shaded face images using facial symmetry. When the existing PCA is applied to shaded face images, the recognition rate decreases. To improve the recognition rate, we use facial symmetry. If the difference in light and shade between the two halves of the face is greater than a threshold value, we make a mirror image by replacing the dark side with the bright side symmetrically. The mirror image is then compared with a query image. We compare the performance of the proposed algorithm with existing algorithms such as PCA, PCA without three eigenfaces, and histogram equalization. The recognition rate of our method reaches 98.889%, an excellent result.
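
A small sketch of the mirroring preprocessing, assuming a grayscale face image and a mean-brightness comparison with an arbitrary threshold (both illustrative choices, not the paper's exact criterion).

```python
import numpy as np

def symmetrize(face, threshold=30.0):
    """If the left/right brightness difference exceeds a threshold, rebuild the
    darker half by mirroring the brighter half about the vertical center line."""
    h, w = face.shape
    half = w // 2
    left, right = face[:, :half], face[:, w - half:]
    diff = float(left.mean()) - float(right.mean())
    if abs(diff) <= threshold:
        return face                       # lighting is roughly even; keep as is
    out = face.copy()
    if diff > 0:                          # left is brighter: mirror it onto the right
        out[:, w - half:] = left[:, ::-1]
    else:                                 # right is brighter: mirror it onto the left
        out[:, :half] = right[:, ::-1]
    return out

face = np.random.randint(0, 256, (112, 92)).astype(np.float64)  # stand-in face image
print(symmetrize(face).shape)  # (112, 92)
```

The symmetrized image would then be projected onto the eigenfaces and matched against the query image as in standard PCA-based recognition.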

Recognizing Facial Expression Using 1-order Moment and Principal Component Analysis (1차 모멘트와 주요성분분석을 이용한 얼굴표정 인식)

  • Cho Yong-Hyun;Hong Seung-Jun
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2006.05a
    • /
    • pp.405-408
    • /
    • 2006
  • In this paper, we propose an efficient facial expression recognition method using the first-order moment of the image and principal component analysis. The first-order moment is used as a preprocessing step that shifts the image to its center of mass, improving recognition performance by excluding the background irrelevant to recognition and reducing computation time. Principal component analysis then extracts eigenimages, the features of facial expressions, improving recognition performance by removing redundant signals based on second-order statistics. The proposed method was tested on 48 facial expression images (4 persons x 6 images x 2 groups) of 320x243 pixels using a Euclidean classification measure, and showed better recognition performance than the existing method without preprocessing.
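
A numpy sketch of the two stages, assuming integer centroid shifts and a plain SVD-based PCA; the image size matches the one reported above, but everything else is illustrative.

```python
import numpy as np

def center_by_first_moment(img):
    """Shift the image so its intensity centroid (first-order moment) sits at
    the geometric center; integer shift via np.roll for simplicity."""
    h, w = img.shape
    total = img.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    return np.roll(img, (round(h / 2 - cy), round(w / 2 - cx)), axis=(0, 1))

def eigen_expressions(images, k=10):
    """Eigenimages by PCA: center each image, flatten, subtract the mean,
    and keep the top-k principal components."""
    X = np.stack([center_by_first_moment(im).ravel() for im in images])
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]          # k eigen-expression images (flattened)

# Classification would project a query the same way and use Euclidean distance.
imgs = [np.random.rand(243, 320) for _ in range(48)]   # 48 images of 320x243 as above
basis = eigen_expressions(imgs, k=10)
print(basis.shape)  # (10, 77760)
```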

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.795-802
    • /
    • 2005
  • In this paper, we present a facial expression recognition-and-synthesis system that recognizes 7 basic emotions automatically and renders a face in a non-photorealistic style on a PDA. To recognize facial expressions, we first detect the face area within the image acquired from the camera. Then, a normalization procedure is applied to it for geometric and illumination corrections. To classify a facial expression, we found that combining Gabor wavelets with the enhanced Fisher model gives the best result. In our case, the output is a set of 7 emotional weightings. This weighting information, transmitted to the PDA via a mobile network, is used for non-photorealistic facial expression animation. To render a 3-D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves is more effective in expressing the timing of an expression than the linear interpolation method.
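
A sketch of the Gabor-plus-Fisher classification stage, assuming OpenCV for the Gabor bank and a plain scikit-learn PCA + LDA pipeline standing in for the enhanced Fisher model; kernel parameters, crop sizes, and the dummy training data are all illustrative.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def gabor_features(gray, scales=(4, 8, 16), orientations=8):
    """Stack downsampled Gabor magnitude responses over a small filter bank."""
    feats = []
    for lam in scales:
        for k in range(orientations):
            # getGaborKernel(ksize, sigma, theta, lambda, gamma)
            kern = cv2.getGaborKernel((21, 21), lam / 2.0, np.pi * k / orientations, lam, 0.5)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.append(cv2.resize(np.abs(resp), (16, 16)).ravel())
    return np.concatenate(feats)

# Dummy stand-in data: 70 normalized 64x64 face crops, 10 per basic emotion (7 classes).
faces = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(70)]
y = np.repeat(np.arange(7), 10)

# PCA followed by LDA as a rough stand-in for the enhanced Fisher model (EFM).
X = np.stack([gabor_features(f) for f in faces])
clf = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis()).fit(X, y)

# The 7 posterior probabilities play the role of the emotional weightings
# that would be sent to the PDA to drive the animation.
weights = clf.predict_proba(gabor_features(faces[0]).reshape(1, -1))
print(np.round(weights, 3))
```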

A Passport Recognition and Face Verification Using Enhanced Fuzzy ART Based RBF Network and PCA Algorithm (개선된 퍼지 ART 기반 RBF 네트워크와 PCA 알고리즘을 이용한 여권 인식 및 얼굴 인증)

  • Kim Kwang-Baek
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.17-31
    • /
    • 2006
  • In this paper, passport recognition and face verification methods that can automatically recognize passport codes and detect forged passports are proposed to improve the efficiency and systematic control of immigration management. Correcting the slant is very important for character recognition and face verification, since slanted passport images can introduce various unwanted effects into the recognition of individual codes and faces. Therefore, after smearing the passport image, the longest extracted string of characters is selected, and the angle is corrected using the slant of the horizontal line that connects the centers of thickness of the left and right parts of the string. Passport codes are extracted using the Sobel operator, horizontal smearing, and an 8-neighborhood contour tracking algorithm. The extracted code string area is binarized by a repeated binarization method. The string codes are restored by applying a CDM mask to the binary string area, and individual codes are extracted by the 8-neighborhood contour tracking algorithm. For code recognition, an enhanced fuzzy ART algorithm that dynamically controls the vigilance parameter is proposed and applied to the middle layer of the RBF network using a fuzzy logic connection operator. The face is authenticated by measuring the similarity between the feature vector of the facial image from the passport and that of the facial image from the database, both constructed with the PCA algorithm. After several tests using a forged passport and passports with slanted images, the proposed method proved to be effective in recognizing passport codes and verifying facial images.
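
A sketch of the code-string localization steps only (Sobel edge emphasis, binarization, horizontal smearing), assuming OpenCV; connected components stand in for the 8-neighborhood contour tracking, and the gap size and stand-in image are illustrative. The CDM mask restoration, fuzzy ART RBF recognition, and PCA face verification are not shown.

```python
import cv2
import numpy as np

def smear_horizontally(binary, max_gap=25):
    """Horizontal smearing (run-length smoothing): fill background gaps shorter
    than max_gap between foreground pixels so characters on the same line merge
    into one connected string region."""
    out = binary.copy()
    for row in out:
        on = np.flatnonzero(row)                  # indices of foreground pixels
        for a, b in zip(on[:-1], on[1:]):
            if 0 < b - a <= max_gap:
                row[a:b] = 255
    return out

def locate_code_strings(passport_gray):
    """Edge emphasis with the Sobel operator, Otsu binarization, then horizontal
    smearing; the widest connected components approximate the passport code
    (MRZ) string areas."""
    edges = cv2.Sobel(passport_gray, cv2.CV_8U, 1, 0, ksize=3)
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    smeared = smear_horizontally(binary)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(smeared)
    # stats rows: (x, y, w, h, area); keep the two widest non-background components.
    return sorted(stats[1:], key=lambda s: -s[2])[:2]

img = np.random.randint(0, 256, (300, 600), dtype=np.uint8)   # stand-in passport image
print(locate_code_strings(img))
```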

Design of ASM-based Face Recognition System Using (2D)2 Hybrid Preprocessing Algorithm (ASM기반 (2D)2 하이브리드 전처리 알고리즘을 이용한 얼굴인식 시스템 설계)

  • Kim, Hyun-Ki;Jin, Yong-Tak;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.2
    • /
    • pp.173-178
    • /
    • 2014
  • In this study, we introduce an ASM-based face recognition classifier and its design methodology with the aid of a 2-dimensional 2-directional hybrid preprocessing algorithm. Since face images are easily affected by external environments, ASM (active shape model) is used as an image preprocessing algorithm to resolve this problem; in particular, ASM is widely used for feature extraction of the human face. After extracting the face image area using ASM, the dimensionality of the extracted face image data is reduced using a $(2D)^2$ hybrid preprocessing algorithm based on LDA and PCA. The preprocessed face image data are used as input for the design of the proposed polynomial-based radial basis function neural network. Unlike existing neural networks, the proposed pattern classifier has the characteristics of a robust neural network and is also superior in terms of predictive ability and its capacity to handle high-dimensional data. The essential design parameters of the classifier (the number of row eigenvectors, column eigenvectors, and clusters, and the fuzzification coefficient) are optimized by means of the ABC (artificial bee colony) algorithm. The performance of the proposed classifier is quantified on the Yale and AT&T datasets widely used in face recognition.
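
A numpy sketch of the $(2D)^2$ (two-directional, two-dimensional) projection part of the preprocessing, applied directly to image matrices; the LDA coupling, the polynomial RBF neural network, and the ABC optimization are omitted, and all sizes are illustrative.

```python
import numpy as np

def two_directional_2dpca(images, k_rows=8, k_cols=8):
    """(2D)^2 PCA: learn a column-side projection Z (from the row covariance) and
    a row-side projection X (from the column covariance), then reduce each h x w
    image A to the small matrix Z^T A X of size k_rows x k_cols."""
    A = np.stack(images).astype(np.float64)          # (n, h, w)
    mean = A.mean(axis=0)
    D = A - mean
    G_col = np.einsum('nhw,nhv->wv', D, D) / len(A)  # w x w image covariance
    G_row = np.einsum('nhw,ngw->hg', D, D) / len(A)  # h x h image covariance
    _, X = np.linalg.eigh(G_col)                     # eigenvectors in ascending order
    _, Z = np.linalg.eigh(G_row)
    X, Z = X[:, ::-1][:, :k_cols], Z[:, ::-1][:, :k_rows]
    features = np.einsum('hr,nhw,wc->nrc', Z, D, X)  # n x k_rows x k_cols
    return features, mean, Z, X

imgs = [np.random.rand(64, 64) for _ in range(50)]   # stand-in for ASM-cropped faces
feats, mean, Z, X = two_directional_2dpca(imgs)
print(feats.shape)   # (50, 8, 8)
```

The reduced feature matrices (flattened) would then serve as the input to the polynomial-based RBF neural network classifier described above.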

Automatic Tagging Scheme for Plural Faces (다중 얼굴 태깅 자동화)

  • Lee, Chung-Yeon;Lee, Jae-Dong;Chin, Seong-Ah
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.3
    • /
    • pp.11-21
    • /
    • 2010
  • As the quantity of information and the number of web pages grow rapidly, many studies have been conducted in recent years to improve retrieval performance and reflect users' retrieval needs. One alternative approach is a tagging system, which enables users to attach metadata, called tags, to writings, pictures, movies, and other resources, making it convenient to retrieve Internet resources. Tags, like keywords, play a critical role in maintaining target pages. However, annotating tags still requires time-consuming labor, and overuse of tagging can itself become a hindrance. In this paper, we present an automatic tagging scheme as a solution to the drawbacks and inconveniences of current tagging systems. To realize the approach, a face recognition-based tagging system for SNS is proposed, built on a face area detection procedure, a linear classifier, and a boosting algorithm. The proposed tagging service can help users utilize SNS more efficiently. Experimental results and a performance analysis are presented as well.
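
A sketch of how such a tagging loop might be assembled, assuming OpenCV's bundled Haar cascade for face area detection and a PCA + AdaBoost classifier from scikit-learn standing in for the paper's linear-plus-boosting identity classifier; the training data, crop size, and name list are all placeholders.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

FACE_SIZE = (32, 32)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_faces(image_bgr):
    """Detect face areas and return normalized gray crops plus their boxes."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    crops = [cv2.resize(gray[y:y + h, x:x + w], FACE_SIZE).ravel()
             for (x, y, w, h) in boxes]
    return crops, boxes

# Train a PCA + AdaBoost identity classifier from already-tagged example faces
# (random placeholder data here; labels 0..4 represent five known people).
X_train = np.random.rand(100, FACE_SIZE[0] * FACE_SIZE[1])
y_train = np.repeat(np.arange(5), 20)
tagger = make_pipeline(PCA(n_components=20), AdaBoostClassifier(n_estimators=50))
tagger.fit(X_train, y_train)

def auto_tag(image_bgr, names):
    """Return (name, face box) pairs for every face found in the photo."""
    crops, boxes = crop_faces(image_bgr)
    if not crops:
        return []
    labels = tagger.predict(np.stack(crops))
    return [(names[l], tuple(box)) for l, box in zip(labels, boxes)]

photo = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)   # stand-in photo
print(auto_tag(photo, ["A", "B", "C", "D", "E"]))
```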