• Title/Summary/Keyword: 3차원 얼굴 (3D Face)


Face Relighting Based on Virtual Irradiance Sphere and Reflection Coefficients (가상 복사조도 반구와 반사계수에 근거한 얼굴 재조명)

  • Han, Hee-Chul;Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering / v.13 no.3 / pp.339-349 / 2008
  • We present a novel method to estimate the light source direction and relight a face texture image of a single 3D model under arbitrary, unknown illumination conditions. We create a virtual irradiance sphere to detect the light source direction from a given illuminated texture image using both normal vector mapping and weighted bilinear interpolation. We then derive a relighting equation with estimated ambient and diffuse coefficients. We provide the results of a series of experiments on light source estimation, relighting, and face recognition to show the efficiency and accuracy of the proposed method in restoring the shading and shadow areas of a face texture image. Our approach to face relighting can be used not only for illumination-invariant face recognition applications but also for reducing visual load and improving visual performance in tasks using 3D displays.
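
The abstract does not reproduce the relighting equation itself; the following is a minimal numerical sketch of Lambertian relighting with separate ambient and diffuse coefficients, assuming per-texel unit normals have already been mapped onto the texture (function and parameter names are illustrative, not the paper's).

```python
import numpy as np

def relight(albedo, normals, light_dir, k_ambient, k_diffuse):
    """Relight a texture under a new light direction (Lambertian sketch).

    albedo    : (H, W, 3) reflectance estimated from the input texture
    normals   : (H, W, 3) unit surface normals mapped onto the texture
    light_dir : (3,) unit vector pointing toward the new light source
    k_ambient, k_diffuse : scalar coefficients (estimated in the paper)
    """
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)        # (H, W) diffuse term
    shading = k_ambient + k_diffuse * n_dot_l[..., None]      # broadcast over RGB
    return np.clip(albedo * shading, 0.0, 1.0)
```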

Edge Watermarking of 3-Dimensional Shape Recognition System (3차원 형상 인식 시스템에서의 에지 워터마킹)

  • 윤재식;유상욱;성택영;김희정;권성근;이응주;권기룡
    • Proceedings of the Korea Multimedia Society Conference / 2004.05a / pp.163-166 / 2004
  • This paper proposes an algorithm that extracts 3D edges from the depth information of 3D image data scanned by a 3D shape recognition system and embeds a watermark there. In the proposed algorithm, the data values obtained by object scanning with a vertical, parallel-type 3D shape scanner are extracted. The extracted values form a 3D image in which each pixel of a 2D image (the x and y axes) carries depth information; this differs from conventional 3D images, offers excellent image quality, and contains abundant vertex and mesh information. In the acquired data, the x and y coordinates indicate a position in the image, while the remaining coordinate holds the depth values that form the 3D image. We proposed an algorithm that detects edges from a 3D face image scanned by the 3D shape recognition system and embeds a watermark at the positions where the edges exist. Simulation results for evaluating the proposed watermarking algorithm confirm strong robustness of the watermarked model against cropping, remeshing, and mesh simplification attacks, demonstrating that a watermark can be embedded directly in a 3D shape recognition system.
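
No implementation details are given in the abstract; the sketch below only illustrates the general idea of detecting depth edges and perturbing the depth values there, assuming the scanned data are available as a dense depth image (the threshold, strength, and embedding rule are assumptions, not the paper's method).

```python
import numpy as np

def embed_edge_watermark(depth, watermark_bits, strength=0.01, edge_thresh=0.5):
    """Embed watermark bits at depth-edge locations (illustrative sketch).

    depth          : (H, W) depth image from the 3D shape scanner
    watermark_bits : 1-D array of {0, 1} bits to embed
    strength       : perturbation added to the depth values
    edge_thresh    : gradient-magnitude threshold defining an edge
    """
    gy, gx = np.gradient(depth.astype(np.float64))
    edge_mask = np.hypot(gx, gy) > edge_thresh              # 3D edges via depth gradient
    ys, xs = np.nonzero(edge_mask)

    marked = depth.astype(np.float64).copy()
    n = min(len(watermark_bits), len(ys))
    # Map bit 0/1 to a -strength/+strength perturbation at edge pixels.
    marked[ys[:n], xs[:n]] += (2.0 * np.asarray(watermark_bits[:n]) - 1.0) * strength
    return marked, edge_mask
```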

Facial Feature Extraction Using Energy Probability in Frequency Domain (주파수 영역에서 에너지 확률을 이용한 얼굴 특징 추출)

  • Choi Jean;Chung Yns-Su;Kim Ki-Hyun;Yoo Jang-Hee
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.4 s.310 / pp.87-95 / 2006
  • In this paper, we propose a novel feature extraction method for face recognition based on the Discrete Cosine Transform (DCT), Energy Probability (EP), and Linear Discriminant Analysis (LDA). We define the energy probability as the magnitude of the effective information, and it is used to create a frequency mask in the DCT domain. The feature extraction method consists of three steps: i) the spatial domain of the face images is transformed into the frequency (DCT) domain; ii) the energy probability is applied in the DCT domain to reduce the data dimension and retain the valid information; iii) to obtain the most significant and invariant features of the face images, LDA is applied to the data extracted with the frequency mask. In experiments, the recognition rate is 96.8% on the ETRI database and 100% on the ORL database. The proposed method shows improvements in the dimension reduction of the feature space and in face recognition over previously proposed methods.
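
As a rough illustration of the three-step pipeline (DCT, energy-probability mask, LDA), here is a hedged sketch using SciPy and scikit-learn; the mask-selection rule and the kept-coefficient ratio are assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dct_energy_features(images, labels, keep_ratio=0.05):
    """DCT -> energy-based frequency mask -> LDA (illustrative sketch).

    images     : (N, H, W) grayscale face images
    labels     : (N,) identity labels
    keep_ratio : fraction of DCT coefficients kept by the energy mask
    """
    coeffs = np.stack([dctn(img, norm="ortho") for img in images])   # (N, H, W)
    energy = (coeffs ** 2).mean(axis=0)                              # average energy per frequency
    prob = energy / energy.sum()                                     # "energy probability"

    # Frequency mask: keep the highest-probability coefficients.
    k = max(1, int(keep_ratio * prob.size))
    mask = prob >= np.sort(prob, axis=None)[-k]

    features = coeffs[:, mask]                                       # (N, ~k) masked coefficients
    lda = LinearDiscriminantAnalysis().fit(features, labels)
    return lda.transform(features), lda, mask
```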

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions:PartB / v.12B no.7 s.103 / pp.795-802 / 2005
  • In this paper, we present a facial expression recognition-synthesis system that automatically recognizes 7 basic emotions and renders a face in a non-photorealistic style on a PDA. For the recognition of facial expressions, we first detect the face area within the image acquired from the camera. Then, a normalization procedure is applied to it for geometric and illumination corrections. To classify a facial expression, we found that the best results are obtained when Gabor wavelets are combined with the enhanced Fisher model. In our case, the output is the set of 7 emotional weightings. Such weighting information, transmitted to the PDA via a mobile network, is used for non-photorealistic facial expression animation. To render a 3-D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves is more effective in expressing the timing of an expression than the linear interpolation method.
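
The paper's emotional curves are not specified in the abstract; the sketch below only contrasts linear interpolation of the 7 expression weights with a smooth ease-in/ease-out curve, to show why a non-linear timing curve can convey expression dynamics better (the curve shape and the weight ordering are assumptions).

```python
import numpy as np

def linear_blend(w_start, w_end, t):
    """Linear interpolation of expression weights at normalized time t in [0, 1]."""
    return (1.0 - t) * w_start + t * w_end

def emotional_curve_blend(w_start, w_end, t):
    """Blend along a smooth ease-in/ease-out curve (curve shape is an assumption).

    The smoothstep curve slows the onset and release of an expression and
    speeds up the middle, which conveys expression timing better than a
    straight linear ramp.
    """
    s = t * t * (3.0 - 2.0 * t)            # smoothstep easing
    return (1.0 - s) * w_start + s * w_end

# Example: morph from neutral toward "happiness" over 10 frames.
neutral = np.zeros(7)
happiness = np.eye(7)[0]                    # hypothetical weight ordering
frames = np.stack([emotional_curve_blend(neutral, happiness, t)
                   for t in np.linspace(0.0, 1.0, 10)])
```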

Face Representation Method Using Pixel-to-Vertex Map(PVM) for 3D Model Based Face Recognition (3차원 얼굴인식을 위한 픽셀 대 정점 맵 기반 얼굴 표현방법)

  • Moon, Hyeon-Jun;Jeong, Kang-Hun;Hong, Tae-Hwa
    • Proceedings of the IEEK Conference / 2006.06a / pp.1031-1032 / 2006
  • A 3D model based face recognition system is generally inefficient in computation time because a 3D face model consists of a large number of vertices. In this paper, we propose a novel 3D face representation algorithm to reduce the number of vertices and optimize the computation time.
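
The short abstract does not define the Pixel-to-Vertex Map; the following is one hedged reading in which vertices are binned onto a pixel grid and at most one representative vertex is kept per pixel, thereby reducing the vertex count (names, grid size, and the tie-breaking rule are illustrative, not the paper's).

```python
import numpy as np

def pixel_to_vertex_map(vertices, grid_shape=(64, 64)):
    """Build a pixel-to-vertex map keeping one representative vertex per pixel.

    vertices   : (V, 3) array of 3D face vertices (x, y, z)
    grid_shape : resolution of the 2D pixel grid used for subsampling

    Returns the map (pixel -> vertex index, -1 where empty) and the reduced
    vertex set, which can then be fed to a recognition pipeline.
    """
    h, w = grid_shape
    xy = vertices[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    cols = np.clip(((xy[:, 0] - mins[0]) / (maxs[0] - mins[0] + 1e-9) * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip(((xy[:, 1] - mins[1]) / (maxs[1] - mins[1] + 1e-9) * (h - 1)).astype(int), 0, h - 1)

    pvm = np.full((h, w), -1, dtype=int)
    pvm[rows, cols] = np.arange(len(vertices))   # later vertices overwrite earlier ones in a bin
    kept = pvm[pvm >= 0]
    return pvm, vertices[kept]
```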

Improvement of Face Components Detection using Neck Removal (목 부분의 제거를 통한 얼굴 검출 향상 기법)

  • Yoon, Ga-Rim;Yoon, Yo-Sup;Kim, Young-Bong
    • Proceedings of the Korea Contents Association Conference / 2004.11a / pp.321-326 / 2004
  • Many researchers have studied texturing a 3D face model with front and side pictures of an ordinary person. It is very important to exactly detect the positions of the eyes, nose, and mouth of a human from the side pictures. Previous approaches first found the position of an eye, the nose, or the mouth and then extracted the other face components using their positional correlation. The detection results greatly depend on the correct extraction of the neck from the images. Therefore, we present a new algorithm that removes the neck completely and thus improves the detection rate of face components. To do this, we use the RGB values and their differences.
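
Only the use of RGB values and their differences is stated; the sketch below is a loose interpretation that suppresses skin-like pixels below an assumed chin row so that later component detection is not distracted by the neck (the thresholds and the chin-row fraction are guesses, not values from the paper).

```python
import numpy as np

def remove_neck(image, chin_row_fraction=0.75, rg_min=15, rb_min=15):
    """Zero out skin-like pixels below an assumed chin row (illustrative sketch).

    image             : (H, W, 3) uint8 RGB side-view picture
    chin_row_fraction : fraction of image height below which the neck is assumed
    rg_min, rb_min    : minimum R-G and R-B differences treated as skin
    """
    img = image.astype(np.int32)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    skin_like = (r - g > rg_min) & (r - b > rb_min)          # crude skin test via channel differences

    out = image.copy()
    chin_row = int(chin_row_fraction * image.shape[0])
    neck_mask = np.zeros(image.shape[:2], dtype=bool)
    neck_mask[chin_row:, :] = skin_like[chin_row:, :]
    out[neck_mask] = 0                                       # remove neck pixels before component detection
    return out, neck_mask
```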

Face Recognition Robust to Pose Variations (포즈 변화에 강인한 얼굴 인식)

  • 노진우;문인혁;고한석
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.5 / pp.63-69 / 2004
  • This paper proposes a novel method for achieving pose-invariant face recognition using a cylindrical model. On the assumption that a face is shaped like a cylinder, we estimate the object's pose and then extract the frontal face image via a pose transform with the previously estimated pose angle. By employing the proposed pose transform technique, we can increase face recognition performance using the frontal face images. In representative experiments, the recognition rate increased from 61.43% to 94.76% with the pose transform. Additionally, the proposed method achieves a recognition rate as good as that of a more complicated 3D face model.
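
As a hedged illustration of the cylindrical assumption, the sketch below warps a yaw-rotated face image toward a frontal view by treating the face as a cylinder under orthographic projection; the paper's actual pose-estimation step and projection model are not reproduced here.

```python
import numpy as np

def cylindrical_frontalize(image, yaw_deg):
    """Warp a yaw-rotated face image toward a frontal view using a cylinder model.

    image   : (H, W) grayscale face image, roughly centered on the face
    yaw_deg : estimated head yaw in degrees

    Simplified sketch: the face is a cylinder whose axis is the image's
    vertical center line, viewed under orthographic projection.
    """
    h, w = image.shape
    radius = w / 2.0
    yaw = np.deg2rad(yaw_deg)
    frontal = np.zeros_like(image)

    for col in range(w):
        x = (col + 0.5) - radius                    # horizontal offset from the cylinder axis
        theta = np.arcsin(np.clip(x / radius, -1.0, 1.0))
        if np.cos(theta + yaw) <= 0:                # surface point not visible in the input
            continue
        src_x = radius * np.sin(theta + yaw) + radius - 0.5
        src_col = int(round(src_x))
        if 0 <= src_col < w:
            frontal[:, col] = image[:, src_col]     # nearest-neighbor resampling per column
    return frontal
```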

Facial Expression Recognition using ICA-Factorial Representation Method (ICA-factorial 표현법을 이용한 얼굴감정인식)

  • Han, Su-Jeong;Kwak, Keun-Chang;Go, Hyoun-Joo;Kim, Sung-Suk;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.3 / pp.371-376 / 2003
  • In this paper, we propose a method for recognizing facial expressions using the ICA (Independent Component Analysis)-factorial representation method. Facial expression recognition consists of two stages. First, the feature extraction stage transforms the high-dimensional face space into a low-dimensional feature space using PCA (Principal Component Analysis), and then the feature vectors are extracted by the ICA-factorial representation method. The second, recognition stage is performed by a KNN (K-Nearest Neighbor) algorithm based on the Euclidean distance measure. We constructed a facial expression database for six basic expressions (happiness, sadness, anger, surprise, fear, dislike) and obtained better performance than previous works.
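
The two-stage pipeline maps naturally onto standard library components; here is a minimal sketch with scikit-learn's PCA, FastICA, and a Euclidean KNN classifier, where the number of components and neighbors are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.neighbors import KNeighborsClassifier

def train_ica_factorial_knn(train_faces, train_labels, n_components=30, k=3):
    """PCA reduction -> ICA-factorial features -> Euclidean KNN (illustrative sketch).

    train_faces  : (N, D) flattened face images
    train_labels : (N,) expression labels (e.g. the six basic expressions)
    n_components : reduced dimensionality (value is an assumption)
    """
    pca = PCA(n_components=n_components).fit(train_faces)
    reduced = pca.transform(train_faces)

    # ICA in the factorial-code sense: learn statistically independent
    # coefficients for each PCA-reduced face vector.
    ica = FastICA(n_components=n_components, random_state=0).fit(reduced)
    features = ica.transform(reduced)

    knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean").fit(features, train_labels)
    return pca, ica, knn

def predict_expression(pca, ica, knn, faces):
    """Classify new faces with the trained PCA -> ICA -> KNN chain."""
    return knn.predict(ica.transform(pca.transform(faces)))
```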

A Study on Facial Modeling using Implicit Primitive (음함수 프리미티브를 이용한 얼굴모델링에 대한 연구)

  • Lee Hyun-Cheol;Song Yong-Kyu;Kim Eun-Seok;Hur Gi-Taek
    • Proceedings of the Korea Contents Association Conference / 2005.11a / pp.466-469 / 2005
  • Recently, in computer graphics, research on 3D animation has been very active. One of the important research areas in 3D animation is the animation of human beings. The implicit surface model is convenient for modeling objects composed of complicated surfaces, such as 3D characters and liquids. Moreover, it can represent various forms of surfaces using a relatively small amount of data. In this paper, we propose a method of facial model generation using implicit primitives.
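
As a small illustration of implicit primitives, the sketch below sums Gaussian blobs into a scalar field and extracts an isosurface with marching cubes (assuming scikit-image is available); the primitive placement is arbitrary and is not the paper's facial model.

```python
import numpy as np
from skimage import measure

def blobby_field(centers, radii, resolution=64):
    """Evaluate a sum-of-Gaussians implicit field built from spherical primitives.

    centers : (P, 3) primitive centers (e.g. skull, jaw, cheek blobs)
    radii   : (P,) primitive radii controlling each blob's influence
    """
    lin = np.linspace(-1.0, 1.0, resolution)
    x, y, z = np.meshgrid(lin, lin, lin, indexing="ij")
    grid = np.stack([x, y, z], axis=-1)                      # (R, R, R, 3)

    field = np.zeros((resolution,) * 3)
    for c, r in zip(centers, radii):
        d2 = ((grid - c) ** 2).sum(axis=-1)
        field += np.exp(-d2 / (2.0 * r * r))                 # Gaussian falloff per primitive
    return field

# Example: a crude head shape from three primitives, surfaced with marching cubes.
centers = np.array([[0.0, 0.0, 0.0], [0.0, -0.45, 0.15], [0.3, 0.1, 0.2]])
radii = np.array([0.45, 0.2, 0.12])
field = blobby_field(centers, radii)
verts, faces, _, _ = measure.marching_cubes(field, level=0.5)
```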

Face Recognition using LDA and Local MLP (LDA와 Local MLP를 이용한 얼굴 인식)

  • Lee Dae-Jong;Choi Gee-Seon;Cho Jae-Hoon;Chun Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.3 / pp.367-371 / 2006
  • The multilayer perceptron (MLP) has the advantages of learning its optimal parameters and computational efficiency. However, the MLP shows some drawbacks when dealing with high-dimensional data in the input space. Also, it is very difficult to find the optimal parameters when the input data are highly correlated, as in a large-scale face dataset. In this paper, we propose a novel technique for face recognition based on LDA and local MLPs. To resolve the main drawback of the MLP, we first compute the reduced features by LDA. Then, rather than using the whole face set, we construct a local MLP per group consisting of a subset of the face database to find its optimal learning parameters. Finally, we design the face recognition system by combining the local MLPs. From various experiments, we obtained better classification performance in comparison with the results produced by conventional methods such as PCA and LDA.
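
The abstract does not say how the face database is split into groups; the sketch below partitions identities arbitrarily, trains one scikit-learn MLP per group on LDA-reduced features, and picks the most confident local MLP at test time (the grouping strategy and the confidence rule are assumptions).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

def train_lda_local_mlp(faces, labels, n_groups=4):
    """LDA dimensionality reduction followed by one local MLP per identity group.

    faces    : (N, D) flattened face images
    labels   : (N,) identity labels
    n_groups : number of local groups (partitioning by label is an assumption)
    """
    lda = LinearDiscriminantAnalysis().fit(faces, labels)
    feats = lda.transform(faces)

    classes = np.unique(labels)
    groups = np.array_split(classes, n_groups)               # partition identities into groups
    local_mlps = []
    for group in groups:
        idx = np.isin(labels, group)
        mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
        local_mlps.append(mlp.fit(feats[idx], labels[idx]))
    return lda, local_mlps

def predict_lda_local_mlp(lda, local_mlps, faces):
    """Pick, per sample, the local MLP whose top class is most confident."""
    feats = lda.transform(faces)
    preds, confs = [], []
    for mlp in local_mlps:
        proba = mlp.predict_proba(feats)
        preds.append(mlp.classes_[proba.argmax(axis=1)])
        confs.append(proba.max(axis=1))
    preds, confs = np.stack(preds), np.stack(confs)           # (G, M) each
    best = confs.argmax(axis=0)
    return preds[best, np.arange(feats.shape[0])]
```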