• Title/Summary/Keyword: 3D Face (3차원 얼굴)


Real Time Discrimination of 3 Dimensional Face Pose (실시간 3차원 얼굴 방향 식별)

  • Kim, Tae-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.3 no.1 / pp.47-52 / 2010
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright. We develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is subsequently classified using the eigen eye feature space. In the experiments, the discrimination results for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.

  • PDF
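
The bright-pupil step described in the abstract above reduces, in its simplest form, to thresholding and blob analysis. Below is a minimal Python/OpenCV sketch; the threshold and blob-size limits are illustrative assumptions, not values from the paper:

    import cv2
    import numpy as np

    def detect_pupils(ir_frame, thresh=200, min_area=10, max_area=400):
        """Find the two bright pupil blobs in a grayscale active-IR frame."""
        _, binary = cv2.threshold(ir_frame, thresh, 255, cv2.THRESH_BINARY)
        n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
        candidates = []
        for i in range(1, n):  # label 0 is the background
            area = stats[i, cv2.CC_STAT_AREA]
            if min_area <= area <= max_area:
                candidates.append((area, tuple(centroids[i])))
        candidates.sort(reverse=True)      # keep the two largest blobs
        pupils = [c for _, c in candidates[:2]]
        return sorted(pupils)              # left pupil first (by x coordinate)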

Real Time 3D Face Pose Discrimination Based On Active IR Illumination (능동적 적외선 조명을 이용한 실시간 3차원 얼굴 방향 식별)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.3 / pp.727-732 / 2004
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright. We develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is subsequently classified using the eigen eye feature space. In the experiments, the discrimination results for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.
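
The eigen eye feature space described in this abstract (shared with the entry above) can be sketched as PCA over pupil-geometry feature vectors followed by nearest-neighbor matching in the reduced space. The dimensionality and the feature composition are assumptions for illustration; the abstract does not enumerate the exact geometric features:

    import numpy as np

    def build_eigenspace(train_features, n_components=3):
        """PCA over training feature vectors (one row per training image)."""
        mean = train_features.mean(axis=0)
        _, _, vt = np.linalg.svd(train_features - mean, full_matrices=False)
        return mean, vt[:n_components]  # mean vector and eigen basis

    def classify_pose(query, mean, basis, train_proj, train_labels):
        """Project a query into the eigen space; return the nearest pose label."""
        q = (query - mean) @ basis.T
        dists = np.linalg.norm(train_proj - q, axis=1)
        return train_labels[int(np.argmin(dists))]

Here train_proj holds the projections (train_features - mean) @ basis.T of the training set, and a feature vector might stack, for example, the inter-pupil distance and pupil shape ratios.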

Realization of 3D Virtual Face Using two Sheets of 2D photographs (두 장의 2D 사진을 이용한 3D 가상 얼굴의 구현)

  • 임낙현;서경호;김태효
    • Journal of the Institute of Convergence Signal Processing / v.2 no.4 / pp.16-21 / 2001
  • In this paper, a virtual 3D face is synthesized from two 2D photographs: a front view and a side view. First of all, a standard model of a general face is created; from this model, the feature points that represent the structure of the face are densely defined around the ears, eyes, nose, and lips, while the other parts, such as the forehead, chin, and hair, are roughly determined because they are flat regions or contain fewer individual points. Thereafter, the side photograph is attached symmetrically to the left and right sides of the front image, and the result is gradually synthesized using an affine transformation. To remove the differences in color and brightness at the junction, a linear interpolation method is used. As a result, it is confirmed that the proposed method, which deforms a general face model, can obtain a 3D virtual image of an individual face.

  • PDF
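
The junction smoothing mentioned in the abstract, removing color and brightness differences where the side photograph meets the front image, can be sketched as a linear cross-fade over an overlap band. This is a simplified sketch assuming both images are already warped into a common coordinate frame; blend_seam and band are hypothetical names:

    import numpy as np

    def blend_seam(front, side, band=20):
        """Linearly cross-fade a vertical seam between two aligned HxWx3 images."""
        h, w, _ = front.shape
        out = front.astype(float).copy()
        x0 = w - band  # assume the side image continues right of the seam
        alpha = np.linspace(1.0, 0.0, band)[None, :, None]  # 1 -> front, 0 -> side
        out[:, x0:w] = alpha * front[:, x0:w] + (1 - alpha) * side[:, x0:w]
        return out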

3D Face Recognition using Longitudinal Section and Transection (종단면과 횡단면을 이용한 3차원 얼굴 인식)

  • 이영학;박건우;이태홍
    • Journal of KIISE:Software and Applications / v.30 no.9 / pp.885-893 / 2003
  • In this paper, a new practical person verification system is proposed, using features of the longitudinal section and transection of a rotation-compensated 3D face image. The approach works by finding the nose tip, which has a protruding shape on the face. For feature extraction from a 3D face image, the orientation of the frontal posture has to be normalized. Next, the special points in regions such as the nose, eyes, and mouth are detected. The depth, area, and volume of the nose are calculated based on both the longitudinal section and the transection, and the eye interval and mouth width are also computed. In total, 12 features are extracted from the face. The L1 measure is used for comparing two feature vectors because it is simple and robust. In the experimental results, the proposed method achieves a recognition rate of 95.5% using the longitudinal section and transection.
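
The comparison step uses the L1 measure over the 12 extracted features; a minimal sketch, where the acceptance threshold is an assumed tuning parameter rather than a value from the paper:

    import numpy as np

    def l1_distance(f1, f2):
        """L1 (city-block) distance between two 12-dimensional feature vectors."""
        return np.sum(np.abs(np.asarray(f1, float) - np.asarray(f2, float)))

    def verify(query, enrolled, threshold):
        """Accept the claimed identity if the L1 distance is below threshold."""
        return l1_distance(query, enrolled) < threshold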

3D Faces Reconstruction Using Structured Light Images (구조 광 영상을 이용한 3차원 얼굴 복원)

  • Lee, Duk-Ryong;Oh, Il-Seok
    • Proceedings of the Korea Contents Association Conference / 2008.05a / pp.15-18 / 2008
  • This paper proposes a method to reconstruct the 3D face using structured-light images. First of all, we assume that the viewing axes of the projector and the camera are parallel. We project structured light in the shape of a lattice onto the background to acquire a reference structured-light image; this image is used to calibrate the projector and camera. Then, we acquire a face structured-light image by projecting the same structured light onto the face. These two structured-light images are used to reconstruct the 3D face from the disparity measured as the positional difference of corresponding feature points. In our experimental results, we could reconstruct a recognizable 3D face image with these simple devices.

  • PDF
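
With parallel projector and camera axes, as the abstract assumes, depth follows the standard triangulation relation z = f * b / d, where d is the displacement of a lattice point between the reference image and the face image. A sketch, with the baseline and focal length as assumed calibration constants:

    import numpy as np

    def depth_from_displacement(dx, baseline, focal_length):
        """Depth per lattice point from its pixel displacement dx (array)."""
        dx = np.asarray(dx, dtype=float)
        z = np.zeros_like(dx)
        valid = dx > 0                     # zero displacement carries no depth
        z[valid] = focal_length * baseline / dx[valid]
        return z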

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim Sung-Ho
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1478-1484 / 2004
  • This paper presents a method that controls the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in the space of facial expressions. The expression space is created from about 2400 frames of facial expressions. To represent the state of each expression, we use the distance matrix that holds the distances between pairs of feature points on the face; the set of distance matrices is used as the space of expressions. The facial expression of the 3D avatar is controlled in real time as the user navigates this space. To help this process, we visualized the space of expressions in 2D using a Principal Component Analysis (PCA) projection. To see how effective the system is, we had users control the facial expressions of the 3D avatar with it, and this paper evaluates the results.

  • PDF
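
The distance-matrix state representation and the 2D PCA visualization described above can be sketched as follows; the landmark data are assumed inputs, and this is not the paper's exact pipeline:

    import numpy as np

    def expression_state(landmarks):
        """Flatten the pairwise feature-point distance matrix of one frame.

        landmarks is an (n, 2) or (n, 3) array of facial feature points;
        only the upper triangle is kept, since the matrix is symmetric.
        """
        diffs = landmarks[:, None, :] - landmarks[None, :, :]
        dist = np.linalg.norm(diffs, axis=-1)
        iu = np.triu_indices(len(landmarks), k=1)
        return dist[iu]

    def pca_project_2d(states):
        """Project expression-state vectors (rows) onto their first two PCs."""
        centered = states - states.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:2].T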

3D Visualization using Face Position and Direction Tracking (얼굴 위치와 방향 추적을 이용한 3차원 시각화)

  • Kim, Min-Ha;Kim, Ji-Hyun;Kim, Cheol-Ki;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.10a / pp.173-175 / 2011
  • In this paper, we present a user interface that can show 3D objects at various angles using the tracked 3D head position and orientation. In the implemented user interface, first, when the user's head moves left/right (X-axis) or up/down (Y-axis), the displayed objects are moved toward the user's eyes using the 3D head position. Second, when the user's head rotates about the X-axis (pitch) or the Y-axis (yaw), the displayed objects are rotated by the same amount as the user's head. Experimental results from a variety of user positions and orientations show good accuracy and responsiveness for 3D visualization.

  • PDF
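
The pitch/yaw coupling described above amounts to rotating the displayed geometry by the tracked head angles; the translation coupling below is an illustrative simplification, not the paper's exact mapping:

    import numpy as np

    def pose_rotation(pitch, yaw):
        """Rotation matrix for head pitch (about X) and yaw (about Y), in radians."""
        cx, sx = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        return ry @ rx

    def update_view(vertices, head_pos, pitch, yaw):
        """Rotate object vertices (an (n, 3) array) by the head angles and
        shift them toward the tracked head position."""
        return vertices @ pose_rotation(pitch, yaw).T + np.asarray(head_pos)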

A variation of face recognition rate according to the reduction of low dimension in PCA method (PCA 저차원 축소에 따른 조명 있는 얼굴의 인식률 변화)

  • Song, Young-Jun;Kim, Dong-Woo;Kim, Young-Gil;Kim, Nam
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.533-535 / 2006
  • In this paper, we measure the face recognition rate for shaded faces when the low-dimensional feature vectors (the first, second, and third dimensions) are excluded. Excluding them is known to make face recognition robust against illumination, but it is not obvious how each low dimension affects recognition. We analyze the effect of the low dimensions (the first, second, and third, and combinations of these) on shaded faces.

  • PDF
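
Excluding the first few eigenvectors from an eigenface basis is straightforward to express; a minimal sketch, where the numbers of dropped and kept components are illustrative:

    import numpy as np

    def eigenface_basis(train_images, drop=3, keep=50):
        """PCA basis that discards the first `drop` eigenvectors, which tend
        to capture illumination rather than identity (the effect studied here)."""
        x = train_images.reshape(len(train_images), -1).astype(float)
        mean = x.mean(axis=0)
        _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
        return mean, vt[drop:drop + keep]

    def project(image, mean, basis):
        """Feature vector of one face image in the reduced eigenspace."""
        return (image.reshape(-1).astype(float) - mean) @ basis.T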

Facial expression recognition based on pleasure and arousal dimensions (쾌 및 각성차원 기반 얼굴 표정인식)

  • 신영숙;최광남
    • Korean Journal of Cognitive Science / v.14 no.4 / pp.33-42 / 2003
  • This paper presents a new system for facial expression recognition based on a dimension model of internal states. The facial expression information is extracted in three steps. In the first step, a Gabor wavelet representation extracts the edges of the face components. In the second step, sparse features of facial expressions are extracted from neutral faces using the fuzzy C-means (FCM) clustering algorithm, and in the third step, they are extracted from the expression images using the Dynamic Model (DM). Finally, we show the recognition of facial expressions based on the dimension model of internal states using a multi-layer perceptron. The two-dimensional structure of emotion shows that it is possible to recognize not only facial expressions related to basic emotions but also expressions of various other emotions.

  • PDF
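
The first step above, Gabor wavelet edge extraction, can be sketched as filtering with a small bank of oriented Gabor kernels; all parameter values below are assumptions, since the abstract does not specify the wavelet setup:

    import cv2
    import numpy as np

    def gabor_responses(face, n_orientations=4, ksize=21, sigma=4.0,
                        lambd=10.0, gamma=0.5):
        """Stack of edge responses of a grayscale face under oriented Gabor filters."""
        responses = []
        for i in range(n_orientations):
            theta = i * np.pi / n_orientations
            kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                        lambd, gamma, psi=0)
            responses.append(cv2.filter2D(face.astype(np.float32), -1, kernel))
        return np.stack(responses)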

Facial Feature Extraction using Nasal Masks from 3D Face Image (코 형상 마스크를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.1-7 / 2004
  • This paper proposes a new method for facial feature extraction that can be used to normalize face images for 3D face recognition. 3D images are much less sensitive to the illumination source than intensity images, so individual recognition is possible. However, input face images may have variable poses, such as rotation, panning, and tilting. If these variations are not considered, incorrect features could be extracted, and the face recognition system would produce bad matches. So it is necessary to normalize an input image in size and orientation. It is common to use geometrical facial features such as the nose, eyes, and mouth in the face image normalization steps. In particular, the nose is the most prominent feature in a 3D face image. Therefore, this paper describes a nose feature extraction method using 3D nasal masks that resemble the real nasal shape.
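
Nose-tip detection and nasal-mask matching on a range image can be sketched as below; the correlation-based matching is a plain stand-in for the paper's mask method, not its exact algorithm:

    import numpy as np
    from scipy.signal import correlate2d

    def find_nose_tip(depth_map):
        """Nose tip as the most protruding point of a frontal range image
        (assumes larger values mean closer to the sensor)."""
        y, x = np.unravel_index(np.argmax(depth_map), depth_map.shape)
        return x, y

    def nasal_mask_score(depth_map, mask):
        """Correlation of a nasal-shape mask with every image position;
        the peak suggests where the mask fits the nose region best."""
        d = depth_map - depth_map.mean()
        m = mask - mask.mean()
        return correlate2d(d, m, mode='same')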