• Title/Summary/Keyword: 3차원 얼굴 (3D Face)

Search Results: 283

Extracting 2D-Mesh from Structured Light Image for Reconstructing 3D Faces (3차원 얼굴 복원을 위한 구조 광 영상에서의 2차원 메쉬 추출)

  • Lee, Duk-Ryong;Oh, Il-Seok
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2007.11a
    • /
    • pp.248-251
    • /
    • 2007
  • In this paper, we propose a method to estimate a 2-D mesh from a structured light image for the reconstruction of a 3-D face image. To acquire the structured light image, we project structured light onto the face using a projector, then extract the projected cross points from the acquired image. The 2-D mesh is derived from the positions and angles of the cross points, and errors arising during extraction are corrected to obtain an accurate 2-D mesh.

  • PDF
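The cross-point step above can be illustrated as computing intersections of detected stripe lines. A minimal sketch, assuming each projected stripe has already been fitted to a 2D line (all function names here are hypothetical, not the authors'):

```python
import numpy as np

def cross_point(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4.

    Mesh vertices would be the pairwise intersections of the
    horizontal and vertical stripes extracted from the image.
    """
    p1, p2, p3, p4 = (np.asarray(p, float) for p in (p1, p2, p3, p4))
    d1, d2 = p2 - p1, p4 - p3
    # Solve p1 + t*d1 = p3 + s*d2 for (t, s).
    A = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]])
    if abs(np.linalg.det(A)) < 1e-12:
        return None  # parallel stripes: no cross point
    t, _ = np.linalg.solve(A, p3 - p1)
    return p1 + t * d1

# A horizontal and a vertical stripe cross at (2, 3).
print(cross_point((0, 3), (5, 3), (2, 0), (2, 5)))  # -> [2. 3.]
```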

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.159-170
    • /
    • 2009
  • Extracting expression data and capturing a face image from a video is very important for online 3D face animation. Recently, there has been much research on vision-based approaches that capture the expression of an actor in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and traces a face and expression data from real-time video input. Our system proceeds in three steps: face detection, face feature extraction, and face tracing. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, and extract 10 feature points from the eye and lip areas following the FAPs defined in MPEG-4. Then we trace the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can trace expression data at about 8 fps.
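The skin-pixel step of this pipeline can be sketched with a standard RGB-to-YCbCr conversion and a fixed Cb/Cr range. The thresholds below are commonly cited values for skin detection, not necessarily the ones the authors used:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    r, g, b = (np.asarray(rgb, float)[..., i] for i in range(3))
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of pixels whose Cb/Cr fall inside the skin range."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))
```

A face-verification stage (the Haar classifier) would then run only on the connected regions of this mask, which is what makes the two-stage detection cheap.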

Wavelet based Fuzzy Integral System for 3D Face Recognition (퍼지적분을 이용한 웨이블릿 기반의 3차원 얼굴 인식)

  • Lee, Yeung-Hak;Shim, Jae-Chang
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.10
    • /
    • pp.616-626
    • /
    • 2008
  • The face shape extracted from depth values carries the most important facial feature information, and face images decomposed into frequency subbands reveal personal features in detail. In this paper, we develop a method for recognizing range face images by combining multiple frequency domains for each depth image with depth fusion using the fuzzy integral. In the first step, the proposed approach finds the nose tip, the most protruding point on the extracted face area; it serves as the reference point for normalizing the facial pose and for extracting multiple areas by depth threshold values. In the second step, we adopt wavelet coefficients extracted from selected wavelet subbands as features for the authentication problem. The third step applies the eigenface and Linear Discriminant Analysis (LDA) methods to reduce dimensionality and classify. In the last step, the individual classifiers for the coefficients extracted at each resolution level are aggregated using the fuzzy integral. In the experiments, depth threshold value 60 (DT60) shows the highest recognition rate among the regions, and the depth fusion method with the fuzzy integral achieves a 98.6% recognition rate.
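As an illustration of the final fusion step, a Sugeno fuzzy integral over per-subband classifier scores might look as follows. The fuzzy densities are placeholders; the paper does not publish its exact values, and this is a sketch of the general technique rather than the authors' implementation:

```python
import numpy as np

def lambda_measure(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam for the Sugeno lambda-measure."""
    g = np.asarray(densities, float)
    if abs(g.sum() - 1.0) < 1e-12:
        return 0.0  # densities sum to 1: the measure is additive

    def f(lam):
        return np.prod(1.0 + lam * g) - (1.0 + lam)

    # The unique root lies in (-1, 0) if sum(g) > 1, else in (0, inf).
    lo, hi = (-1 + 1e-9, -1e-12) if g.sum() > 1 else (1e-12, 1e6)
    for _ in range(200):  # bisection
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def sugeno_integral(scores, densities):
    """Fuse classifier scores with a Sugeno integral over a lambda-measure."""
    scores = np.asarray(scores, float)
    g = np.asarray(densities, float)
    lam = lambda_measure(g)
    order = np.argsort(-scores)          # sort scores descending
    h, gd = scores[order], g[order]
    gA = np.empty_like(h)                # measure of the nested sets A_1 c A_2 ...
    gA[0] = gd[0]
    for i in range(1, len(h)):
        gA[i] = gA[i - 1] + gd[i] + lam * gA[i - 1] * gd[i]
    return float(np.max(np.minimum(h, gA)))
```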

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.23-35
    • /
    • 2002
  • Gaze detection means locating the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system: a single camera is set above a monitor, and a user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, nostrils, and lip corners) automatically in 2D camera images. From the feature points detected in the initial images, we compute their initial 3D positions by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at a position on the monitor, the moved 3D positions of the features are computed by 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D feature positions. In experiments on a 19-inch monitor, the error between the computed gaze positions and the real ones is about 2.01 inches RMS.
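The final step, computing the gaze point from the normal of the facial feature plane, can be illustrated as below. The coordinate convention (monitor in the plane z = 0) is an assumption for the sketch, not taken from the paper:

```python
import numpy as np

def facial_plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D facial feature points."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def gaze_point_on_monitor(face_center, normal, monitor_z=0.0):
    """Intersect the ray face_center + t*normal with the plane z = monitor_z."""
    face_center = np.asarray(face_center, float)
    normal = np.asarray(normal, float)
    t = (monitor_z - face_center[2]) / normal[2]
    return face_center + t * normal
```

With more than three features, one would fit the plane in a least-squares sense instead of using a single cross product.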

Children's Interpretation of Facial Expression onto Two-Dimension Structure of Emotion (정서의 이차원 구조에서 유아의 얼굴표정 해석)

  • Shin, Young-Suk;Chung, Hyun-Sook
    • Korean Journal of Cognitive Science
    • /
    • v.18 no.1
    • /
    • pp.57-68
    • /
    • 2007
  • This study explores children's categories of emotion understanding from facial expressions in a two-dimensional structure of emotion. Eighty-nine children aged 3 to 5 were asked to match facial expressions to fourteen emotion terms. The facial expressions used in the experiment were photographs rated by 54 university students for degree of expression in each of the two dimensions (pleasure-displeasure and arousal-sleep) on a nine-point scale. The results showed that children were more consistent on the arousal dimension than on the pleasure-displeasure dimension. Emotions such as sadness, sleepiness, anger, and surprise were understood very well in the two-dimensional space, but fear and boredom showed instability on the pleasure-displeasure dimension. In particular, 3-year-old children perceived the degree of arousal-sleep better than that of pleasure-displeasure.

  • PDF

3D Face Recognition using Projection Vectors for the Area in Contour Lines (등고선 영역의 투영 벡터를 이용한 3차원 얼굴 인식)

  • Lee, Yeung-Hak;Shim, Jae-Chang;Lee, Tae-Hong
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.2
    • /
    • pp.230-239
    • /
    • 2003
  • This paper presents a face recognition algorithm using projection vectors that reflect the local features of areas within contour lines. The outline shape of a face alone is insufficient to distinguish people, because human face shapes are similar. Since 3-dimensional (3D) face images include depth information, we can extract distinctive face shapes around the nose tip using depth values. This paper deals with 3D face images, because extracting contour lines from 2-dimensional face images is difficult. After finding the nose tip, we extract two areas within the contour lines at given depth values from a 3D face image obtained by a 3D laser scanner. We then propose a projection vector method to localize the characteristics of the image and reduce the number of index entries in the database. Euclidean distance is used to compare the similarity between two images. The proposed algorithm achieves a recognition rate of 94.3% for face shapes using depth information.

  • PDF
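A generic sketch of the projection-vector idea: project a binary contour-line region onto its rows and columns, concatenate, and compare with Euclidean distance. The normalization choice here is an assumption; the paper's exact vector construction may differ:

```python
import numpy as np

def projection_vector(region):
    """Row- and column-projection feature of a binary region, L2-normalized."""
    region = np.asarray(region, float)
    h = region.sum(axis=1)  # horizontal projection (one value per row)
    v = region.sum(axis=0)  # vertical projection (one value per column)
    feat = np.concatenate([h, v])
    n = np.linalg.norm(feat)
    return feat / n if n else feat

def match(f1, f2):
    """Euclidean distance between two projection-vector features."""
    return float(np.linalg.norm(f1 - f2))
```

Because the feature length depends only on the image size, not on the region's shape, the index entries in the database stay small and fixed-length.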

Generating Face Textures for 3D Avatars from Photos (실사 영상을 사용한 3차원 아바타 얼굴 텍스쳐 생성)

  • Kim, Dong-Hee;Yoon, Jong-Hyun;Park, Jong-Seung
    • Journal of Korea Game Society
    • /
    • v.8 no.1
    • /
    • pp.49-58
    • /
    • 2008
  • In this paper, we propose a texture generation scheme for 3D avatars from three or more photos of a human face. First, we manually mark the image positions corresponding to the vertices of a given UVW map. Then, a face texture is generated automatically from the photo images. The proposed scheme greatly reduces the amount of manual work compared with classical methods such as Photoshop-based schemes. The generated textures are photorealistic, since they fully preserve the naturalness of the original photos. The texture creation scheme can be applied to any kind of mesh structure of a 3D model, and the mesh structure need not be changed to accommodate the generated textures. We created face textures from several triplets of photos and mapped them onto 3D avatar faces. Experimental results showed that the visual realism of the avatar faces is much enhanced by the face textures.

  • PDF
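In the simplest case, the correspondence between marked photo positions and UVW-map vertices can be modeled by a least-squares affine map from texture coordinates to photo pixels. This is a sketch; the paper's actual warping is likely done per-triangle rather than globally:

```python
import numpy as np

def fit_affine(uv, xy):
    """Least-squares affine map from UV texture coords to photo pixel coords.

    Solves [x y] = [u v 1] @ A for the 3x2 matrix A.
    """
    uv = np.asarray(uv, float)
    xy = np.asarray(xy, float)
    M = np.hstack([uv, np.ones((len(uv), 1))])
    A, *_ = np.linalg.lstsq(M, xy, rcond=None)
    return A

def apply_affine(A, uv):
    """Map UV points into photo coordinates with a fitted affine matrix."""
    uv = np.asarray(uv, float)
    return np.hstack([uv, np.ones((len(uv), 1))]) @ A
```

Filling a texel would then amount to sampling the photo at `apply_affine(A, texel_uv)`, blending the three or more photos where their coverage overlaps.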

Efficiency Improvement on Face Recognition using Gabor Tensor (가버 텐서를 이용한 얼굴인식 성능 개선)

  • Park, Kyung-Jun;Ko, Hyung-Hwa
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.9C
    • /
    • pp.748-755
    • /
    • 2010
  • In this paper, we propose an improved face recognition method using a Gabor tensor. The Gabor transform is known to represent characteristic facial features while reducing environmental influence, which can improve the face recognition rate. We combine the three-dimensional tensor obtained from the Gabor transform with MPCA (Multilinear PCA) and LDA. MPCA on a tensor of various features is more effective than traditional one- or two-dimensional PCA and is known to be robust to changes in facial expression or lighting. The proposed method is simulated in MATLAB using the ORL and Yale face databases. Test results show that the recognition rate improves by up to 9~27% compared with existing face recognition methods.
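The Gabor tensor construction can be sketched as stacking filter responses at several orientations into a third-order tensor. The kernel parameters below are illustrative, and circular convolution via FFT is a simplification of a proper boundary-handled filter bank:

```python
import numpy as np

def gabor_kernel(size, theta, freq, sigma):
    """Real (cosine-carrier) Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def gabor_tensor(img, thetas, freq=0.2, sigma=2.0, ksize=9):
    """Stack Gabor responses into an (H, W, orientations) tensor."""
    img = np.asarray(img, float)
    F = np.fft.fft2(img)
    responses = []
    for th in thetas:
        k = gabor_kernel(ksize, th, freq, sigma)
        K = np.fft.fft2(k, s=img.shape)   # zero-pad kernel to image size
        responses.append(np.real(np.fft.ifft2(F * K)))
    return np.stack(responses, axis=-1)
```

MPCA would then operate on this tensor directly, mode by mode, instead of flattening it into one long vector as classical PCA does.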

Study On The Robustness Of Face Authentication Methods Under illumination Changes (얼굴인증 방법들의 조명변화에 대한 견인성 비교 연구)

  • Ko Dae-Young;Kim Jin-Young;Na Seung-You
    • The KIPS Transactions:PartB
    • /
    • v.12B no.1 s.97
    • /
    • pp.9-16
    • /
    • 2005
  • This paper focuses on face authentication systems and the robustness of face authentication methods under illumination changes. Four different face authentication methods are tried, as follows: PCA (Principal Component Analysis), GMM (Gaussian Mixture Models), 1D HMM (1-Dimensional Hidden Markov Models), and Pseudo 2D HMM (Pseudo 2-Dimensional Hidden Markov Models). Experimental results involving an artificial illumination change to face images are compared with each other. Face feature vector extraction based on the 2D DCT (2-Dimensional Discrete Cosine Transform) is used. Experiments to evaluate the four face authentication methods are carried out on the ORL (Olivetti Research Laboratory) face database. The results show that the EER (Equal Error Rate) performance degrades in all cases as ${\delta}$ varies. Without illumination changes, Pseudo 2D HMM achieves $2.54{\%}$, 1D HMM $3.18{\%}$, PCA $11.7{\%}$, and GMM $13.38{\%}$. The 1D HMM performs better than PCA when there are no illumination changes, but worse than PCA under large illumination changes (${\delta}{\geq}40$). The Pseudo 2D HMM shows the best EER performance regardless of the illumination changes.
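The 2D DCT feature extraction shared by all four methods can be sketched as a separable DCT-II followed by keeping the low-frequency corner of the coefficient matrix. The block size and the number of retained coefficients are illustrative choices:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct2_features(block, keep=4):
    """2D DCT of a square block; return the low-frequency keep x keep corner."""
    block = np.asarray(block, float)
    C = dct_matrix(block.shape[0])
    F = C @ block @ C.T              # separable 2D DCT-II
    return F[:keep, :keep].ravel()   # low frequencies as the feature vector
```

Because illumination changes concentrate energy in the lowest coefficients, schemes like this sometimes drop the DC term as well; whether the paper does so is not stated in the abstract.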

Lip Shape Synthesis of the Korean Syllable for Human Interface (휴먼인터페이스를 위한 한글음절의 입모양합성)

  • Lee, Yong-Dong;Choi, Chang-Seok;Choi, Gab-Seok
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.4
    • /
    • pp.614-623
    • /
    • 1994
  • Synthesizing speech and facial images is necessary for a human interface in which man and machine converse as naturally as humans do. The target of this paper is synthesizing the facial images. In the synthesis, a three-dimensional (3-D) shape model of the face is used to realize variations in facial expression and lip shape. The various facial expressions and the lip shapes matched to the syllables are synthesized by deforming the three-dimensional model on the basis of facial muscle actions. Combinations of the consonants and the vowels make 14,364 syllables. The vowels dominate most lip shapes, while the consonants determine a part of them. To determine the lip shapes, this paper investigates all the syllables and classifies the lip shape patterns according to the vowels and the consonants. As a result, the lip shapes are classified into 8 patterns for the vowels and 2 patterns for the consonants. The paper then determines synthesis rules for the classified lip shape patterns. This method allows us to obtain natural facial images with various facial expressions and lip shape patterns.

  • PDF
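The vowel-driven lookup described above can be sketched by decomposing a precomposed Hangul syllable arithmetically. The 8-way vowel grouping below is purely illustrative (loosely by mouth openness and rounding) and is not the paper's actual classification:

```python
# Precomposed Hangul decomposes as ((initial*21) + vowel)*28 + final + 0xAC00.
VOWELS = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"

# Hypothetical 8-pattern grouping of the 21 medial vowels.
PATTERN = {}
for group, vs in enumerate(["ㅏㅑ", "ㅐㅒㅔㅖ", "ㅓㅕ", "ㅗㅛ",
                            "ㅜㅠ", "ㅡ", "ㅣㅢ", "ㅘㅙㅚㅝㅞㅟ"], start=1):
    for v in vs:
        PATTERN[v] = group

def lip_pattern(syllable):
    """Return the (illustrative) lip-shape pattern index for one syllable."""
    code = ord(syllable) - 0xAC00
    if not 0 <= code < 11172:
        raise ValueError("not a precomposed Hangul syllable")
    vowel = VOWELS[(code // 28) % 21]
    return PATTERN[vowel]

print(lip_pattern("가"))  # vowel 'ㅏ' (open)
print(lip_pattern("구"))  # vowel 'ㅜ' (rounded)
```

A consonant-dependent refinement, as in the paper's 2 consonant patterns, would branch on `(code // 28) // 21` (the initial consonant index) in the same way.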