• Title/Summary/Keyword: 3D 얼굴모델 (3D face model)


Face Pose Estimation using Stereo Image (스테레오 영상을 이용한 얼굴 포즈 추정)

  • So, In-Mi;Kang, Sun-Kyung;Kim, Young-Un;Lee, Chi-Geun;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.11 no.3 / pp.151-159 / 2006
  • In this paper, we present a method for estimating a face pose from two camera images. First, the method finds corresponding facial feature points of the eyebrows, eyes, and lips in the two images. After that, it computes the three-dimensional locations of the facial feature points using the triangulation method of stereo vision. Next, it forms a triangle from the extracted facial feature points and computes the surface normal vector of the triangle, which represents the direction of the face. We applied the computed face pose to display a 3D face model. The experimental results show that the proposed method estimates the face pose correctly.

  • PDF
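The surface-normal step described in the abstract above can be sketched as follows; this is a minimal illustration with hypothetical feature points, not the authors' code, and it assumes the 3D locations of the features have already been triangulated:

```python
import numpy as np

def face_direction(p1, p2, p3):
    """Surface normal of the triangle spanned by three 3D facial
    feature points (e.g. eyebrow, eye, and lip centers); the normal
    approximates the facing direction of the head."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)   # normal via the cross product
    return n / np.linalg.norm(n)     # normalize to a unit vector

# A frontal face: feature points lying in the z = 0 plane
# give a normal along the z axis.
print(face_direction([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # → [0. 0. 1.]
```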

Analysis and Synthesis of Facial Expression using Base Faces (기준얼굴을 이용한 얼굴표정 분석 및 합성)

  • Park, Moon-Ho;Ko, Hee-Dong;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications / v.27 no.8 / pp.827-833 / 2000
  • Facial expression is an effective tool for expressing human emotion. In this paper, a facial expression analysis method based on base faces and their blending ratios is proposed. Seven base faces were chosen as axes for describing and analyzing an arbitrary facial expression: surprise, fear, anger, disgust, happiness, sadness, and expressionless. Each facial expression was built by fitting a generic 3D facial model to a facial image. Two comparable methods, genetic algorithms and simulated annealing, were used to search for the blending ratios of the base faces. The usefulness of the proposed method for facial expression analysis was demonstrated by the facial expression synthesis results.

  • PDF
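The blending idea above can be sketched numerically: an arbitrary expression is a weighted combination of the seven base faces, and GA or simulated annealing searches for the weights that minimize the distance to a target. The feature vectors below are random placeholders, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors for the 7 base faces (one per row);
# in the paper these come from a generic 3D model fitted to images.
base_faces = rng.normal(size=(7, 12))

def synthesize(weights):
    """Blend the base faces by their blending ratio (weights sum to 1)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ base_faces

def fitness(weights, target):
    """Error that the GA / simulated annealing search would minimize."""
    return np.linalg.norm(synthesize(weights) - target)

# A pure 'happiness' target is reproduced exactly by the unit
# weight vector selecting base face 4.
target = base_faces[4]
w = np.eye(7)[4]
print(round(fitness(w, target), 6))  # → 0.0
```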

Automatic Generation of 3D Face Model from Trinocular Images (Trinocular 영상을 이용한 3D 얼굴 모델 자동 생성)

  • Yi, Kwang-Do;Ahn, Sang-Chul;Kwon, Yong-Moo;Ko, Han-Seok;Kim, Hyoung-Gon
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.7 / pp.104-115 / 1999
  • This paper proposes an efficient method for 3D modeling of a human face from trinocular images by reconstructing the face surface from range data. By using a trinocular camera system, we mitigate the tradeoff between the occlusion problem and the limited range resolution that is the critical limitation of binocular camera systems. We also propose MPC_MBS (Matching Pixel Count Multiple Baseline Stereo), an area-based matching method, to reduce the boundary-overreach phenomenon and to improve both accuracy and precision in matching; its computing time can be reduced significantly by removing redundancies. In the model-generation step, sub-pixel-accurate surface data are obtained by 2D interpolation of the disparity values and are sampled to build regular triangular meshes. The data size of the triangular mesh model can be controlled by merging vertices that lie on the same plane within a user-defined error threshold.

  • PDF
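The core of the MPC matching cost can be sketched as follows: instead of summing intensity differences, it counts the window pixels whose difference falls below a threshold, accumulated over the multiple baselines, and the disparity with the highest count wins. This is a simplified single-point sketch with assumed parameter names, not the authors' implementation:

```python
import numpy as np

def mpc_disparity(left, rights, x, y, w=2, max_d=8, thr=10):
    """Matching Pixel Count over multiple baselines (a simplified
    sketch of MPC_MBS for one pixel): for each candidate disparity,
    count window pixels whose absolute difference is below `thr`,
    summed over all right-side images, and keep the disparity with
    the most matching pixels."""
    best_d, best_count = 0, -1
    patch = left[y - w:y + w + 1, x - w:x + w + 1].astype(int)
    for d in range(max_d + 1):
        count = 0
        for k, right in enumerate(rights, start=1):
            dx = d * k  # disparity scales with the baseline length
            if x - w - dx < 0:
                continue
            cand = right[y - w:y + w + 1,
                         x - dx - w:x - dx + w + 1].astype(int)
            count += int(np.sum(np.abs(patch - cand) < thr))
        if count > best_count:
            best_d, best_count = d, count
    return best_d

rng = np.random.default_rng(1)
left = rng.integers(0, 255, size=(20, 40))
# Two "right" images at baselines 1 and 2, true disparity 2.
rights = [np.roll(left, -2, axis=1), np.roll(left, -4, axis=1)]
print(mpc_disparity(left, rights, x=20, y=10, max_d=5))  # → 2
```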

Eye Tracking and synthesize for MPEG-4 Coding (MPEG-4 코딩을 위한 눈 추적과 애니메이션)

  • Park, Dong-Hee;Bae, Cheol-Soo;Na, Sang-Dong
    • Proceedings of the Korea Information Processing Society Conference / 2002.04a / pp.741-744 / 2002
  • In this paper, we propose a method for synthesizing eye motion that uses the detected eye shape to compute the eye deformation of a 3D model. Accurate localization and tracking of facial features are important for building high-quality models based on an MPEG-4 coding system. In very-low-bit-rate videoconferencing applications, accurate localization and tracking of facial features are needed to follow the motion of the eyes and lips precisely over time; these motions can then be coded and transmitted to a remote site, where animation techniques are used to synthesize the motion on a face model. We propose a heuristic method, building on well-known facial feature detection and tracking algorithms, that can be improved efficiently. This paper focuses not only on detecting eye motion but also on tracking and modeling it.

  • PDF

Pose-invariant Face Recognition using a Cylindrical Model and Stereo Camera (원통 모델과 스테레오 카메라를 이용한 포즈 변화에 강인한 얼굴인식)

  • 노진우;홍정화;고한석
    • Journal of KIISE:Software and Applications / v.31 no.7 / pp.929-938 / 2004
  • This paper proposes a pose-invariant face recognition method using a cylindrical model and a stereo camera, and considers two cases: a single input image and a stereo input image. In the single-image case, we normalize the face's yaw pose using the cylindrical model; in the stereo case, we also normalize the face's pitch pose using the cylindrical model with a pitch angle previously estimated from the stereo geometry. Moreover, since two images acquired at the same time are available, overall recognition performance can be increased by decision-level fusion. In representative experiments, the yaw-pose transform raised the recognition rate from 61.43% to 94.76%, and the proposed method performs as well as a more complicated 3D face model. With the stereo camera system, the recognition rate improved by a further 5.24% for upward-facing poses and by 3.34% through decision-level fusion.

Face Replacement under Different Illumination Condition (다른 조명 환경을 갖는 영상 간의 얼굴 교체 기술)

  • Song, Joongseok;Zhang, Xingjie;Park, Jong-Il
    • Journal of Broadcast Engineering / v.20 no.4 / pp.606-618 / 2015
  • Computer graphics (CG) has become an important technique in media content such as movies and TV. In particular, face replacement, which swaps the faces between different images, has long been studied by academia and industry as a typical CG technology. In this paper, we propose a face replacement method between target and reference images taken under different illumination environments, without using a 3D model. In experiments, we verified that the proposed method can naturally replace faces between reference and target images under different illumination conditions.

Face Recognition Robust to Pose Variations (포즈 변화에 강인한 얼굴 인식)

  • 노진우;문인혁;고한석
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.5 / pp.63-69 / 2004
  • This paper proposes a novel method for achieving pose-invariant face recognition using a cylindrical model. On the assumption that a face is shaped like a cylinder, we estimate the object's pose and then extract a frontal face image via a pose transform with the previously estimated pose angle. By employing the proposed pose-transform technique, we can increase face recognition performance using the frontal face images. In representative experiments, the pose transform raised the recognition rate from 61.43% to 94.76%, and the proposed method performs as well as a more complicated 3D face model.
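The cylindrical pose transform used in the two entries above can be sketched roughly as follows. Each image column is mapped to an angle on an assumed cylinder, the estimated yaw is removed, and the column is resampled from where it was originally imaged. Function and parameter names are hypothetical and the resampling is deliberately crude (nearest neighbor), so this illustrates only the geometry, not the authors' method:

```python
import numpy as np

def yaw_normalize(face, yaw_deg, radius=None):
    """Unwarp a yawed face image to a frontal view under the cylinder
    assumption: column x corresponds to angle theta = arcsin(x / r) on
    the cylinder surface; adding the estimated yaw back and projecting
    (src = r * sin(theta + yaw)) tells us where that surface column was
    imaged, and we resample it there (nearest neighbor)."""
    w_img = face.shape[1]
    r = radius if radius is not None else w_img / 2.0
    cx = w_img / 2.0
    yaw = np.radians(yaw_deg)
    out = np.zeros_like(face)
    for x in range(w_img):
        s = (x - cx) / r
        if abs(s) >= 1.0:
            continue                          # outside the cylinder
        theta = np.arcsin(s)                  # angle on the cylinder
        src = cx + r * np.sin(theta + yaw)    # imaged column position
        sx = int(round(src))
        if 0 <= sx < w_img:
            out[:, x] = face[:, sx]
    return out

# With yaw = 0 the image is unchanged (apart from the boundary column).
```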

Lip Shape Synthesis of the Korean Syllable for Human Interface (휴먼인터페이스를 위한 한글음절의 입모양합성)

  • 이용동;최창석;최갑석
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.4 / pp.614-623 / 1994
  • Synthesizing speech and facial images is necessary for a human interface in which humans and machines converse as naturally as humans do. The target of this paper is synthesizing the facial images. A three-dimensional (3-D) shape model of the face is used to realize variations in facial expression and lip shape. Various facial expressions and lip shapes harmonized with the syllables are synthesized by deforming the three-dimensional model on the basis of the facial muscular actions. Combinations of the consonants and vowels make 14,364 syllables. The vowels dominate most lip shapes, while the consonants determine a part of them. To determine the lip shapes, this paper investigates all the syllables and classifies the lip-shape patterns according to the vowels and consonants. As a result, the lip shapes are classified into 8 patterns for the vowels and 2 patterns for the consonants. The paper then determines synthesis rules for the classified lip-shape patterns. This method allows us to obtain natural facial images with various facial expressions and lip-shape patterns.

  • PDF

Interactive Haptic Deformation and Material Property Modeling Algorithm (인터랙티브 햅틱 변형 및 재질감 모델링 알고리즘)

  • Lee, Beom-Chan;Kim, Jong-Phil;Park, Hye-Shin;Ryu, Je-Ha
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.1-7 / 2007
  • This paper proposes an algorithm for directly deforming real face data acquired with a 3D scanner through haptic interaction and for modeling its material properties. Building on a graphics-hardware-based haptic rendering algorithm, the proposed method deforms the acquired 2.5D face data with a mass-spring model and models the material properties of the face (elasticity, friction, roughness). For efficient search of the deformation region during haptic deformation, a nearest-neighbor search was implemented with a k-d tree, a space-partitioning structure; for realistic force computation, a mass-spring model was applied at each point to compute the reaction force and the deformation of the object. In addition, to model the material properties, we present a method for editing the roughness, elasticity, and friction of a virtual object using a Depth Image Based Representation (DIBR), and propose an algorithm that applies the edited material properties directly to the object surface during rendering.

  • PDF
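The force computation described in the abstract can be illustrated as follows. The paper accelerates the nearest-vertex search with a k-d tree; a brute-force search stands in for it here, and the single Hooke's-law spring is a simplification of the per-point mass-spring model:

```python
import numpy as np

def reaction_force(vertices, probe, k_spring=50.0):
    """Haptic reaction force sketch: find the surface vertex nearest
    the haptic probe (the paper uses a k-d tree for this search; a
    brute-force scan stands in for it here) and return a Hooke's-law
    spring force pulling the probe back toward the surface."""
    d = np.linalg.norm(vertices - probe, axis=1)
    i = int(np.argmin(d))               # nearest-neighbor search
    displacement = vertices[i] - probe  # penetration vector
    return k_spring * displacement      # F = k * x (spring model)

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(reaction_force(verts, np.array([0.1, 0.0, 0.0])))  # → [-5.  0.  0.]
```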

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions:PartB / v.12B no.7 s.103 / pp.795-802 / 2005
  • In this paper, we present a facial expression recognition-synthesis system that automatically recognizes 7 basic emotions and renders a face in a non-photorealistic style on a PDA. For recognition of the facial expressions, we first detect the face area within the image acquired from the camera, and then apply a normalization procedure for geometric and illumination corrections. To classify a facial expression, we found that combining Gabor wavelets with the enhanced Fisher model gives the best result; in our case, the output is a set of 7 emotional weightings. This weighting information, transmitted to the PDA via a mobile network, is used for non-photorealistic facial expression animation. To render a 3-D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves expresses the timing of an expression more effectively than the linear interpolation method.
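The last observation, that emotional curves time an expression better than linear interpolation, can be illustrated with a simple ease-in/ease-out curve; the smoothstep polynomial below is a hypothetical stand-in for the paper's actual emotional curves:

```python
def linear(t):
    """Linear interpolation of an expression weight over [0, 1]."""
    return t

def emotional_curve(t):
    """A smoothstep-style ease-in/ease-out curve, a hypothetical
    stand-in for the paper's emotional curves: the expression starts
    and ends gently instead of moving at constant speed."""
    return t * t * (3 - 2 * t)

def blend_weight(t, curve):
    """Weight of the target expression at normalized time t."""
    t = max(0.0, min(1.0, t))  # clamp to the animation interval
    return curve(t)

# Midway through, both give 0.5, but the curve moves slowly at the
# ends and fastest in the middle.
print(blend_weight(0.5, linear), blend_weight(0.5, emotional_curve))  # → 0.5 0.5
print(round(emotional_curve(0.1), 4))  # → 0.028
```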