• Title/Summary/Keyword: 3차원 얼굴 (3D face)


A 3D Face Reconstruction Based on the Symmetrical Characteristics of Side View 2D Face Images (측면 2차원 얼굴 영상들의 대칭성을 이용한 3차원 얼굴 복원)

  • Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.1 / pp.103-110 / 2011
  • A widely used 3D face reconstruction method, structure from motion (SfM), performs robustly when frontal, left, and right face images are all available. However, it cannot correctly reconstruct self-occluded facial parts when only side-view face images from one side are used, because only partial facial feature points are available in that case. To solve this problem, the proposed method exploits the bilateral symmetry of human faces as a constraint to generate the mirrored facial feature points, and uses both the input feature points and the generated ones to reconstruct a 3D face. For quantitative evaluation, ground-truth 3D faces were obtained from a 3D face scanner and compared with the reconstructed 3D faces. The experimental results show that the proposed reconstruction method, which uses both sets of facial feature points, outperforms the previous method based only on the partial facial feature points.
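The key step in this abstract, mirroring the visible feature points across the face's symmetry plane so that SfM has a full point set to work with, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the symmetry plane is assumed to be x = 0 in a face-centered frame, and the example landmarks are hypothetical.

```python
import numpy as np

def mirror_feature_points(points_3d, sym_plane_normal=np.array([1.0, 0.0, 0.0]),
                          sym_plane_point=np.zeros(3)):
    """Reflect visible 3D landmarks across an assumed facial symmetry plane.

    points_3d: (N, 3) landmark positions recovered from the visible side.
    Returns an (N, 3) array of mirrored landmarks for the self-occluded side.
    """
    n = sym_plane_normal / np.linalg.norm(sym_plane_normal)
    # Signed distance of each point to the plane, then reflect: p' = p - 2*d*n
    d = (points_3d - sym_plane_point) @ n
    return points_3d - 2.0 * d[:, None] * n

# Example: landmarks from side-view images (right half of the face only, hypothetical values).
right_side = np.array([[30.0, 10.0, 55.0],    # right eye corner
                       [20.0, -25.0, 60.0]])  # right mouth corner
left_side = mirror_feature_points(right_side)
full_point_set = np.vstack([right_side, left_side])  # fed back into the SfM reconstruction
```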

Design of Face Recognition Algorithm based Optimized pRBFNNs Using Three-dimensional Scanner (최적 pRBFNNs 패턴분류기 기반 3차원 스캐너를 이용한 얼굴인식 알고리즘 설계)

  • Ma, Chang-Min;Yoo, Sung-Hoon;Oh, Sung-Kwun
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.6 / pp.748-753 / 2012
  • In this paper, a face recognition algorithm is designed based on an optimized pRBFNNs pattern classifier using a three-dimensional scanner. In general, two-dimensional image-based face recognition systems extract facial features from the gray levels of images, so environmental variations such as natural sunlight, artificial lighting, and face pose degrade their performance. The proposed algorithm uses a three-dimensional scanner to overcome this drawback of two-dimensional face recognition. First, the face shape is scanned with the three-dimensional scanner, and the pose of the scanned face is converted to a frontal view through a pose-compensation process. Second, depth data of the face are extracted using the point signature method. Finally, the recognition performance is verified using the optimized pRBFNNs classifier, which is suited to high-dimensional pattern recognition problems.
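As a rough illustration of the classification stage only, the sketch below trains a plain Gaussian RBF network on depth-feature vectors (e.g., flattened point-signature descriptors). The paper's classifier is an optimized polynomial-based RBFNN (pRBFNNs); this simplified stand-in, with randomly sampled centers and a least-squares output layer, is only meant to convey the structure, and all names and parameters are illustrative.

```python
import numpy as np

def train_rbf_classifier(X, y, n_centers=10, sigma=1.0, n_classes=None, seed=0):
    """Fit a simple Gaussian RBF network: sampled centers, least-squares output weights."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    # Hidden layer: Gaussian activations of distances to the centers
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    H = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    # One-hot targets, output weights by least squares
    n_classes = n_classes or int(y.max()) + 1
    T = np.eye(n_classes)[y]
    W = np.linalg.lstsq(H, T, rcond=None)[0]
    return centers, W

def predict_rbf(X, centers, W, sigma=1.0):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    H = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    return np.argmax(H @ W, axis=1)

# X: (n_samples, n_features) depth/point-signature features; y: integer subject labels
```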

3D Facial Modeling and Synthesis System for Realistic Facial Expression (자연스러운 표정 합성을 위한 3차원 얼굴 모델링 및 합성 시스템)

  • 심연숙;김선욱;한재현;변혜란;정창섭
    • Korean Journal of Cognitive Science / v.11 no.2 / pp.1-10 / 2000
  • Research on realistic facial animation, in which humans and computers communicate through the face, has grown recently. The human face is the part of the body we use to recognize individuals and is an important communication channel for understanding inner states such as emotion. To provide an intelligent interface, computer facial animation should look human-like when talking and expressing itself, so recent facial modeling and animation research focuses on realism. In this article, we propose a facial modeling and animation method for realistic facial synthesis. A 3D facial model for an arbitrary face can be built from a generic facial model; for a more accurate and natural result, we construct a Korean generic facial model. Facial synthesis is then driven by the physical characteristics of real facial muscle and skin. Many applications, such as teleconferencing, education, and movies, can be built on this approach.


Facial Feature Localization from 3D Face Image using Adjacent Depth Differences (인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of KIISE: Software and Applications / v.31 no.5 / pp.617-624 / 2004
  • This paper describes a new facial feature localization method that uses Adjacent Depth Differences (ADD) on the 3D facial surface. In general, humans judge how deep or shallow a region is relative to its surroundings by comparing neighboring depth information across regions of an object; the larger the depth difference between regions, the more easily each region is recognized. Using this principle, facial feature extraction becomes easier, more reliable, and faster. 3D range images are used as input, and ADD values are obtained by differencing two range values separated by a fixed coordinate offset, in both the horizontal and vertical directions. The ADD values and the input image are analyzed to extract facial features, and the nose region, the most prominent feature on a 3D facial surface, is localized effectively and accurately.
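Under the reading above, ADD is essentially a shifted difference of the range image along each axis; a minimal numpy sketch might look like this. The offset `d` and the nose-detection heuristic are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np

def adjacent_depth_differences(range_img, d=5):
    """Compute horizontal and vertical Adjacent Depth Differences (ADD).

    range_img: 2D array of depth values (a 3D range image).
    d: pixel offset between the two samples being differenced (assumed value).
    """
    add_h = np.zeros_like(range_img)
    add_v = np.zeros_like(range_img)
    add_h[:, :-d] = range_img[:, d:] - range_img[:, :-d]   # horizontal difference
    add_v[:-d, :] = range_img[d:, :] - range_img[:-d, :]   # vertical difference
    return add_h, add_v

def locate_nose_candidate(range_img, d=5):
    """Very rough nose localization: the nose tip tends to maximize local depth prominence."""
    add_h, add_v = adjacent_depth_differences(range_img, d)
    prominence = np.abs(add_h) + np.abs(add_v)
    return np.unravel_index(np.argmax(prominence), range_img.shape)
```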

A Three-Dimensional Facial Modeling and Prediction System (3차원 얼굴 모델링과 예측 시스템)

  • Gu, Bon-Gwan;Jeong, Cheol-Hui;Cho, Sun-Young;Lee, Myeong-Won
    • Journal of the Korea Computer Graphics Society / v.17 no.1 / pp.9-16 / 2011
  • In this paper, we describe the development of a system for generating a 3-dimensional human face and predicting its appearance as it ages over subsequent years, using 3D-scanned facial data and photo images. It is composed of 3-dimensional texture mapping functions, a facial definition parameter input tool, and 3-dimensional facial prediction algorithms. With the texture mapping functions, we can generate a new model of a given face at a specified age using a scanned facial model and photo images. The texture mapping uses three photo images: a front image and two side images of a face. The facial definition parameter input tool is the user interface required for texture mapping; it matches facial feature points between the photo images and the 3D-scanned facial model in order to obtain high-resolution material values. We have calculated material values for future facial models and predicted future facial models in high resolution through a statistical analysis of 100 scanned facial models.
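The abstract does not spell out the "statistical analysis" used for prediction; one simple reading, regressing each flattened coordinate (or material value) linearly against age over a training set of scanned models, can be sketched as follows. This is purely illustrative and not the paper's actual model.

```python
import numpy as np

def fit_aging_model(face_vectors, ages):
    """Fit a per-coordinate linear model: face(age) ~ base + age * direction.

    face_vectors: (N, D) flattened vertex coordinates (or material values) of N scanned faces.
    ages:         (N,) age of each scanned subject.
    """
    A = np.column_stack([np.ones_like(ages, dtype=float), ages.astype(float)])
    coeffs, *_ = np.linalg.lstsq(A, face_vectors, rcond=None)  # (2, D): [base; per-year change]
    return coeffs

def predict_face(coeffs, target_age):
    base, direction = coeffs
    return base + target_age * direction   # flattened predicted face at target_age
```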

Feature-Based Deformation of 3D Facial Model Using Radial Basis Function (Radial Basis Function 을 이용한 특징점 기반 3 차원 얼굴 모델의 변형)

  • Kwon Oh-Ryun;Min Kyong-Pil;Chun Jun-Chul
    • Proceedings of the Korea Information Processing Society Conference / 2006.05a / pp.715-718 / 2006
  • Avatar-based facial animation is widely applied in fields such as virtual reality and entertainment. Methods for generating facial animation fall broadly into geometric deformation methods, which deform a 3D model directly, and image deformation methods, which use warping or morphing of 2D images. Among the geometric approaches, one way to deform a 3D model is to use a Radial Basis Function (RBF). An RBF produces a smooth deformation of the model: when an arbitrary point on the model is moved, the affected vertices move with it naturally, yielding natural animation. In this study, we propose a method that uses RBFs to generate facial expressions by geometrically deforming a 3D face mesh model. For the deformation, feature points are placed on the facial features (eyes, mouth, and chin), and the face model is locally clustered to determine the region influenced by each feature point. We propose a method that applies this clustering to each feature point's region of influence and uses RBFs to generate natural facial expressions.
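A bare-bones version of the RBF deformation the abstract describes, where a few moved feature points drive the rest of the mesh vertices, could be sketched like this. The Gaussian kernel is one common choice, the clustering step is simplified away, and all parameters are illustrative.

```python
import numpy as np

def rbf_deform(vertices, feature_pts, displacements, sigma=10.0):
    """Deform mesh vertices from feature-point displacements via Gaussian RBF interpolation.

    vertices:      (V, 3) mesh vertex positions
    feature_pts:   (K, 3) control/feature point positions (eyes, mouth, chin, ...)
    displacements: (K, 3) target displacement of each feature point
    """
    def gaussian(r):
        return np.exp(-(r ** 2) / (2 * sigma ** 2))

    # Solve K @ W = displacements for the RBF weights W
    K = gaussian(np.linalg.norm(feature_pts[:, None] - feature_pts[None, :], axis=-1))
    W = np.linalg.solve(K + 1e-8 * np.eye(len(feature_pts)), displacements)

    # Evaluate the interpolated displacement field at every mesh vertex
    Phi = gaussian(np.linalg.norm(vertices[:, None] - feature_pts[None, :], axis=-1))
    return vertices + Phi @ W
```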


3D Face Modeling Using Mesh Simplification (메쉬 간략화를 이용한 3차원 얼굴모델링)

  • 이현철;허기택
    • The Journal of the Korea Contents Association / v.3 no.4 / pp.69-76 / 2003
  • Recently, research on 3D animation in computer graphics has been very active, and one of its important areas is the animation of human beings. The creation and animation of 3D facial models has traditionally depended on animators' frame-by-frame manual work, which requires much effort and time as well as various hardware and software. In this paper, we implement a way to generate a 3D human face model easily and quickly using only frontal face images, and we propose a methodology for simplifying the mesh data of the 3D generic model.


A Gaze Detection Technique Using a Monocular Camera System (단안 카메라 환경에서의 시선 위치 추적)

  • 박강령;김재희
    • The Journal of Korean Institute of Communications and Information Sciences / v.26 no.10B / pp.1390-1398 / 2001
  • Gaze detection is the technique of determining which point on the monitor a user is looking at. To detect the gaze position, this paper first extracts the facial region and facial feature points from a 2D camera image. When the user initially looks at three points on the monitor, the facial feature points move, and from these movements the 3D positions of the feature points are estimated using camera calibration and parameter estimation. Afterwards, when the user looks at another point on the monitor, the changed 3D positions of the feature points are obtained using 3D motion estimation and an affine transform. From these, the moved facial feature points and the facial plane composed of them are computed, and the gaze position on the monitor is obtained from the normal of this plane. In experiments with a 19-inch monitor and a user-to-monitor distance of about 50-70 cm, a gaze position error of about 2.08 inches was achieved. This is comparable to the gaze tracking performance (5.08 cm error) reported in Rikert's paper. However, Rikert's method has the drawback that the distance between the monitor and the user's face must be kept fixed, and its tracking error increases when the face moves naturally (rotation and translation). In addition, their method assumes that there are no complex objects in the background behind the user's face and requires considerable processing time. In contrast, the gaze detection method proposed in this paper can be used in an office environment with a complex background, and processing takes less than about 3 seconds on a 200 MHz Pentium PC.
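The geometric step at the heart of this method, fitting a plane to the tracked 3D facial feature points and intersecting its normal with the monitor plane, can be sketched as below. The monitor is assumed here to be the plane z = 0, which is a simplification of the calibrated setup in the paper.

```python
import numpy as np

def gaze_point_on_monitor(feature_pts_3d, monitor_z=0.0):
    """Estimate the on-screen gaze point from 3D facial feature points.

    feature_pts_3d: (N, 3) tracked facial feature points in camera/monitor coordinates.
    The monitor is assumed to lie in the plane z = monitor_z (simplified geometry).
    """
    centroid = feature_pts_3d.mean(axis=0)
    # Plane fit: the normal is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(feature_pts_3d - centroid)
    normal = vt[-1]
    if normal[2] > 0:           # orient the normal toward the monitor (decreasing z)
        normal = -normal
    # Ray from the face centroid along the facial-plane normal, intersected with z = monitor_z
    t = (monitor_z - centroid[2]) / normal[2]
    return centroid + t * normal   # (x, y, monitor_z) gaze point
```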


Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B / v.9B no.5 / pp.563-570 / 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected in the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are then used to recover the 3D shape and global motion of the object with a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
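The shape-and-motion recovery stage factorizes the matrix of 2D feature tracks; a minimal rank-3 factorization sketch in the spirit of Tomasi-Kanade is shown below. The paraperspective refinement and the metric upgrade used in the paper are omitted, so the result is only recovered up to an affine ambiguity.

```python
import numpy as np

def factorize_tracks(tracks):
    """Rank-3 factorization of 2D feature tracks into motion and shape.

    tracks: (2F, P) measurement matrix; rows are the x then y image coordinates of
            P features over F frames (as produced by the feature tracker).
    Returns (motion, shape): a (2F, 3) motion matrix and a (3, P) shape matrix,
    up to an affine ambiguity (no metric/paraperspective upgrade here).
    """
    # Register the measurements: subtract the per-frame centroid of the tracked points
    W = tracks - tracks.mean(axis=1, keepdims=True)
    # SVD and rank-3 truncation
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    S3 = np.sqrt(np.diag(S[:3]))
    motion = U[:, :3] @ S3      # per-frame affine camera rows
    shape = S3 @ Vt[:3, :]      # 3D positions of the tracked feature points
    return motion, shape
```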

Extraction and Implementation of MPEG-4 Facial Animation Parameter for Web Application (웹 응용을 위한 MPEG-4 얼굴 애니메이션 파라미터 추출 및 구현)

  • 박경숙;허영남;김응곤
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.8 / pp.1310-1318 / 2002
  • In this study, we developed a 3D facial modeler and animator that does not rely on the existing approaches based on a 3D scanner or camera. Without expensive image-input equipment, 3D models can be created easily from only front and side images. The system can animate 3D facial models through an animation server on the WWW that is independent of specific platforms and software, and it was implemented using the Java 3D API. The facial modeler detects MPEG-4 FDP (Facial Definition Parameter) feature points in the 2D input images and creates a 3D facial model by modifying a generic facial model with those points. The animator then animates and renders the 3D facial model according to MPEG-4 FAP (Facial Animation Parameters). The system can be used to generate avatars on the WWW.