• Title/Summary/Keyword: 3D Face Model Generation System (3D 얼굴 모델 생성 시스템)


A 3D Face Reconstruction Method Robust to Errors of Automatic Facial Feature Point Extraction (얼굴 특징점 자동 추출 오류에 강인한 3차원 얼굴 복원 방법)

  • Lee, Youn-Joo;Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.1 / pp.122-131 / 2011
  • A widely used single-image 3D face reconstruction method, the 3D morphable shape model, reconstructs an accurate 3D facial shape when the 2D facial feature points are correctly extracted from the input face image. However, when user cooperation is not available, as in a real-time 3D face reconstruction system, this method is vulnerable to errors in automatic facial feature point extraction. To solve this problem, we automatically classify the extracted facial feature points into two groups, erroneous and correct, and then reconstruct the 3D facial shape using only the correctly extracted points. The experimental results showed that the 3D reconstruction performance of the proposed method was remarkably improved compared to that of a previous method which does not consider the errors of automatic facial feature point extraction.
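The robust-fitting idea in this abstract can be sketched in a few lines: fit a linear shape model to all detected feature points, flag points with large residuals as extraction errors, and refit using only the rest. Everything below (the basis `B`, the point counts, the outlier offsets, the residual threshold) is hypothetical toy data, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_points, n_params = 40, 5
B = rng.normal(size=(n_points, n_params))       # shape basis (hypothetical)
mean_shape = rng.normal(size=n_points)          # mean shape
alpha_true = rng.normal(size=n_params)          # ground-truth coefficients

observed = mean_shape + B @ alpha_true
bad = rng.choice(n_points, size=6, replace=False)   # simulated extraction errors
observed[bad] += 25.0

def fit(idx):
    """Least-squares fit of model coefficients using the given points."""
    a, *_ = np.linalg.lstsq(B[idx], observed[idx] - mean_shape[idx], rcond=None)
    return a

# 1) initial fit with all points, 2) classify by residual, 3) refit with inliers
alpha0 = fit(np.arange(n_points))
residual = np.abs(mean_shape + B @ alpha0 - observed)
inliers = np.where(residual < 3 * np.median(residual))[0]
alpha1 = fit(inliers)

err_all = np.linalg.norm(alpha0 - alpha_true)
err_inl = np.linalg.norm(alpha1 - alpha_true)
print(err_inl < err_all)   # the refit on correct points is closer to ground truth
```

The threshold on the residual is the crude stand-in for the paper's erroneous/correct classification step; any classifier that separates the two groups would slot into the same pipeline.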

Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi Jung-Ju;Kim Dong-Sun;Lee In-Kwon
    • Journal of KIISE:Computer Systems and Theory / v.32 no.1 / pp.39-48 / 2005
  • According to traditional 2D animation techniques, anticipation makes an animation much more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation. Our approach assumes that an anticipatory facial expression can be found within an existing facial animation if it is long enough. Vertices of the face model are classified into a set of components using principal component analysis applied directly to the given key-framed and/or motion-captured facial animation data. The vertices in a single component have similar directions of motion in the animation. For each component, the animation is examined to find an anticipation effect for the given facial expression. One of those anticipation effects is selected as the best anticipation effect, which preserves the topology of the face model. The best anticipation effect is automatically blended with the original facial animation while preserving the continuity and the entire duration of the animation. We show experimental results for given motion-captured and key-framed facial animations. This paper addresses part of a broader subject: applying the principles of traditional 2D animation techniques to 3D animation. We show how to incorporate anticipation into 3D facial animation. Animators can produce 3D facial animation with anticipation simply by selecting the facial expression in the animation.
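The vertex-grouping step can be sketched as follows: take each vertex's trajectory over the animation, extract its dominant motion direction with PCA (here, an SVD per trajectory), and group vertices whose directions are similar. The two synthetic motion groups and the cosine threshold are hypothetical, not the paper's data or parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
frames = 60

# Two synthetic groups: vertices moving mostly along +x vs mostly along +y.
t = np.linspace(0, 2 * np.pi, frames)
traj_x = np.outer(np.sin(t), [1.0, 0.05, 0.0])    # (frames, 3)
traj_y = np.outer(np.sin(t), [0.0, 1.0, 0.05])

trajectories = []                                  # one (frames, 3) trajectory per vertex
for _ in range(10):
    trajectories.append(traj_x + 0.01 * rng.normal(size=(frames, 3)))
for _ in range(10):
    trajectories.append(traj_y + 0.01 * rng.normal(size=(frames, 3)))

def dominant_direction(traj):
    """First principal component of a vertex's displacement over time."""
    centered = traj - traj.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    d = vt[0]
    return d if d[np.argmax(np.abs(d))] > 0 else -d   # fix the sign ambiguity

dirs = np.array([dominant_direction(tr) for tr in trajectories])

# Greedy grouping: a vertex joins a component if its direction is close
# (cosine similarity above a threshold) to the component's seed direction.
components = []
for i, d in enumerate(dirs):
    for comp in components:
        if np.dot(d, dirs[comp[0]]) > 0.9:
            comp.append(i)
            break
    else:
        components.append([i])

print(len(components))   # the two synthetic motion groups are recovered
```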

The Facial Expression Controller for 3D Avatar Animation working on a Smartphone (스마트폰기반 3D 아바타 애니메이션을 위한 다양한 얼굴표정 제어기 응용)

  • Choi, In-Ho;Lee, Sang-Hoon;Park, Sang-Il;Kim, Yong-Guk
    • Proceedings of the Korean Information Science Society Conference / 2012.06c / pp.323-325 / 2012
  • We propose a method and application for synthesizing, controlling, and animating arbitrary facial expressions with a smartphone-based 3D avatar. A data set of arbitrary expressions for the avatar is processed with PCA, and controller axes are generated from the six basic human expressions. With the resulting controller, we propose a system that can generate and animate an arbitrary sequence of expressions at times specified by the user. The controller, whose main advantage is fast computation, was ported to the smartphone environment, and using it we implemented a system that applies various facial expressions to a model-walking motion.
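The PCA-based controller can be sketched like this: run PCA on a set of expression meshes, project six basic-expression meshes into the PCA space to obtain controller axes, then synthesize a new expression as a weighted blend of those axes. All sizes and data below are synthetic placeholders, not the paper's avatar data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_verts, n_samples = 300, 50

neutral = rng.normal(size=n_verts * 3)
data = neutral + rng.normal(scale=0.1, size=(n_samples, n_verts * 3))

# PCA of the expression data set via SVD
mean = data.mean(axis=0)
_, s, vt = np.linalg.svd(data - mean, full_matrices=False)
basis = vt[:10]                                      # keep 10 components

basic = neutral + rng.normal(scale=0.1, size=(6, n_verts * 3))   # 6 basic expressions
axes = (basic - mean) @ basis.T                      # controller axes in PCA space

def synthesize(weights):
    """Blend the six controller axes and map back to vertex space."""
    coeff = np.asarray(weights) @ axes               # 10-dim PCA coefficients
    return mean + coeff @ basis

face = synthesize([0.5, 0, 0, 0.5, 0, 0])
print(face.shape)                                    # a full vertex vector
```

Synthesis is two small matrix products per frame, which is the "fast computation" property that makes this kind of controller practical on a phone.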

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions:PartB / v.12B no.7 s.103 / pp.795-802 / 2005
  • In this paper, we present a facial expression recognition-synthesis system that automatically recognizes 7 basic emotions and renders a face in a non-photorealistic style on a PDA. For the recognition of facial expressions, we first detect the face area within the image acquired from the camera. Then, a normalization procedure is applied to it for geometric and illumination corrections. To classify a facial expression, we found that combining Gabor wavelets with the enhanced Fisher model gives the best result. In our case, the output is a weighting over the 7 emotions. This weighting information, transmitted to the PDA via a mobile network, is used for non-photorealistic facial expression animation. To render a 3-D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves is more effective in expressing the timing of an expression compared to the linear interpolation method.
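The feature step of the pipeline can be sketched as a small Gabor filter bank applied to a face image, followed by a two-class Fisher (linear discriminant) direction computed in closed form. The filter parameters and images are hypothetical, and the paper's "enhanced Fisher model" adds a PCA stage omitted here.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def gabor_kernel(theta, freq, sigma=3.0, size=15):
    """Real part of a Gabor kernel at orientation theta and frequency freq."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

bank = [gabor_kernel(t, f) for t in np.linspace(0, np.pi, 4, endpoint=False)
                           for f in (0.1, 0.2)]

def features(img):
    """Mean absolute filter response per kernel (a crude Gabor feature)."""
    out = []
    for k in bank:
        pad = np.zeros_like(img)
        pad[:k.shape[0], :k.shape[1]] = k
        out.append(np.abs(np.real(ifft2(fft2(img) * fft2(pad)))).mean())
    return np.array(out)

rng = np.random.default_rng(3)
class_a = [rng.normal(size=(32, 32)) for _ in range(5)]          # e.g. "neutral"
class_b = [rng.normal(size=(32, 32)) + np.sin(np.arange(32) * 0.8) for _ in range(5)]

Xa = np.stack([features(i) for i in class_a])
Xb = np.stack([features(i) for i in class_b])

# Fisher direction w = Sw^-1 (mu_a - mu_b), regularized for stability
Sw = np.cov(Xa.T) + np.cov(Xb.T)
w = np.linalg.solve(Sw + 1e-6 * np.eye(len(bank)), Xa.mean(0) - Xb.mean(0))
print(w.shape)   # one discriminant direction over the 8 Gabor features
```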

3D Figure Creation System Based on Content-Awareness for 3D Printing (3D 프린팅을 위한 콘텐츠 인지 기반 3D 개인 피규어 생성 시스템)

  • Lim, Seong-Jae;Hwang, Bon-Woo;Yoon, Seung-Uk;Jeon, Hye-Ryeong;Park, Chang-Joon;Choi, Jin-Sung
    • Journal of the Korea Computer Graphics Society / v.21 no.3 / pp.11-16 / 2015
  • We present a system for generating 3D personalized figures. The system provides content-aware modification and combination functions for 3D figure models. The integrity of the 3D model must be guaranteed when slicing it for 3D printing. In addition, for 3D printing one generally prints a hollow model to save cost and time while preserving the integrity of the print. This paper proposes an automatic algorithm that creates individual 3D figures from a depth sensor, together with easy UI functions for deformation, thickness adjustment, and content-aware combination of the generated figure models. Our proposed method maintains the unique features of the generated 3D figures and ensures successful 3D printing.
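One simple way to realize the hollow-model step mentioned above is voxel erosion: keep only voxels within a given wall thickness of the surface and discard the rest. This is a toy sketch on a solid cube, not the authors' pipeline, which works on mesh models.

```python
import numpy as np

def hollow(solid, thickness):
    """Remove interior voxels farther than `thickness` from the boundary."""
    interior = solid.copy()
    for _ in range(thickness):
        # a voxel stays interior only if all 6 neighbors are filled (erosion)
        p = np.pad(interior, 1)
        interior = (p[2:, 1:-1, 1:-1] & p[:-2, 1:-1, 1:-1] &
                    p[1:-1, 2:, 1:-1] & p[1:-1, :-2, 1:-1] &
                    p[1:-1, 1:-1, 2:] & p[1:-1, 1:-1, :-2] & interior)
    return solid & ~interior

solid = np.ones((10, 10, 10), dtype=bool)   # a solid 10x10x10 block
shell = hollow(solid, 2)                    # 2-voxel-thick walls
print(solid.sum(), shell.sum())             # interior voxels are removed
```

The wall-thickness parameter plays the same role as the thickness adjustment in the paper's UI: too thin and the print fails, too thick and material is wasted.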

A Deep Learning-Based Face Mesh Data Denoising System (딥 러닝 기반 얼굴 메쉬 데이터 디노이징 시스템)

  • Roh, Jihyun;Im, Hyeonseung;Kim, Jongmin
    • Journal of IKEEE / v.23 no.4 / pp.1250-1256 / 2019
  • Although one can easily generate real-world 3D mesh data using a 3D printer or a depth camera, the generated data inevitably includes unnecessary noise. Therefore, mesh denoising is essential to obtain intact 3D mesh data. However, conventional mathematical denoising methods require preprocessing and often eliminate some important features of the 3D mesh. To address this problem, this paper proposes a deep learning based 3D mesh denoising method. Specifically, we propose a convolution-based autoencoder model consisting of an encoder and a decoder. The convolution operation applied to the mesh data performs denoising considering the relationship between each vertex constituting the mesh data and the surrounding vertices. When the convolution is completed, a sampling operation is performed to improve the learning speed. Experimental results show that the proposed autoencoder model produces faster and higher quality denoised data than the conventional methods.
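The denoising idea above can be illustrated without a deep learning framework by its linear special case: a linear autoencoder's optimum is a PCA projection, so projecting noisy vertex vectors onto the top principal components already removes noise lying outside the shape subspace. This is a simplified stand-in for the paper's convolution-based model, run on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 200, 30, 3                # samples, vertex-vector size, latent size

clean = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))   # rank-k "meshes"
noisy = clean + 0.3 * rng.normal(size=(n, d))

# Linear "encoder/decoder": top-k principal components of the noisy data
mean = noisy.mean(axis=0)
_, _, vt = np.linalg.svd(noisy - mean, full_matrices=False)
encode = vt[:k]                                             # (k, d) encoder
denoised = (noisy - mean) @ encode.T @ encode + mean        # decode back

before = float(np.mean((noisy - clean) ** 2))
after = float(np.mean((denoised - clean) ** 2))
print(after < before)   # the projection strips the off-subspace noise
```

The convolutional model in the paper replaces this global linear bottleneck with local operations over neighboring vertices, which is what lets it preserve fine mesh features.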

A Comic Facial Expression Method for Intelligent Avatar Communications in the Internet Cyberspace (인터넷 가상공간에서 지적 아바타 통신을 위한 코믹한 얼굴 표정의 생성법)

  • 이용후;김상운;청목유직
    • Journal of the Institute of Electronics Engineers of Korea CI / v.40 no.1 / pp.59-73 / 2003
  • As a means of overcoming the linguistic barrier between different languages on the Internet, a new sign-language communication system based on CG animation techniques has been developed and proposed. In that system, the joint angles of the arms and hands corresponding to gestures, as a non-verbal communication tool, were considered. Emotional expression, however, can also play an important role in communication. In particular, a comic expression is more efficient than a realistic facial expression, and the movements of the cheeks and jaws are more important AUs than those of the eyebrows, eyes, mouth, etc. Therefore, in this paper, we designed a 3D emotion editor using a 2D model, and we extract the AUs (called PAUs here) that play a principal role in expressing emotions. We also propose a method of generating universal emotional expressions with avatar models that have different vertex structures. Here, we employed a method of dynamically adjusting the AU movements according to emotional intensities. The proposed system is implemented with Visual C++ and Open Inventor on Windows platforms. Experimental results show the possibility that the system could be used as a non-verbal communication means to overcome the linguistic barrier.
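The intensity-scaled AU blending described above can be sketched as follows: each PAU stores a displacement field over the model's vertices, and an emotional expression scales those displacements by the emotion's intensity before adding them to the neutral face. The PAU names, displacement fields, and the "joy" recipe below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n_verts = 100

neutral = rng.normal(size=(n_verts, 3))
pau = {                                  # displacement field per PAU (hypothetical)
    "cheek_raise": rng.normal(scale=0.02, size=(n_verts, 3)),
    "jaw_drop": rng.normal(scale=0.02, size=(n_verts, 3)),
}
emotion = {"cheek_raise": 1.0, "jaw_drop": 0.4}   # e.g. a "joy" recipe

def apply_emotion(base, recipe, intensity):
    """Scale each PAU displacement by the emotion intensity and blend."""
    face = base.copy()
    for name, weight in recipe.items():
        face += intensity * weight * pau[name]
    return face

half = apply_emotion(neutral, emotion, 0.5)
full = apply_emotion(neutral, emotion, 1.0)
print(np.allclose(apply_emotion(neutral, emotion, 0.0), neutral))  # zero intensity = neutral
```

Because the displacements are defined per PAU rather than per avatar, the same recipe transfers across avatar models once each model supplies its own PAU displacement fields, which is the universality the abstract describes.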

Virtual Make-up System Using Light and Normal Map Approximation (조명 및 법선벡터 지도 추정을 이용한 사실적인 가상 화장 시스템)

  • Yang, Myung Hyun;Shin, Hyun Joon
    • Journal of the Korea Computer Graphics Society / v.21 no.3 / pp.55-61 / 2015
  • In this paper, we introduce a method to efficiently synthesize realistic make-up effects on input images. In particular, we focus on the shading of the make-up effects due to the lighting and face curvature. By doing this, we can realistically synthesize a wider range of effects than the previous methods. To do this, we estimate the lighting information together with the normal vectors at all pixels over the face region in the input image. Since previous methods that compute lighting information and normal vectors require relatively heavy computation, we introduce an approach that approximates the lighting information using a cascade pose regression process, and the normal vectors by transforming, rendering, and warping a standard 3D face model. The proposed method consumes much less computation time than the previous methods. In our experiments, we show that the proposed approximation technique can produce natural-looking virtual make-up effects.
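To see why the estimated light and normal map matter, consider the final compositing step: once a light direction and per-pixel normals are available, the make-up layer can be shaded with the same Lambertian term as the face, so it does not look pasted on. The normals, light direction, and colors below are hypothetical, and the shading model is a deliberately minimal stand-in for the paper's.

```python
import numpy as np

rng = np.random.default_rng(6)
h, w = 8, 8

normals = rng.normal(size=(h, w, 3))
normals /= np.linalg.norm(normals, axis=2, keepdims=True)
light = np.array([0.0, 0.0, 1.0])                   # estimated light direction

shading = np.clip(normals @ light, 0.0, 1.0)        # Lambertian term per pixel

skin = np.full((h, w, 3), 0.8)                      # face albedo
makeup = np.array([0.7, 0.3, 0.4])                  # make-up albedo
alpha = 0.6                                         # make-up opacity

# Blend albedos first, then apply the shared shading so lighting stays consistent.
result = ((1 - alpha) * skin + alpha * makeup) * shading[..., None]
print(result.shape)
```

Blending the unlit albedos and shading the mixture once is what keeps the make-up consistent with the face's illumination; shading the two layers separately would break that consistency.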

Development of a prototype simulator for dental education (치의학 교육을 위한 프로토타입 시뮬레이터의 개발)

  • Mi-El Kim;Jaehoon Sim;Aein Mon;Myung-Joo Kim;Young-Seok Park;Ho-Beom Kwon;Jaeheung Park
    • The Journal of Korean Academy of Prosthodontics / v.61 no.4 / pp.257-267 / 2023
  • Purpose. The purpose of the study was to fabricate a prototype robotic simulator for dental education, to test whether it could simulate mandibular movements, and to assess the possibility of the simulator responding to stimuli during dental practice. Materials and methods. A virtual simulator model was developed based on segmentation of the hard tissues using cone-beam computed tomography (CBCT) data. The simulator frame was 3D printed using polylactic acid (PLA) material, and dentiforms and silicone face skin were also inserted. Servo actuators were used to control the movements of the simulator, and the simulator's response to dental stimuli was created by pressure and water level sensors. A water level test was performed to determine the specific threshold of the water level sensor. The mandibular movements and mandibular range of motion of the simulator were tested through computer simulation and the actual model. Results. The prototype robotic simulator consisted of an operational unit, an upper body with an electric device, and a head with a temporomandibular joint (TMJ) and dentiforms. The TMJ of the simulator was capable of two degrees of freedom, implementing rotational and translational movements. In the water level test, the specific threshold of the water level sensor was 10.35 ml. The mandibular range of motion of the simulator was 50 mm in both the computer simulation and the actual model. Conclusion. Although further advancements are still required to improve its efficiency and stability, the upper-body prototype simulator has the potential to be useful in dental practice education.
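The stimulus-response logic described above reduces to a threshold test: the simulator reacts once the accumulated water volume read from the level sensor crosses the threshold found in the water level test (10.35 ml). The sensor interface sketched below is hypothetical; only the threshold value comes from the abstract.

```python
THRESHOLD_ML = 10.35  # specific threshold from the paper's water level test

def should_respond(readings_ml):
    """Trigger a response once the total sensed water volume exceeds the threshold."""
    return sum(readings_ml) > THRESHOLD_ML

print(should_respond([3.0, 4.0, 2.0]))   # 9.0 ml -> False
print(should_respond([3.0, 4.0, 4.0]))   # 11.0 ml -> True
```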