• Title/Summary/Keyword: 3D Face Model


The Suggestion for Clinical Trial of Face Rejuvenation using Korean Medicine's Embedded Needle (Maesun) Based on Literature Review (매선을 활용한 한의 안면 성형 임상 연구 설계 제안 -한의 안면 성형 임상연구 동향 분석을 바탕으로-)

  • Lee, Jae-Chul;Lim, Chang-Gyu;Kim, Jung-Won;Park, Sun-Hee;Yoon, Jeong-Ho
    • The Journal of Korean Medicine Ophthalmology and Otolaryngology and Dermatology / v.26 no.2 / pp.78-87 / 2013
  • Objectives : This work aimed to review the clinical trial trends of Korean medicine's face rejuvenation and to suggest a future trial using the embedded needle (Maesun), based on evidence-based medicine's PICO model. Methods : 46 papers were retrieved from Oasis and DBPia, and 8 of them were included in the review of clinical trial trends. Based on the PICO model, the trial's patients, intervention, and outcome measurements were suggested. Results : The evidence level of the reviewed clinical trials is relatively low, because most of the study designs are case reports or case series. No study had a comparison group. Outcome measurements varied; however, a 3D face scanner was used to measure before-and-after changes of the face. Based on the review, we suggest standardization of the intervention, measurement of a normal control group, and combined 2D/3D outcome measurement of the face. Conclusions : There is strong demand for establishing the efficacy and safety of Korean medicine interventions, including face rejuvenation using the embedded needle. Meeting this demand will require more rigorous studies.

A study on an efficient prediction of welding deformation for T-joint laser welding of sandwich panel Part II : Proposal of a method to use shell element model

  • Kim, Jae Woong;Jang, Beom Seon;Kang, Sung Wook
    • International Journal of Naval Architecture and Ocean Engineering / v.6 no.2 / pp.245-256 / 2014
  • The I-core sandwich panel, which has come into wider use, is assembled using high-power $CO_2$ laser welding. Kim et al. (2013) proposed a circular-cone-type heat source model for the T-joint laser welding between the face plate and the core; it can cover the negative defocus commonly adopted in T-joint laser welding to achieve deeper penetration. In Part I, a volumetric heat source model is proposed and verified through a comparison of the melting zone on the cross section with experimental results. The proposed model can be used in heat transfer analysis and thermal elasto-plastic analysis to predict the welding deformation that occurs during laser welding. In terms of computational time, thermal elasto-plastic analysis using 3D solid elements is quite time-consuming, so a multi-layer shell element model has been employed instead. However, the conventional layered approach is not appropriate for applying the heat load at the T-joint. This paper, Part II, suggests a new method that assigns different numbers of layers to the face plate and the core in order to impose the heat load only on the face plate.
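The abstract builds on a circular-cone-type volumetric heat source for the T-joint weld. As a rough numerical illustration, the sketch below evaluates a generic conical Gaussian flux whose effective radius shrinks linearly with depth; the shape function, radii, efficiency, and power are illustrative assumptions, not the parameters of Kim et al.'s model.

```python
import numpy as np

def cone_heat_source(x, y, z, Q=2000.0, eta=0.85,
                     r_top=0.5e-3, r_bot=0.1e-3, depth=2.0e-3):
    """Volumetric heat flux q(x, y, z) [W/m^3] of a generic conical Gaussian
    heat source. The cone axis points into the face plate along +z
    (z = 0 at the surface, z = depth at the tip); the effective Gaussian
    radius shrinks linearly from r_top to r_bot, mimicking the keyhole of
    deep-penetration laser welding."""
    z = np.clip(z, 0.0, depth)
    r_c = r_top - (r_top - r_bot) * z / depth          # cone radius at depth z
    # Normalisation chosen so the volume integral of q equals eta * Q
    norm = 9.0 * eta * Q / (np.pi * depth * (r_top**2 + r_top * r_bot + r_bot**2))
    return norm * np.exp(-3.0 * (x**2 + y**2) / r_c**2)

# Peak flux on the cone axis at the face-plate surface
print(cone_heat_source(0.0, 0.0, 0.0))
```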

A Study on the Feature Point Extraction and Image Synthesis in the 3-D Model Based Image Transmission System (3차원 모델 기반 영상전송 시스템에서의 특징점 추출과 영상합성 연구)

  • 배문관;김동호;정성환;김남철;배건성
    • The Journal of Korean Institute of Communications and Information Sciences / v.17 no.7 / pp.767-778 / 1992
  • A method to extract feature points and to synthesize human facial images in a 3-D model-based coding system is discussed. Facial feature points are extracted automatically using image processing techniques and prior knowledge of the human face. A wire-frame model matched to the human face is transformed according to the motion of the extracted feature points. The synthesized image is produced by mapping the texture of the initial front-view image onto the transformed wire frame. Experimental results show that the synthesized image appears with little unnaturalness.

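The synthesis step described above (mapping the initial front-view texture onto the deformed wire frame) is essentially a piecewise-affine warp over the model's triangles. The following sketch shows that step for a single triangle with OpenCV; the data layout and float32 images are illustrative choices, not the paper's actual mesh pipeline.

```python
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, tri_src, tri_dst):
    """Map the texture inside one triangle of the initial front-view image
    (tri_src) onto the corresponding triangle of the deformed wire-frame
    rendering (tri_dst). Both images are float32 color arrays; triangles
    are lists of three (x, y) points."""
    r1 = cv2.boundingRect(np.float32([tri_src]))
    r2 = cv2.boundingRect(np.float32([tri_dst]))
    # Shift triangle coordinates into their bounding boxes
    t1 = np.float32([(p[0] - r1[0], p[1] - r1[1]) for p in tri_src])
    t2 = np.float32([(p[0] - r2[0], p[1] - r2[1]) for p in tri_dst])
    patch = src_img[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    M = cv2.getAffineTransform(t1, t2)                 # affine map between the triangles
    warped = cv2.warpAffine(patch, M, (r2[2], r2[3]),
                            flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT_101)
    mask = np.zeros((r2[3], r2[2], 3), np.float32)
    cv2.fillConvexPoly(mask, np.int32(t2), (1.0, 1.0, 1.0), cv2.LINE_AA)
    roi = dst_img[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]]
    dst_img[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] = roi * (1 - mask) + warped * mask
```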

Face Replacement under Different Illumination Condition (다른 조명 환경을 갖는 영상 간의 얼굴 교체 기술)

  • Song, Joongseok;Zhang, Xingjie;Park, Jong-Il
    • Journal of Broadcast Engineering / v.20 no.4 / pp.606-618 / 2015
  • Computer graphics (CG) has become an important technique in media content such as movies and TV. In particular, face replacement, which swaps faces between different images, has long been studied by academia and industry as a representative CG technology. In this paper, we propose a face replacement method between target and reference images under different illumination environments that does not require a 3D model. In experiments, we verified that the proposed method can naturally replace faces between reference and target images under different illumination conditions.
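The abstract does not spell out how the illumination difference is compensated, so the sketch below uses a common 3D-model-free baseline: Reinhard-style colour transfer in L*a*b* space to pull the source face toward the target image's illumination statistics, followed by Poisson (seamless) blending. It is an assumed pipeline, not the authors' exact method.

```python
import cv2
import numpy as np

def match_illumination(src_face, target_img):
    """Shift the source face's per-channel L*a*b* mean/std toward the target
    image so the replaced face matches the target's illumination."""
    src = cv2.cvtColor(src_face, cv2.COLOR_BGR2LAB).astype(np.float32)
    dst = cv2.cvtColor(target_img, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        d_mean, d_std = dst[..., c].mean(), dst[..., c].std() + 1e-6
        src[..., c] = (src[..., c] - s_mean) * (d_std / s_std) + d_mean
    return cv2.cvtColor(np.clip(src, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)

def replace_face(target_img, src_face, face_mask, center):
    """Blend the illumination-matched source face into the target image.
    face_mask is a binary (0/255) mask of the source face region and
    center is the insertion point in the target image."""
    adjusted = match_illumination(src_face, target_img)
    return cv2.seamlessClone(adjusted, target_img, face_mask, center, cv2.NORMAL_CLONE)
```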

Robust AAM-based Face Tracking with Occlusion Using SIFT Features (SIFT 특징을 이용하여 중첩상황에 강인한 AAM 기반 얼굴 추적)

  • Eom, Sung-Eun;Jang, Jun-Su
    • The KIPS Transactions: Part B / v.17B no.5 / pp.355-362 / 2010
  • Face tracking estimates the motion of a non-rigid face together with a rigid head in 3D and plays an important role in higher-level tasks such as face, facial expression, and emotion recognition. In this paper, we propose an AAM-based face tracking algorithm. AAMs have been widely used to segment and track deformable objects, but many difficulties remain; in particular, they often diverge or converge to local minima when the target object is self-occluded, partially occluded, or completely occluded. To address this problem, we utilize the scale-invariant feature transform (SIFT). SIFT is effective under self and partial occlusion because it can find correspondences between feature points despite partial loss, and its good global matching performance enables the AAM to continue tracking through complete occlusions without re-initialization. We also register and use SIFT features extracted from multi-view face images during tracking to track a face effectively across large pose changes. The proposed algorithm is validated by comparison with other algorithms under the three kinds of occlusion above.
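As a concrete illustration of the SIFT step the abstract relies on, the sketch below finds SIFT correspondences between the previously tracked face patch and the current frame with OpenCV; surviving matches under partial occlusion can then be used to re-seed the AAM fit. Variable names and the ratio threshold are illustrative.

```python
import cv2
import numpy as np

def sift_correspondences(prev_face, curr_frame, ratio=0.75):
    """Return matched point pairs (previous face patch -> current frame).
    SIFT matching tolerates the loss of some keypoints, so enough
    correspondences usually survive partial or self occlusion to
    re-localise the face without a full re-initialisation."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_face, None)
    kp2, des2 = sift.detectAndCompute(curr_frame, None)
    if des1 is None or des2 is None:
        return np.empty((0, 2), np.float32), np.empty((0, 2), np.float32)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    # Lowe's ratio test rejects ambiguous matches
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    pts_prev = np.float32([kp1[m.queryIdx].pt for m in good])
    pts_curr = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts_prev, pts_curr
```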

Automatic Generation of 3D Face Model from Trinocular Images (Trinocular 영상을 이용한 3D 얼굴 모델 자동 생성)

  • Yi, Kwang-Do;Ahn, Sang-Chul;Kwon, Yong-Moo;Ko, Han-Seok;Kim, Hyoung-Gon
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.7 / pp.104-115 / 1999
  • This paper proposes an efficient method for 3D modeling of a human face from trinocular images by reconstructing the face surface from range data. By using a trinocular camera system, we mitigate the tradeoff between the occlusion problem and the range resolution limitation that is critical in binocular camera systems. We also propose an MPC-MBS (Matching Pixel Count Multiple Baseline Stereo) area-based matching method to reduce the boundary-overreach phenomenon and to improve both accuracy and precision in matching; its computing time can be reduced significantly by removing redundancies. In the model generation step, sub-pixel-accurate surface data are obtained by 2D interpolation of the disparity values and are sampled to form regular triangular meshes. The data size of the triangular mesh model can be controlled by merging vertices that lie on the same plane within a user-defined error threshold.

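The MPC-MBS matching described above combines the multiple-baseline stereo idea (accumulating evidence across both baselines of the trinocular rig) with a matching-pixel-count cost that counts, rather than sums, the window pixels that agree within a threshold. The per-pixel sketch below illustrates that cost; the window size, threshold, and image layout are illustrative assumptions, and bounds checking is omitted for brevity.

```python
import numpy as np

def mpc_score(ref, aux_imgs, baselines, x, y, d_ref, win=5, thresh=10):
    """Matching Pixel Count accumulated over multiple baselines for one
    reference pixel (x, y) and one candidate disparity d_ref (defined on
    the first baseline). Each auxiliary image is sampled at a disparity
    scaled by its baseline ratio; counting agreeing pixels instead of
    summing squared differences is what suppresses boundary overreach.
    The disparity with the highest score wins."""
    half = win // 2
    ref_win = ref[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
    score = 0
    for img, b in zip(aux_imgs, baselines):
        d = int(round(d_ref * b / baselines[0]))       # disparity scaled by baseline
        aux_win = img[y - half:y + half + 1,
                      x - d - half:x - d + half + 1].astype(np.int32)
        score += int(np.count_nonzero(np.abs(ref_win - aux_win) < thresh))
    return score
```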

Hair Segmentation using Optimized Fully Connected Network and 3D Hair Style

  • Kim, Junghyun;Lee, Yunhwan;Chin, Seongah
    • International Journal of Advanced Culture Technology / v.9 no.4 / pp.385-391 / 2021
  • 3D modeling of the human body is an integral part of computer graphics. Several studies have addressed hair modeling, but few implement hair and face modeling together effectively. Unlike previous work, this study provides users with customized face modeling and hair modeling. For realistic hair styling, we design and realize hair segmentation using an FCN, and we select the most appropriate model by comparing PSPNet, DeepLab V3+, and MobileNet. We use the open dataset Figaro1k and, through an analysis of the iteration and epoch parameters, determine their optimized values. In addition, we experiment with external parameters such as camera location, lighting color, and the presence or absence of accessories. The environmental analysis factors of the avatar maker are set, and solutions to the problems identified during the analysis are presented.
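As a rough sketch of the kind of segmentation training the study compares, the code below fine-tunes a torchvision DeepLabV3 (MobileNetV3 backbone) head for two classes, hair and background. The backbone choice, learning rate, and loss are illustrative assumptions and do not reproduce the authors' PSPNet / DeepLab V3+ / MobileNet comparison or their Figaro1k data loader.

```python
import torch
import torchvision

# Two-class (hair / background) head on a pretrained DeepLabV3-MobileNetV3 model.
model = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(weights="DEFAULT")
model.classifier[-1] = torch.nn.Conv2d(256, 2, kernel_size=1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, masks):
    """images: (N, 3, H, W) float tensor; masks: (N, H, W) long tensor of {0, 1}."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]       # (N, 2, H, W) per-pixel class scores
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```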

Depth Image Restoration Using Generative Adversarial Network (Generative Adversarial Network를 이용한 손실된 깊이 영상 복원)

  • Nah, John Junyeop;Sim, Chang Hun;Park, In Kyu
    • Journal of Broadcast Engineering / v.23 no.5 / pp.614-621 / 2018
  • This paper proposes a method for restoring corrupted depth images captured by a depth camera through unsupervised learning with a generative adversarial network (GAN). The proposed method generates restored face depth images using a 3D morphable model convolutional neural network (3DMM CNN) with the large-scale CelebFaces Attributes (CelebA) and FaceWarehouse datasets to train a deep convolutional generative adversarial network (DCGAN). The generator and discriminator are trained with the Wasserstein distance as the loss function in a minimax game. The DCGAN then restores the lost regions of the captured facial depth images by performing a further learning procedure with the trained generator and a new loss function.
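The abstract states that the generator and discriminator are trained with the Wasserstein distance in a minimax game. The sketch below writes out those two losses in PyTorch, with a gradient-penalty term added as the usual Lipschitz stabiliser; the penalty and its weight are assumptions, not necessarily the authors' exact formulation.

```python
import torch

def wgan_losses(critic, generator, real_depth, noise, gp_weight=10.0):
    """Critic and generator losses for Wasserstein GAN training on depth maps."""
    fake_depth = generator(noise)

    # Critic maximises E[D(real)] - E[D(fake)] (we minimise the negative)
    critic_loss = critic(fake_depth.detach()).mean() - critic(real_depth).mean()

    # Gradient penalty on interpolates keeps the critic approximately 1-Lipschitz
    eps = torch.rand(real_depth.size(0), 1, 1, 1, device=real_depth.device)
    interp = (eps * real_depth + (1 - eps) * fake_depth).detach().requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    critic_loss = critic_loss + gp_weight * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

    # Generator maximises E[D(fake)], i.e. minimises its negative
    gen_loss = -critic(fake_depth).mean()
    return critic_loss, gen_loss
```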

Realistic individual 3D face modeling (사실적인 3D 얼굴 모델링 시스템)

  • Kim, Sang-Hoon
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.8 no.8 / pp.1187-1193 / 2013
  • In this paper, we present realistic 3D head modeling and facial expression systems. For 3D head modeling, we fit a generic model to create the individual head shape and then apply texture mapping. To calculate the deformation function for the generic model fitting, we determine correspondences between individual heads and the generic model, and we reconstruct the feature points in 3D from simultaneously captured images of a calibrated stereo camera. For texture mapping, we project the fitted generic model onto the image and map the texture of the predefined triangle mesh onto the generic model. To avoid extracting the wrong texture, we propose a simple method using a modified interpolation function. For generating 3D facial expressions, we use a vector-muscle-based algorithm. For more realistic expressions, we add skin deformation driven by jaw rotation to the basic vector muscle model and apply a mass-spring model. Several 3D facial expression results are shown at the end of the paper.
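The facial expression part of the system rests on the vector (linear) muscle model. The sketch below is a minimal Waters-style version of that deformation: vertices inside a cone of influence around the muscle vector are pulled toward its attachment point with angular and radial falloff. The falloff shapes and parameter values are illustrative; the jaw-rotation coupling and the mass-spring skin layer described in the abstract are omitted.

```python
import numpy as np

def vector_muscle_deform(vertices, tail, head, influence_angle=np.radians(40.0),
                         fall_start=0.3, fall_end=1.0, contraction=0.2):
    """Waters-style linear muscle: pull mesh vertices toward the muscle
    attachment point `tail` along the muscle vector tail -> head.
    `vertices` is an (N, 3) array; the return value is a deformed copy."""
    axis = head - tail
    length = np.linalg.norm(axis)
    axis_n = axis / length
    out = vertices.copy()
    for i, v in enumerate(vertices):
        d = v - tail
        dist = np.linalg.norm(d)
        if dist < 1e-8 or dist > fall_end * length:
            continue                                   # outside the radial zone of influence
        angle = np.arccos(np.clip(np.dot(d / dist, axis_n), -1.0, 1.0))
        if angle > influence_angle:
            continue                                   # outside the cone of influence
        ang_fall = np.cos(angle / influence_angle * np.pi / 2)   # 1 on axis, 0 at cone edge
        r = dist / length
        rad_fall = 1.0 if r <= fall_start else \
            np.cos((r - fall_start) / (fall_end - fall_start) * np.pi / 2)
        out[i] = v + contraction * ang_fall * rad_fall * (tail - v)
    return out
```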

Robust Watermarking Algorithm for 3D Mesh Models (3차원 메쉬 모델을 위한 강인한 워터마킹 기법)

  • 송한새;조남익;김종원
    • Journal of Broadcast Engineering / v.9 no.1 / pp.64-73 / 2004
  • A robust watermarking algorithm is proposed for 3D mesh models. The watermark is inserted into a 2D image extracted from the target 3D model; each pixel value of this image represents the distance from predefined reference points to the surface of the given 3D model, and the extracted image is defined as the "range image" in this paper. The watermark is embedded into the range image, and the watermarked 3D mesh is obtained by modifying the vertices according to the watermarked range image. The extraction procedure requires the original model: after registration between the original and watermarked models, a range image is extracted from each 3D model, and the embedded watermark is recovered from these images. Experimental results show that the proposed algorithm is robust against attacks such as rotation, translation, uniform scaling, mesh simplification, AWGN, and quantization of vertex coordinates.
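The key data structure in the scheme is the range image, whose pixels store distances from predefined reference points to the mesh surface. The sketch below builds such an image for one reference point by casting rays with trimesh; the sampling pattern (a caller-supplied grid of unit directions) and the single-reference-point setup are illustrative assumptions.

```python
import numpy as np
import trimesh

def extract_range_image(mesh, center, directions):
    """For each unit direction, store the distance from the reference point
    `center` to the first intersection with the mesh surface (NaN if the
    ray misses). Arranging these values on the direction grid gives the
    range image into which the watermark is embedded."""
    origins = np.tile(np.asarray(center, dtype=float), (len(directions), 1))
    locations, ray_idx, _ = mesh.ray.intersects_location(
        origins, np.asarray(directions, dtype=float), multiple_hits=False)
    ranges = np.full(len(directions), np.nan)
    ranges[ray_idx] = np.linalg.norm(locations - origins[ray_idx], axis=1)
    return ranges

# Example: a watertight test mesh sampled over a small longitude/latitude grid
mesh = trimesh.creation.icosphere()
theta, phi = np.meshgrid(np.linspace(0, np.pi, 32), np.linspace(0, 2 * np.pi, 64))
dirs = np.stack([np.sin(theta) * np.cos(phi),
                 np.sin(theta) * np.sin(phi),
                 np.cos(theta)], axis=-1).reshape(-1, 3)
print(extract_range_image(mesh, [0.0, 0.0, 0.0], dirs).reshape(64, 32))
```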