• Title/Summary/Keyword: facial synthesis

Synthesis of Facial Amphiphile 3,7-Diamino-5α-cholestane Derivatives as a Molecular Receptor

  • Ahmad, Md. Wasi;Jung, Young-Mee;Khan, Sharaf Nawaz;Kim, Hong-Seok
    • Bulletin of the Korean Chemical Society
    • /
    • v.30 no.9
    • /
    • pp.2101-2106
    • /
    • 2009
  • A series of facial amphiphiles, 3,7-diaminocholestanes, were synthesized from 3,7-diketocholestane via two sequential reductive aminations, and their anion recognition was evaluated with acetate, chloride, bromide, fluoride, and phosphate anions. The stereoselective reductive amination protocol afforded the receptors in high yields. Molecular receptor 2 showed the highest binding constant, with acetate in a 1:1 ratio.

Automatic Estimation of 2D Facial Muscle Parameter Using Neural Network (신경회로망을 이용한 2D 얼굴근육 파라메터의 자동인식)

  • 김동수;남기환;한준희;배철수;권오홍;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.05a
    • /
    • pp.33-38
    • /
    • 1999
  • Muscle-based face image synthesis is one of the most realistic approaches to realizing a life-like agent on a computer. The facial muscle model is composed of facial tissue elements and muscles. In this model, the forces acting on each facial tissue element are calculated from the contraction strength of each muscle, so the combination of muscle parameters determines a specific facial expression. Currently, each muscle parameter is decided by a trial-and-error procedure, comparing a sample photograph with the image generated by our Muscle-Editor, to produce a specific face image. In this paper, we propose a strategy for automatic estimation of facial muscle parameters from 2D marker movement using a neural network. This also enables 3D motion estimation from 2D point or flow information in the captured image under the constraints of a physics-based face model.

  • PDF
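The mapping the abstract describes, from 2D marker displacements to muscle contraction parameters, can be sketched as a small feed-forward network. The sizes below (10 markers, 6 muscles, 16 hidden units) and the random weights are purely illustrative assumptions, not the paper's architecture:

```python
import numpy as np

# Hypothetical sizes: 10 facial markers (x, y displacements) -> 6 muscle parameters.
N_MARKERS, N_MUSCLES, N_HIDDEN = 10, 6, 16

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(2 * N_MARKERS, N_HIDDEN))  # untrained stand-in weights
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_MUSCLES))

def estimate_muscle_params(marker_disp):
    """Map flattened 2D marker displacements to muscle contraction strengths."""
    h = np.tanh(marker_disp @ W1)            # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))   # sigmoid keeps contractions in (0, 1)

params = estimate_muscle_params(rng.normal(size=2 * N_MARKERS))
```

In the paper's setting the weights would be trained on pairs of marker movements and the muscle parameters found by the manual Muscle-Editor procedure; here the forward pass only shows the input/output shapes.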

Individual 3D facial avatar synthesis using elastic matching of facial mesh and image (얼굴 메쉬와 이미지의 동적 매칭을 이용한 개인 아바타의 3차원 얼굴 합성)

  • 강명진;김창헌
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1998.10c
    • /
    • pp.600-602
    • /
    • 1998
  • This paper studies the synthesis of a 3D personal avatar that preserves the characteristics of frontal and profile face images. The force that pulls a standard face mesh toward the feature points of the face image is propagated smoothly to the non-feature vertices according to a Gaussian distribution over distance, acting as an elastic deformation force so that the mesh is matched to the contours of the face image; a dynamic skin model is then applied so that the matched mesh retains the geometric properties it had before matching. Texture-mapping the image onto the resulting 3D mesh produces a 3D personal avatar that preserves individual characteristics.

  • PDF
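The Gaussian propagation of feature-point displacements to the rest of the mesh can be sketched in a few lines. This is a minimal 2D stand-in for the paper's elastic matching, with an assumed `sigma` and no dynamic skin model:

```python
import numpy as np

def elastic_match(vertices, feature_idx, targets, sigma=0.5):
    """Pull each feature vertex toward its target and propagate the
    displacement to all other vertices with a Gaussian falloff over
    distance (weight 1 at the feature point, decaying with distance)."""
    verts = vertices.copy()
    for i, t in zip(feature_idx, targets):
        disp = t - verts[i]
        d = np.linalg.norm(vertices - vertices[i], axis=1)
        w = np.exp(-(d ** 2) / (2 * sigma ** 2))
        verts += w[:, None] * disp
    return verts

mesh = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
matched = elastic_match(mesh, [0], np.array([[0.0, 1.0]]))
```

The feature vertex lands exactly on its target, while vertices farther away move progressively less, which is the "smooth transmission" behavior the abstract describes.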

Facial Age Classification and Synthesis using Feature Decomposition (특징 분해를 이용한 얼굴 나이 분류 및 합성)

  • Chanho Kim;In Kyu Park
    • Journal of Broadcast Engineering
    • /
    • v.28 no.2
    • /
    • pp.238-241
    • /
    • 2023
  • Recently, deep learning models have been widely used for various tasks such as facial recognition and face editing. Their training process often involves a dataset with an imbalanced age distribution: some age groups (teenagers and the middle-aged) are more socially active and tend to have more data than less socially active groups (children and the elderly). This imbalanced age distribution may negatively impact the training process or the model's performance when tested on the age groups with less data. To this end, we propose an age-controllable face synthesis technique that uses feature decomposition to classify age from facial images and can be utilized to synthesize novel data to balance out the age distribution. We perform extensive qualitative and quantitative evaluation of the proposed technique on the FFHQ dataset and show that our method performs better than existing methods.
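The core idea of feature decomposition for age control can be illustrated with a toy example: split a feature vector into an identity part and an age part, then recombine the identity with a different age code. The dimensions and the simple slicing below are illustrative assumptions, not the paper's actual decomposition network:

```python
import numpy as np

# Toy split: first D_ID dims carry identity, last D_AGE dims carry age.
D_ID, D_AGE = 8, 4

def decompose(feat):
    """Separate a feature vector into (identity, age) components."""
    return feat[:D_ID], feat[D_ID:]

def recombine(identity, age_code):
    """Attach a new age code to an identity to synthesize a new feature."""
    return np.concatenate([identity, age_code])

feat = np.arange(D_ID + D_AGE, dtype=float)
identity, age = decompose(feat)
rejuvenated = recombine(identity, np.zeros(D_AGE))  # hypothetical "young" code
```

In the actual method the decomposition is learned, but the principle is the same: age editing changes only the age component while the identity component is preserved.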

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.795-802
    • /
    • 2005
  • In this paper, we present a facial expression recognition-synthesis system that automatically recognizes 7 basic emotions and renders a face in a non-photorealistic style on a PDA. For recognition of facial expressions, we first detect the face area within the image acquired from the camera. Then a normalization procedure is applied to it for geometric and illumination corrections. To classify a facial expression, we found that the best results come from combining Gabor wavelets with the enhanced Fisher model. In our case, the output is a weighting over the 7 emotions. This weighting information, transmitted to the PDA via a mobile network, is used for non-photorealistic facial expression animation. To render a 3-D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves is more effective in expressing the timing of an expression than the linear interpolation method.
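The contrast between linear interpolation and curve-driven timing can be shown with a short sketch. The smoothstep ease below is only a stand-in for the paper's "emotional curves" (whose exact shapes are not given in the abstract); it illustrates why a nonlinear curve changes the perceived timing of an expression:

```python
def linear(t):
    """Constant-rate blend: the expression grows uniformly over time."""
    return t

def emotional_curve(t):
    """Smoothstep ease: slow onset and release, fast middle -- a
    hypothetical stand-in for an emotional timing curve."""
    return t * t * (3 - 2 * t)

def blend_expression(neutral, target, t, curve=emotional_curve):
    """Blend two expression parameter vectors at normalized time t in [0, 1]."""
    w = curve(t)
    return [(1 - w) * n + w * x for n, x in zip(neutral, target)]

start = blend_expression([0.0, 0.0], [1.0, 2.0], 0.0)
end = blend_expression([0.0, 0.0], [1.0, 2.0], 1.0)
```

Both curves agree at the endpoints, but early in the animation the eased curve holds the face closer to neutral, giving a more natural onset than the linear ramp.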

Comparison Analysis of Four Face Swapping Models for Interactive Media Platform COX (인터랙티브 미디어 플랫폼 콕스에 제공될 4가지 얼굴 변형 기술의 비교분석)

  • Jeon, Ho-Beom;Ko, Hyun-kwan;Lee, Seon-Gyeong;Song, Bok-Deuk;Kim, Chae-Kyu;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.5
    • /
    • pp.535-546
    • /
    • 2019
  • Recently, there has been much research on whole-face replacement systems, but it is not easy to obtain stable results due to varied poses, angles, and facial diversity. To produce a natural synthesis result when replacing the face shown in a video image, technologies such as face area detection, feature extraction, face alignment, face area segmentation, 3D pose adjustment, and facial transposition must all operate at a precise level, and each technology must be combinable with the others. Our analysis shows that, among face replacement technologies, facial feature point extraction and facial alignment have the highest implementation difficulty and contribute most to the system. On the other hand, facial transposition and 3D pose adjustment have low difficulty but still need further development. In this paper, we propose four face replacement models suitable for the COX platform: 2D Faceswap, OpenPose, Deepfake, and CycleGAN. These models respectively cover frontal-pose face conversion, face poses with active body movement, face movement up to 15 degrees to the left and right, and a Generative Adversarial Network approach.
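The abstract's point that the stages must "operate at a precise level" and be combinable can be made concrete with a pipeline skeleton. Every function body below is a hypothetical placeholder; only the stage names and their order come from the abstract:

```python
# Skeleton of the face-swap stages named in the abstract; all stage
# bodies are stubs standing in for real detection/alignment models.
def detect_face(img):
    return {"img": img, "box": (0, 0, 64, 64)}

def extract_features(state):
    state["landmarks"] = [(10, 10), (54, 10), (32, 40)]
    return state

def align_face(state):
    state["aligned"] = True
    return state

def segment_face(state):
    state["mask"] = "face-region"
    return state

def adjust_pose_3d(state):
    state["pose"] = "frontalized"
    return state

def transpose_face(state, source_face):
    state["output"] = f"swap({source_face})"
    return state

def swap_pipeline(img, source_face):
    """Run all stages in order; each stage consumes the previous output."""
    state = detect_face(img)
    for stage in (extract_features, align_face, segment_face, adjust_pose_3d):
        state = stage(state)
    return transpose_face(state, source_face)

result = swap_pipeline("video-frame", "A")
```

Because every stage depends on the state produced by the previous one, an error in an early stage (e.g. a bad landmark) propagates through alignment, segmentation, and transposition, which is exactly why the paper rates feature extraction and alignment as the critical components.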

Multi-attribute Face Editing using Facial Masks (얼굴 마스크 정보를 활용한 다중 속성 얼굴 편집)

  • Ambardi, Laudwika;Park, In Kyu;Hong, Sungeun
    • Journal of Broadcast Engineering
    • /
    • v.27 no.5
    • /
    • pp.619-628
    • /
    • 2022
  • Although face recognition and face generation have been growing in popularity, the privacy issues of using facial images in the wild have been a concurrent topic. In this paper, we propose a face editing network that can reduce privacy issues by generating face images with various attributes from a small number of real face images and facial mask information. Unlike existing methods that learn face attributes from large numbers of real face images, the proposed method generates new facial images using a facial segmentation mask and texture images from five facial parts as styles. Our network is then trained to learn the styles and locations of each reference image. Once the proposed framework is trained, we can generate various face images using only a small number of real face images and segmentation information. In extensive experiments, we show that the proposed method can not only generate new faces but also localize facial attribute editing, despite using very few real face images.
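The mask-guided, region-localized editing described above can be reduced to a toy operation: apply a per-region style only where the segmentation mask assigns that region's label, leaving the rest untouched. This grossly simplifies the learned network to a lookup, purely for illustration:

```python
def edit_with_mask(image, mask, region_styles):
    """Replace each pixel whose mask label has an assigned style;
    pixels in unlisted regions pass through unchanged.
    `image` and `mask` are parallel per-pixel sequences (toy 1D "image")."""
    return [region_styles.get(label, pixel)
            for pixel, label in zip(image, mask)]

# Edit only the "hair" region; "skin" and "bg" keep their original values.
edited = edit_with_mask(["a", "b", "c"], ["hair", "skin", "bg"], {"hair": "H"})
```

The key property mirrored here is locality: the mask confines the edit to one facial part, which is what lets the proposed method localize attribute editing.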

Texture synthesis for model-based coding

  • Sohn, Young-Wook;Kim, In-Kwon;Park, Rae-Hong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1996.06b
    • /
    • pp.23-28
    • /
    • 1996
  • Model-based coding is one of several approaches to very low bit rate image coding, and it can be used in many applications such as image creation and virtual reality. However, its analysis and synthesis processes remain difficult, especially in that the synthesized image shows some degradation in detailed facial components such as the furrows around the eyes and mouth. To address this problem, a large number of methods have been proposed, and the texture update method is one of them. In this paper, we investigate texture synthesis for model-based coding. In the update process of the proposed texture synthesis algorithm, texture information is stored in a memory and the decoder reuses it. With this method, the transmission bit rate for texture data can be reduced compared with the conventional method, which updates texture periodically.

  • PDF
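The decoder-side memory reuse described in the abstract amounts to a cache: transmit texture only when it changes, and let the decoder fall back on the stored copy otherwise. The class below is a hedged sketch of that idea, not the paper's actual codec:

```python
class TextureDecoder:
    """Toy decoder that caches the last received texture so the encoder
    can skip retransmitting unchanged texture data (a rough sketch of
    the memory-based update scheme described in the abstract)."""

    def __init__(self):
        self.cache = None

    def receive(self, texture=None):
        """Pass a texture when one was transmitted this frame; pass
        nothing to reuse the cached texture and save bit rate."""
        if texture is not None:
            self.cache = texture
        return self.cache

dec = TextureDecoder()
first = dec.receive("texture-v1")   # transmitted once
reused = dec.receive()              # no transmission: decoder reuses memory
```

Compared with periodically retransmitting the texture, frames that reuse the cache cost no texture bits at all, which is the bit-rate saving the paper targets.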