• Title/Summary/Keyword: 얼굴 합성 (face synthesis)

Robust Head Pose Estimation for Masked Face Image via Data Augmentation (데이터 증강을 통한 마스크 착용 얼굴 이미지에 강인한 얼굴 자세추정)

  • Han, Kyeongtak;Hong, Sungeun
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.944-947
    • /
    • 2022
  • Due to the coronavirus pandemic, mask wearing has been increasing worldwide; thus, image analysis of masked face images has become essential. Although head pose estimation can be applied to various face-related applications, including driver attention, face frontalization, and gaze detection, few studies have addressed the performance degradation caused by masked faces. This study proposes a new data augmentation that synthesizes a mask onto the face, depending on the face image size and pose, and shows robust performance on the BIWI benchmark dataset regardless of mask wearing. Since the proposed scheme is not tied to a specific model, it can be utilized in various head pose estimation models.
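
The pose-dependent mask synthesis described above can be illustrated with a minimal sketch, assuming a transparent mask template image and a simple linear yaw heuristic; the template path, scaling factors, and pose rule below are illustrative assumptions, not the authors' exact augmentation.

```python
# Sketch only: paste an RGBA mask template over the lower half of a detected
# face, scaled to the face size and shifted with head yaw. "mask_template.png"
# and the 0.3 yaw factor are assumptions, not taken from the paper.
from PIL import Image

def augment_with_mask(face_img, face_box, yaw_deg, mask_path="mask_template.png"):
    left, top, right, bottom = face_box            # face bounding box in pixels
    face_w, face_h = right - left, bottom - top

    # Scale the mask template to the face width and the lower half of its height.
    mask = Image.open(mask_path).convert("RGBA")
    mask = mask.resize((face_w, face_h // 2))

    # Shift the mask horizontally in proportion to yaw so it stays on the
    # visible side of a rotated face (simple linear heuristic).
    x_shift = int(0.3 * face_w * yaw_deg / 90.0)

    out = face_img.convert("RGB")                  # convert() returns a copy
    out.paste(mask, (left + x_shift, top + face_h // 2), mask)  # alpha as mask
    return out
```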

Automatic Estimation of 2D Facial Muscle Parameter Using Neural Network (신경회로망을 이용한 2D 얼굴근육 파라메터의 자동인식)

  • 김동수;남기환;한준희;배철수;권오홍;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.05a
    • /
    • pp.33-38
    • /
    • 1999
  • Muscle-based face image synthesis is one of the most realistic approaches to realizing a life-like agent in a computer. The facial muscle model is composed of facial tissue elements and muscles. In this model, the forces acting on each facial tissue element are calculated from the contraction strength of each muscle, so the combination of muscle parameters determines a specific facial expression. Currently, each muscle parameter is determined by a trial-and-error procedure that compares the sample photograph with the generated image, using our Muscle-Editor to generate a specific face image. In this paper, we propose a strategy for automatically estimating facial muscle parameters from 2D marker movement using a neural network. We also address 3D motion estimation from 2D point or flow information in the captured image under the constraints of the physics-based face model.
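
The estimation strategy sketched in this abstract, a neural network regressing muscle parameters from 2D marker movement, can be outlined roughly as below. The marker and muscle counts, the placeholder training data, and the use of scikit-learn's MLPRegressor are assumptions for illustration, not the authors' 1999 setup.

```python
# Sketch only: regress facial-muscle contraction parameters from 2D marker
# displacements with a small multilayer perceptron.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_MARKERS = 20    # tracked 2D markers -> input dimension is 2 * N_MARKERS
N_MUSCLES = 16    # muscle contraction parameters to estimate

# Placeholder training data: each row holds the (dx, dy) displacement of every
# marker from the neutral face; the targets are the muscle parameters that
# produced that expression in the muscle-based face model.
X_train = np.random.rand(500, 2 * N_MARKERS)
y_train = np.random.rand(500, N_MUSCLES)

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Given marker displacements measured from a new photograph, estimate the
# muscle parameters that would reproduce the expression.
new_markers = np.random.rand(1, 2 * N_MARKERS)
estimated_params = net.predict(new_markers)   # shape: (1, N_MUSCLES)
```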

An Affective Space Model for the Faces of Korean Women in Twenties (한국인 20대 여성 얼굴의 감성모형)

  • 박수진;한재현;정찬섭
    • Science of Emotion and Sensibility
    • /
    • v.4 no.2
    • /
    • pp.47-55
    • /
    • 2001
  • An affective space model for the faces of Korean women in their twenties was developed based on the findings of Park et al. (2001), which suggested two orthogonal dimensions for the affective representation of a face: babyish-mature and sharp-soft. In the current study, affective facial characteristics were visualized by providing properly synthesized faces at 17 subregions of the model space. The effect of physical attributes of a face on its affective evaluation was also investigated along the two affective dimensions. The relationship between typical adjectives describing facial affectiveness and physical attributes of a face was examined to provide a category-based interpretation.

Facial Age Classification and Synthesis using Feature Decomposition (특징 분해를 이용한 얼굴 나이 분류 및 합성)

  • Chanho Kim;In Kyu Park
    • Journal of Broadcast Engineering
    • /
    • v.28 no.2
    • /
    • pp.238-241
    • /
    • 2023
  • Recently, deep learning models have been widely used for various tasks such as facial recognition and face editing. Their training process often involves a dataset with an imbalanced age distribution. This is because some age groups (teenagers and the middle-aged) are more socially active and tend to have more data than the less socially active age groups (children and the elderly). This imbalanced age distribution may negatively impact the deep learning training process or the model's performance when it is tested against the age groups with less data. To this end, we propose an age-controllable face synthesis technique that uses feature decomposition to classify age from facial images and can be utilized to synthesize novel data to balance out the age distribution. We perform extensive qualitative and quantitative evaluation of the proposed technique on the FFHQ dataset and show that our method performs better than the existing method.
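
The feature-decomposition idea can be pictured as an encoder that splits a face feature into an age code and an age-independent code, with a classifier on the age part and a decoder for resynthesis. The sketch below is a toy PyTorch outline under that assumption; the layer sizes, number of age groups, and linear layers are placeholders, not the paper's architecture.

```python
# Sketch only: decompose a face feature into age and age-independent codes,
# classify age from the age code, and resynthesize with a swapped age code.
import torch
import torch.nn as nn

class AgeDecomposer(nn.Module):
    def __init__(self, feat_dim=512, age_dim=64, n_age_groups=6):
        super().__init__()
        self.encode_age = nn.Linear(feat_dim, age_dim)              # age-related part
        self.encode_id = nn.Linear(feat_dim, feat_dim - age_dim)    # age-independent part
        self.age_classifier = nn.Linear(age_dim, n_age_groups)
        self.decode = nn.Linear(feat_dim, feat_dim)                 # back to a face feature

    def forward(self, feat):
        z_age = self.encode_age(feat)
        z_id = self.encode_id(feat)
        age_logits = self.age_classifier(z_age)
        recon = self.decode(torch.cat([z_age, z_id], dim=-1))
        return age_logits, recon

    def reage(self, feat, target_age_code):
        # Synthesis: keep the age-independent code, swap in a target age code.
        z_id = self.encode_id(feat)
        return self.decode(torch.cat([target_age_code, z_id], dim=-1))

model = AgeDecomposer()
face_feat = torch.randn(1, 512)                # placeholder face feature
age_logits, recon = model(face_feat)
aged_feat = model.reage(face_feat, torch.randn(1, 64))
```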

A Study on Facial Feature' Morphological Information Extraction and Classification for Avatar Generation (아바타 생성을 위한 이목구비 모양 특징정보 추출 및 분류에 관한 연구)

  • 박연출
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.10
    • /
    • pp.631-642
    • /
    • 2003
  • We propose an approach to extract facial features from a person's photo and classify them into classes according to prepared classification standards in order to generate the person's avatar. Feature extraction and classification are performed separately for the eyes, nose, lips, and jaw, and we present the facial features and classification standards for each. The extracted facial features are compared with the features of a professional designer's facial component images, and the most similar facial component images are then mapped onto the avatar's vector face.
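
The matching step described above, comparing extracted component features with a designer's component images and selecting the most similar one, reduces to a nearest-neighbor search. The sketch below assumes simple Euclidean distance over fixed-length shape features; the feature dimension and library size are placeholders.

```python
# Sketch only: pick the designer-drawn component whose feature vector is
# closest to the feature extracted from the photo.
import numpy as np

def pick_component(extracted_feat, library_feats):
    """Return the index of the most similar designer component."""
    dists = np.linalg.norm(library_feats - extracted_feat, axis=1)
    return int(np.argmin(dists))

# Example: choose an eye component from a library of 10 drawn variants,
# each described by a 5-dimensional shape feature vector.
eye_library = np.random.rand(10, 5)
eye_feature = np.random.rand(5)
avatar_eye_index = pick_component(eye_feature, eye_library)
```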

Design and Implementation of Bimodal System using Face and Audio (얼굴과 음성 정보를 이용한 바이모달 시스템 설계 및 구현)

  • Kim, Myung-Hun;Lee, Chi-Geun;Jung, Sung-Tae
    • Annual Conference of KIPS
    • /
    • 2005.11a
    • /
    • pp.701-704
    • /
    • 2005
  • Recently, research on bimodal recognition has been actively conducted. In this paper, we implement a bimodal system using speech and face information. For face recognition, faces are detected and recognized using SVM, an object classification technique; for speech recognition, an HMM is used. By fusing the two recognition results, the speech recognition rate, which is lowered by noise, is complemented by face recognition, and an improvement in the overall recognition rate can be obtained.
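
The fusion of the two recognizers can be illustrated with a simple score-level combination: per-identity scores from the face classifier and the speech recognizer are normalized and mixed so that reliable face scores compensate for noisy speech. The equal weighting and normalization below are illustrative assumptions, not the paper's fusion rule.

```python
# Sketch only: weighted-sum fusion of per-identity scores from a face
# recognizer (e.g., SVM decision scores) and a speech recognizer (e.g., HMM
# likelihoods converted to positive scores).
import numpy as np

def fuse_scores(face_scores, speech_scores, w_face=0.5):
    f = face_scores / face_scores.sum()
    s = speech_scores / speech_scores.sum()
    combined = w_face * f + (1.0 - w_face) * s
    return int(np.argmax(combined))           # index of the recognized identity

# Example with 4 enrolled identities: speech is ambiguous because of noise,
# but the face scores pull the decision toward identity 2.
speech = np.array([0.26, 0.25, 0.25, 0.24])
face = np.array([0.10, 0.15, 0.60, 0.15])
print(fuse_scores(face, speech))              # -> 2
```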

Reconstruction of High-Resolution Facial Image Based on Recursive Error Back-Projection of Top-Down Machine Learning (하향식 기계학습의 반복적 오차 역투영에 기반한 고해상도 얼굴 영상의 복원)

  • Park, Jeong-Seon;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.3
    • /
    • pp.266-274
    • /
    • 2007
  • This paper proposes a new method for reconstructing a high-resolution facial image from a low-resolution facial image based on top-down machine learning and recursive error back-projection. A face is represented by a linear combination of prototypes of shape and texture. With the shape and texture information of each pixel in a given low-resolution facial image, we can estimate the optimal coefficients for the linear combinations of shape prototypes and texture prototypes by solving least-squares minimizations. A high-resolution facial image can then be obtained by applying the optimal coefficients to the linear combination of the high-resolution prototypes. In addition, a recursive error back-projection procedure is applied to improve the reconstruction accuracy of the high-resolution facial image. The encouraging results show that the proposed method can improve face recognition performance by reconstructing high-resolution facial images from low-resolution images captured at a distance.
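
The reconstruction pipeline in this abstract (least-squares coefficients on low-resolution prototypes, reuse of those coefficients with high-resolution prototypes, and a recursive error back-projection refinement) can be sketched as follows. The prototype matrices, the toy downsampling operator, and the nearest-neighbor upsampling of the residual are placeholders, not the authors' learned shape and texture models.

```python
# Sketch only: prototype-based super-resolution with a simple error
# back-projection loop.
import numpy as np

def reconstruct_hr(x_lr, P_lr, P_hr, downsample, n_iters=5, step=0.5):
    """
    x_lr       : (d_lr,) low-resolution face vector
    P_lr, P_hr : (d_lr, k) and (d_hr, k) paired low/high-resolution prototypes
    downsample : callable mapping a (d_hr,) vector to a (d_lr,) vector
    """
    # 1) Least-squares coefficients on the low-resolution prototypes.
    alpha, *_ = np.linalg.lstsq(P_lr, x_lr, rcond=None)

    # 2) Apply the same coefficients to the high-resolution prototypes.
    x_hr = P_hr @ alpha

    # 3) Error back-projection: push the residual between the observed
    #    low-res vector and the downsampled estimate back into x_hr.
    scale = len(x_hr) // len(x_lr)
    for _ in range(n_iters):
        residual = x_lr - downsample(x_hr)
        x_hr = x_hr + step * np.repeat(residual, scale)
    return x_hr

# Toy usage: 2x downsampling by averaging neighboring pairs of entries.
down = lambda v: v.reshape(-1, 2).mean(axis=1)
P_hr = np.random.rand(64, 8)
P_lr = np.vstack([down(col) for col in P_hr.T]).T   # (32, 8)
x_lr = P_lr @ np.random.rand(8)
x_hr = reconstruct_hr(x_lr, P_lr, P_hr, down)
```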

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.563-570
    • /
    • 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using the paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the FAP (Face Animation Parameters) of MPEG-4 to synchronize a generic face model with a real face.
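
The shape-and-motion recovery step referenced above follows the classic factorization approach: the 2D tracks of the extracted features are stacked into a measurement matrix and split by a rank-3 SVD into camera motion and 3D shape. The sketch below shows only that core Tomasi-Kanade-style decomposition; the paraperspective camera model used in the paper adds per-frame scaling and metric constraints that are omitted here.

```python
# Sketch only: rank-3 SVD factorization of tracked 2D feature points into
# motion and shape, up to an affine ambiguity.
import numpy as np

def factorize_tracks(W):
    """
    W : (2F, P) measurement matrix of P feature points over F frames
        (for example, u-coordinates in the first F rows, v in the last F).
    Returns (motion M of shape (2F, 3), shape S of shape (3, P)).
    """
    # Register the measurements to the centroid of the points in each frame.
    W_centered = W - W.mean(axis=1, keepdims=True)

    # Rank-3 factorization via SVD: W ~ M @ S.
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])          # camera motion rows
    S = np.sqrt(s[:3])[:, None] * Vt[:3]   # 3D shape, one column per feature
    return M, S

# Toy usage with 23 features (the number of MPEG-4 FDP points extracted in
# the paper) tracked over 10 frames.
W = np.random.rand(2 * 10, 23)
M, S = factorize_tracks(W)
```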