• Title/Summary/Keyword: Face Synthesis (얼굴 합성)


Automatic Denoising of 2D Color Face Images Using Recursive PCA Reconstruction (2차원 칼라 얼굴 영상에서 반복적인 PCA 재구성을 이용한 자동적인 잡음 제거)

  • Park Hyun;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.2 s.308
    • /
    • pp.63-71
    • /
    • 2006
  • Denoising and reconstruction of color images are extensively studied in the fields of computer vision and image processing. Denoising and reconstruction of color face images in particular are more difficult than for natural images because of the structural characteristics of human faces as well as the subtleties of color interactions. In this paper, we propose a denoising method based on PCA reconstruction for removing complex color noise on human faces, which is not easy to remove using vectorial color filters. The proposed method is composed of the following five steps: training of a canonical eigenface space using PCA, automatic extraction of facial features using an active appearance model, refinement of the reconstructed color image using a bilateral filter, extraction of noise regions using the variance of the training data, and reconstruction using partial information of the input image (excluding the noise regions) followed by blending of the reconstructed image with the original image. Experimental results show that the proposed denoising method maintains the structural characteristics of input faces while efficiently removing complex color noise.
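The partial reconstruction step at the core of such a method can be sketched in a few lines: fit eigenface coefficients using only the pixels outside the detected noise regions, then reconstruct the whole face from those coefficients. This is a minimal sketch assuming grayscale faces flattened to vectors; the function names and shapes are illustrative, not from the paper.

```python
import numpy as np

def train_eigenfaces(faces, k):
    # faces: (n_samples, n_pixels). Returns the mean face and the
    # top-k eigenfaces, computed via SVD of the centered data.
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]                      # eigenfaces: (k, n_pixels)

def reconstruct_partial(face, mask, mean, eigenfaces):
    # Fit PCA coefficients using only trusted pixels (mask == True),
    # then reconstruct the full face, filling in the noisy regions.
    a = eigenfaces[:, mask].T                # (n_trusted, k)
    b = (face - mean)[mask]
    coeffs, *_ = np.linalg.lstsq(a, b, rcond=None)
    return mean + coeffs @ eigenfaces
```

In the paper's pipeline the reconstructed face would then be blended back with the original image outside the noise regions.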

A Generation Methodology of Facial Expressions for Avatar Communications (아바타 통신에서의 얼굴 표정의 생성 방법)

  • Kim Jin-Yong;Yoo Jae-Hwi
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.3 s.35
    • /
    • pp.55-64
    • /
    • 2005
  • The avatar can be used as an auxiliary means of text and image communication in cyberspace. An intelligent communication method can also be utilized to achieve real-time communication, where intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real or compressed pictures. In this paper, to complement arm and leg gestures, a method of generating facial expressions that can represent the sender's emotions is provided. A facial expression can be represented by Action Units (AUs); we suggest a methodology for finding appropriate AUs in avatar models that have various shapes and structures. To maximize the efficiency of emotional expression, a comic-style facial model having only eyebrows, eyes, a nose, and a mouth is employed. The generation of facial emotion animation with these parameters is also investigated.
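AU-driven expression generation of this kind is often illustrated as a linear blend, in which each Action Unit contributes a weighted vertex displacement to a neutral face. The sketch below is illustrative only, not the paper's implementation; the names and array shapes are assumptions.

```python
import numpy as np

def apply_action_units(neutral, au_displacements, weights):
    # neutral: (n_vertices, 2) neutral-face vertex positions.
    # au_displacements: (n_aus, n_vertices, 2) per-AU vertex offsets.
    # weights: (n_aus,) activation level of each Action Unit.
    # The expression is a weighted sum of AU offsets added to the neutral face.
    return neutral + np.tensordot(weights, au_displacements, axes=1)
```

Animating an emotion then amounts to interpolating the weight vector over time.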


Face Tracking for Multi-view Display System (다시점 영상 시스템을 위한 얼굴 추적)

  • Han, Chung-Shin;Jang, Se-Hoon;Bae, Jin-Woo;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.2C
    • /
    • pp.16-24
    • /
    • 2005
  • In this paper, we propose a face tracking algorithm for a viewpoint-adaptive multi-view synthesis system. The original scene captured by a depth camera contains a texture image and an 8-bit gray-scale depth map. From this original image, multi-view images corresponding to the viewer's position can be synthesized using geometrical transformations such as rotation and translation. The proposed face tracking technique provides a motion parallax cue across different viewpoints and view angles. The algorithm tracks the viewer's dominant face, initially established from the camera, using statistical characteristics of face colors and deformable templates. As a result, we can provide a motion parallax cue by detecting and tracking the viewer's dominant face area even against a heterogeneous background, and can successfully display the synthesized sequences.
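A crude stand-in for the depth-based view synthesis described above is a horizontal parallax warp that shifts each pixel in proportion to its depth value. The paper performs full rotation and translation; this sketch, with assumed names, only illustrates the depth-driven pixel displacement that produces the parallax cue.

```python
import numpy as np

def synthesize_view(texture, depth, shift_scale):
    # Shift each pixel horizontally in proportion to its depth value;
    # pixels with larger depth values move further, giving a parallax cue.
    # texture, depth: (h, w) arrays; shift_scale encodes the viewpoint offset.
    h, w = depth.shape
    out = np.zeros_like(texture)
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip(xs + np.round(shift_scale * depth[y]).astype(int), 0, w - 1)
        out[y, new_x] = texture[y, xs]      # later writes win on collisions
    return out
```

Holes left by the naive forward warp (zeros in `out`) would need inpainting in a real system.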

Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi Jung-Ju;Kim Dong-Sun;Lee In-Kwon
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.39-48
    • /
    • 2005
  • According to traditional 2D animation techniques, anticipation makes an animation more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation. Our approach assumes that an anticipatory facial expression can be found within an existing facial animation if it is long enough. Vertices of the face model are classified into a set of components using principal component analysis applied directly to given key-framed and/or motion-captured facial animation data. The vertices in a single component have similar directions of motion in the animation. For each component, the animation is examined to find an anticipation effect for the given facial expression. One of these anticipation effects is selected as the best anticipation effect, which preserves the topology of the face model. The best anticipation effect is automatically blended with the original facial animation while preserving the continuity and the entire duration of the animation. We show experimental results for given motion-captured and key-framed facial animations. This paper deals with part of the broad subject of applying the principles of traditional 2D animation techniques to 3D animation; we show how to incorporate anticipation into 3D facial animation. Animators can produce 3D facial animation with anticipation simply by selecting the facial expression in the animation.
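The vertex-classification idea, grouping vertices whose motions share a principal direction, can be sketched as follows. This is a simplified illustration under assumed names: each vertex's trajectory is reduced to its dominant motion direction via SVD, and vertices are greedily grouped by direction agreement, standing in for the paper's PCA-based component extraction.

```python
import numpy as np

def principal_motion_direction(traj):
    # traj: (n_frames, 3) positions of one vertex over the animation.
    # The top right-singular vector of the centered trajectory is the
    # dominant direction of motion.
    disp = traj - traj.mean(axis=0)
    _, _, vt = np.linalg.svd(disp, full_matrices=False)
    return vt[0]

def group_vertices(trajs, cos_thresh=0.9):
    # Greedily group vertices whose principal motion directions agree
    # (|cosine| above the threshold; sign is ignored since SVD directions
    # are defined up to sign).
    dirs = [principal_motion_direction(t) for t in trajs]
    groups = []
    for i, d in enumerate(dirs):
        for g in groups:
            if abs(d @ dirs[g[0]]) >= cos_thresh:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```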

Face Morphing Using Generative Adversarial Networks (Generative Adversarial Networks를 이용한 Face Morphing 기법 연구)

  • Han, Yoon;Kim, Hyoung Joong
    • Journal of Digital Contents Society
    • /
    • v.19 no.3
    • /
    • pp.435-443
    • /
    • 2018
  • Recently, with the explosive development of computing power, various methods such as RNNs and CNNs have been proposed under the name of Deep Learning, solving many problems in computer vision. The Generative Adversarial Network, introduced in 2014, showed that computer vision problems can be addressed with unsupervised learning, and that the generative domain can also be studied using learned generators. GANs are being developed in various forms in combination with various models. Machine learning has difficulty with data collection: if the data set is too large, it is hard to refine an effective data set by removing noise; if it is too small, small differences become large noise and learning is not easy. In this paper, we apply a deep CNN model for extracting the facial region in an image frame to a GAN model as a preprocessing filter, and propose a method to produce composite images of various facial expressions by learning stably from a limited collection of data from two persons.

Morphing and Warping using Delaunay Triangulation in Android Platform (안드로이드 플랫폼에서 들로네 삼각망을 이용한 모핑 및 와핑 기법)

  • Hwang, Ki-Tae
    • Journal of Korea Game Society
    • /
    • v.10 no.6
    • /
    • pp.137-146
    • /
    • 2010
  • With the rapid advent of the smartphone, software technologies that have been used on PCs are being introduced to the smartphone. This paper presents an implementation of application software on the Android platform using morphing and warping technology. It composes two face images with morphing and also transforms an original face image with warping for fun. For this, the image is triangulated by Delaunay triangulation based on control points created by simple LCD touches on the mobile device. The implementation case in this paper will be a good reference for the development of applications, such as games, that use morphing and warping technologies on the Android platform.
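The per-triangle step of a Delaunay-based morph can be sketched with barycentric coordinates: a point is expressed relative to its source triangle and mapped into the corresponding (interpolated) destination triangle. This is a minimal geometric sketch with illustrative names, not the Android implementation.

```python
import numpy as np

def barycentric(p, tri):
    # Barycentric coordinates of point p (2,) w.r.t. triangle tri (3, 2).
    a, b, c = tri
    m = np.column_stack([b - a, c - a])      # 2x2 edge matrix
    u, v = np.linalg.solve(m, p - a)
    return np.array([1 - u - v, u, v])

def warp_point(p, src_tri, dst_tri):
    # Map p from the source triangle to the destination triangle by
    # reusing its barycentric weights, the core of a mesh warp.
    return barycentric(p, src_tri) @ dst_tri

def morph_point(p, src_tri, dst_tri, t):
    # For morphing, the triangle itself is interpolated at fraction t
    # before warping; cross-dissolving the two warped images completes it.
    mid_tri = (1 - t) * src_tri + t * dst_tri
    return warp_point(p, src_tri, mid_tri)
```

A full morph applies this to every Delaunay triangle and every pixel, then blends the two warped images with weight t.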

Pose Transformation of a Frontal Face Image by Invertible Meshwarp Algorithm (역전가능 메쉬워프 알고리즘에 의한 정면 얼굴 영상의 포즈 변형)

  • 오승택;전병환
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.1_2
    • /
    • pp.153-163
    • /
    • 2003
  • In this paper, we propose a new image-based rendering (IBR) technique for the pose transformation of a face using only a frontal face image and its mesh, without a three-dimensional model. To substitute for the 3D geometric model, we first build a standard mesh set of a certain person for several face views: front, left, right, half-left, and half-right. For a given person, we compose only the frontal mesh of the frontal face image to be transformed; the other meshes are automatically generated based on the standard mesh set. The frontal face image is then geometrically transformed to give different views using the Invertible Meshwarp Algorithm, which is improved to tolerate the overlap or inversion of neighboring vertices in the mesh. The same warping algorithm is used to generate the opening or closing of the eyes and mouth. To evaluate the transformation performance, we captured dynamic images of 10 persons rotating their heads horizontally and measured the location error of 14 main features between corresponding original and transformed facial images. That is, the average difference is calculated between the distances from the center of both eyes to each feature point in the corresponding original and transformed images. As a result, the average error in feature location is about 7.0% of the distance from the center of both eyes to the center of the mouth.
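The evaluation metric described above can be written down directly: the mean absolute difference of eye-center-to-feature distances between the original and transformed images, normalized by the eye-center-to-mouth distance. This sketch follows my reading of the abstract; names are illustrative.

```python
import numpy as np

def feature_location_error(orig, warped, eye_center, mouth_center):
    # orig, warped: (n_features, 2) feature coordinates in the original
    # and transformed images. Returns the error as a percentage of the
    # eye-center-to-mouth distance.
    d_orig = np.linalg.norm(orig - eye_center, axis=1)
    d_warp = np.linalg.norm(warped - eye_center, axis=1)
    scale = np.linalg.norm(mouth_center - eye_center)
    return 100.0 * np.abs(d_orig - d_warp).mean() / scale
```

Under this metric, the paper's reported result corresponds to a return value of about 7.0.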

A design and implementation of Face Detection hardware (얼굴 검출을 위한 SoC 하드웨어 구현 및 검증)

  • Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.4
    • /
    • pp.43-54
    • /
    • 2007
  • This paper presents the design and verification of face detection hardware for real-time applications. The face detection algorithm detects a rough face position based on previously acquired feature parameter data. The hardware is composed of five main modules: Integral Image Calculator, Feature Coordinate Calculator, Feature Difference Calculator, Cascade Calculator, and Window Detection. It also includes on-chip Integral Image memory and Feature Parameter memory. The face detection hardware was verified using a Samsung Electronics S3C2440A CPU, a Xilinx Virtex4LX100 FPGA, and a CCD camera module. When synthesized for the Virtex4LX100 FPGA, our design uses 3,251 LUTs and takes about 1.96 to 0.13 sec for face detection, depending on the sliding-window step size. When synthesized with a MagnaChip 0.25 um ASIC library, it uses about 410,000 gates (combinational area about 345,000 gates, non-combinational area about 65,000 gates) and takes less than 0.5 sec for real-time face detection. This size and performance show that it is adequate for embedded system applications. It has been fabricated as part of the XF1201 chip and proven to work.
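The Integral Image Calculator module at the front of such a pipeline implements a standard construction: a cumulative-sum table from which any rectangular region sum can be read in four lookups. A software sketch of that construction (the hardware computes it incrementally per pixel):

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[:y+1, :x+1]; cumulative sums along both axes.
    return img.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, top, left, bottom, right):
    # Sum over img[top:bottom+1, left:right+1] in O(1) using the
    # inclusion-exclusion of four corner values.
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```

Constant-time region sums are what make Haar-like feature evaluation cheap enough for a hardware cascade.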

A Realtime Hardware Design for Face Detection (얼굴인식을 위한 실시간 하드웨어 설계)

  • Suh, Ki-Bum;Cha, Sun-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.2
    • /
    • pp.397-404
    • /
    • 2013
  • This paper proposes the hardware architecture of a face detection system using the AdaBoost algorithm. The proposed face detection hardware can operate in real time at 30 frames per second. The AdaBoost algorithm is used in Matlab to learn and generate the characteristic face data, which the hardware then uses to detect faces. This paper describes a face detection hardware structure composed of an image scaler, integral image extraction, face comparison, a memory interface, a data grouper, and detected-result display. The proposed circuit is designed to process one point per cycle, so it can process a full HD (1920×1080) image at 70 MHz, which is approximately 2,316,087×30 cycles. Furthermore, this paper reduces the memory size by shortening the word length and tolerating overflow. The proposed structure has been designed in Verilog HDL and verified in Mentor Graphics ModelSim. It works at a 45 MHz operating frequency and uses 74,757 LUTs on a Xilinx Virtex-5 FPGA.
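The 70 MHz figure can be sanity-checked with the numbers in the abstract: at one point per cycle, 2,316,087 cycles per frame (slightly more than the 2,073,600 visible pixels of a 1920×1080 frame, the remainder presumably per-frame overhead) at 30 frames per second requires just under 70 MHz.

```python
cycles_per_frame = 2_316_087          # per the abstract; > 1920 * 1080 pixels
frame_rate = 30                       # target frames per second
required_hz = cycles_per_frame * frame_rate
print(required_hz)                    # 69482610, so a 70 MHz clock sustains 30 fps
```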

Research and Development of Image Synthesis Model Based on Emotion for the Mobile Environment (모바일 환경에서 감성을 기반으로 한 영상 합성 기법 연구 및 개발)

  • Sim, SeungMin;Lee, JiYeon;Yoon, YongIk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.11
    • /
    • pp.51-58
    • /
    • 2013
  • The camera performance of smartphones has recently developed to match that of digital cameras. As a result, many people take pictures, and the number of people interested in photo applications has been steadily increasing. However, existing synthesis programs only array several photos or overlap multiple images. The model proposed in this paper combines a background and applies effect filters based on the emotion extracted from facial expressions, and it can be utilized in more fields than other synthesis programs.