• Title/Summary/Keyword: Face Inpainting


Face inpainting via Learnable Structure Knowledge of Fusion Network

  • Yang, You; Liu, Sixun; Xing, Bin; Li, Kesen
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.3, pp.877-893, 2022
  • With the development of deep learning, face inpainting has improved significantly in the past few years. Although inpainting frameworks built on generative adversarial networks or attention mechanisms have enhanced the semantic understanding among facial components, the reconstruction of corrupted regions still suffers from blurred edge structure, excessive smoothness, unreasonable semantics, and visual artifacts. To address these issues, we propose the Learnable Structure Knowledge of Fusion Network (LSK-FNet), which learns prior knowledge for image inpainting through an edge generation network. The architecture involves two steps: first, structure information produced by the edge generation network serves as prior knowledge for the face inpainting network; second, both this prior knowledge and the incomplete image are fed into the face inpainting network to obtain the fused information. To improve inpainting accuracy, our model applies both gated convolution and region normalization. We evaluate LSK-FNet qualitatively and quantitatively on the CelebA-HQ dataset. The experimental results demonstrate that LSK-FNet improves the edge structure and details of facial images, and that our model surpasses the compared models on the L1, PSNR, and SSIM metrics. When the masked region is less than 20%, L1 loss is reduced by more than 4.3%.
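
As a rough illustration of the two-step design above (edge prior first, then fusion inpainting), here is a minimal PyTorch sketch. It is not the authors' implementation: edge_net and inpaint_net are hypothetical placeholder modules, region normalization is omitted, and only the gated-convolution formula (features modulated by a learned sigmoid gate) follows the published technique.

```python
# Minimal sketch of the two-step LSK-FNet pipeline described in the abstract.
# NOT the authors' code: module names, channel layouts, and shapes are assumptions.
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution: a sigmoid gate modulates each feature map, letting
    the network learn which pixels of a masked image are valid."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.act = nn.ELU()

    def forward(self, x):
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))

class LSKFNetSketch(nn.Module):
    """Step 1: an edge network predicts structure from the corrupted input.
    Step 2: the edge map is concatenated with the corrupted image and the
    mask and fed to the inpainting network (the fusion step)."""
    def __init__(self, edge_net: nn.Module, inpaint_net: nn.Module):
        super().__init__()
        self.edge_net = edge_net        # hypothetical edge generation network
        self.inpaint_net = inpaint_net  # hypothetical gated-conv inpainter

    def forward(self, corrupted_img, mask):
        edge_prior = self.edge_net(torch.cat([corrupted_img, mask], dim=1))
        fusion_in = torch.cat([corrupted_img, edge_prior, mask], dim=1)
        return self.inpaint_net(fusion_in)
```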

ISFRNet: A Deep Three-stage Identity and Structure Feature Refinement Network for Facial Image Inpainting

  • Yan Wang; Jitae Shin
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.3, pp.881-895, 2023
  • Modern deep-learning-based image inpainting techniques have achieved remarkable performance, and increasing effort is going into repairing larger and more complex missing areas, which remains challenging, especially for facial images. For a face image with a very large missing area, few valid pixels are available; humans, however, can imagine the complete picture in their mind. It is important to simulate this capability while preserving the identity features of the face as much as possible. To achieve this goal, we propose a three-stage network model, the identity and structure feature refinement network (ISFRNet). ISFRNet is based on 1) a pre-trained pSp-StyleGAN model that generates an extremely realistic face image with rich structural features; 2) a shallow network with a small receptive field; and 3) a modified U-Net with two encoders and one decoder, which has a large receptive field. We evaluate our model with peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), L1 loss, and learned perceptual image patch similarity (LPIPS). When the missing region covers 20%-40% of the image, these four scores are 28.12, 0.942, 0.015, and 0.090, respectively; when it covers 40%-60%, they are 23.31, 0.840, 0.053, and 0.177. Our inpainting network not only guarantees excellent recovery of face identity features but also exhibits state-of-the-art performance compared to other multi-stage refinement models.
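
A minimal sketch of how the three stages described above could be chained. Every name here (ISFRNetSketch, psp_stylegan, shallow_refiner, dual_encoder_unet) is a hypothetical placeholder, and the exact tensors passed between stages are assumptions; the abstract specifies only the three components, not their wiring.

```python
# Assumed three-stage data flow for the ISFRNet abstract; NOT the authors' code.
import torch
import torch.nn as nn

class ISFRNetSketch(nn.Module):
    def __init__(self, psp_stylegan: nn.Module, shallow_refiner: nn.Module,
                 dual_encoder_unet: nn.Module):
        super().__init__()
        self.stage1 = psp_stylegan        # pre-trained pSp + StyleGAN (frozen)
        self.stage2 = shallow_refiner     # shallow net, small receptive field
        self.stage3 = dual_encoder_unet   # U-Net variant: two encoders, one decoder

    def forward(self, masked_img: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # Stage 1: hallucinate a realistic, structurally rich reference face.
        with torch.no_grad():
            reference = self.stage1(masked_img)
        # Stage 2: refine the reference locally against the valid pixels.
        coarse = self.stage2(torch.cat([masked_img, reference, mask], dim=1))
        # Stage 3: one encoder sees the coarse result, the other the original
        # masked input; a single decoder with a large receptive field fuses them.
        return self.stage3(coarse, masked_img)
```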

Intermediate View Image and Its Digital Hologram Generation for a Virtual Arbitrary View-Point Hologram Service (임의의 가상시점 홀로그램 서비스를 위한 중간시점 영상 및 디지털 홀로그램 생성)

  • Seo, Young-Ho; Lee, Yoon-Hyuk; Koo, Ja-Myung; Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering, v.17 no.1, pp.15-31, 2013
  • This paper proposes a method that generates an intermediate image for the viewer's viewpoint by tracking the viewer's face and then converts it to a digital hologram. Its purpose is to increase the viewing angle of a digital hologram, a topic attracting growing interest. The method assumes that the image information for the leftmost and rightmost viewpoints within the viewing angle to be covered is given. It uses stereo matching between the leftmost and rightmost depth images to obtain a pseudo-disparity increment per depth value. With this increment, positional information is generated from both the leftmost and the rightmost viewpoints and blended to obtain the information at the desired intermediate viewpoint. The dis-occlusion regions that can occur in this case are defined, and an inpainting method for them is proposed. Experiments with an implementation of this method showed that the average image quality of the generated depth and RGB images was 33.83 dB and 29.5 dB, respectively, and the average execution time was 250 ms per frame. We also propose a prototype system that serves digital holograms to the viewer interactively using the proposed intermediate-view generation method. It covers data acquisition for the leftmost and rightmost viewpoints, camera calibration and image rectification, intermediate-view image generation, computer-generated hologram (CGH) generation, and reconstruction of the hologram image. The system is implemented in the LabVIEW environment; CGH generation and hologram image reconstruction run on GPGPUs, while the remaining stages run in software. The implemented system processes about 5 frames per second.
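
The view-blending step described above can be sketched roughly as follows. This NumPy outline assumes a linear pseudo-disparity model (disparity = increment × depth) and invented function names, and it omits the paper's stereo matching, CGH generation, and hologram reconstruction stages entirely.

```python
# Rough sketch of disparity-based intermediate-view generation: pixels from the
# leftmost and rightmost views are shifted toward the target viewpoint, the two
# warps are blended, and pixels covered by neither view (dis-occlusions) are
# flagged for inpainting. Names and the linear model are assumptions.
import numpy as np

def warp_view(rgb, depth, disp_per_depth, shift_sign, frac):
    """Shift each pixel horizontally by frac * disp_per_depth * depth."""
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    hole = np.ones((h, w), dtype=bool)          # True where no pixel lands
    for y in range(h):
        for x in range(w):
            d = int(round(shift_sign * frac * disp_per_depth * depth[y, x]))
            xt = x + d
            if 0 <= xt < w:
                out[y, xt] = rgb[y, x]
                hole[y, xt] = False
    return out, hole

def intermediate_view(rgb_l, depth_l, rgb_r, depth_r, disp_per_depth, alpha):
    """alpha in [0, 1]: 0 = leftmost viewpoint, 1 = rightmost viewpoint."""
    from_left, hole_l = warp_view(rgb_l, depth_l, disp_per_depth, -1, alpha)
    from_right, hole_r = warp_view(rgb_r, depth_r, disp_per_depth, +1, 1 - alpha)
    # Simple distance-weighted blend; a real system would prefer whichever
    # view actually has a valid pixel at each location.
    mid = ((1 - alpha) * from_left + alpha * from_right).astype(rgb_l.dtype)
    holes = hole_l & hole_r                     # dis-occlusions -> inpaint these
    return mid, holes
```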