Title/Summary/Keyword: Stage-GAN


Stage-GAN with Semantic Maps for Large-scale Image Super-resolution

  • Wei, Zhensong; Bai, Huihui; Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.8 / pp.3942-3961 / 2019
  • Recently, deep super-resolution networks have successfully learned the non-linear mapping from low-resolution inputs to high-resolution outputs. However, for large scaling factors, this approach has difficulty learning the relation between low-resolution and high-resolution images, which leads to poor restoration. In this paper, we propose Stage Generative Adversarial Networks (Stage-GAN) with semantic maps for image super-resolution (SR) at large scaling factors. We decompose the task of image super-resolution into a novel semantic-map-based reconstruction and refinement process. In the initial stage, semantic maps are generated from the given low-resolution images by Stage-0 GAN. In the next stage, the generated semantic maps from Stage-0 and the corresponding low-resolution images are used by Stage-1 GAN to yield high-resolution images. To remove reconstruction artifacts and blur from the high-resolution images, a Stage-2 GAN-based post-processing module is proposed in the last stage, which reconstructs high-resolution images with photo-realistic details. Extensive experiments and comparisons with other SR methods demonstrate that our proposed method can restore photo-realistic images with visual improvements. For scale factor ×8, our method performs favorably against other methods in terms of gradient similarity.
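
The three-stage flow described in this abstract can be sketched as a function composition. The `stage0_semantic_map`, `stage1_upscale`, and `stage2_refine` stand-ins below are illustrative assumptions (the actual stages are trained GAN generators); they only make the data flow for a ×8 factor concrete:

```python
import numpy as np

def stage0_semantic_map(lr_image, num_classes=8):
    """Stage-0 stand-in: predict a per-pixel semantic label map from the LR input.
    Here we simply quantize intensity into pseudo-semantic classes."""
    return np.floor(lr_image * num_classes).clip(0, num_classes - 1).astype(int)

def stage1_upscale(lr_image, semantic_map, scale=8):
    """Stage-1 stand-in: produce an HR image from the LR image and its semantic
    map. Here: plain nearest-neighbour x8 upsampling (the map is unused)."""
    return np.kron(lr_image, np.ones((scale, scale)))

def stage2_refine(hr_image):
    """Stage-2 stand-in: post-process to suppress artifacts and blur."""
    return hr_image.clip(0.0, 1.0)

lr = np.random.rand(16, 16)                  # toy low-resolution input
sem = stage0_semantic_map(lr)                # initial stage: semantic map
hr = stage2_refine(stage1_upscale(lr, sem))  # next stage + post-processing
```

With a 16×16 input and scale 8, the pipeline yields a 128×128 output, mirroring the decomposition into reconstruction and refinement.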

Deep Learning-based Single Image Generative Adversarial Network: Performance Comparison and Trends (딥러닝 기반 단일 이미지 생성적 적대 신경망 기법 비교 분석)

  • Jeong, Seong-Hun; Kong, Kyeongbo
    • Journal of Broadcast Engineering / v.27 no.3 / pp.437-450 / 2022
  • Generative adversarial networks (GANs) have demonstrated remarkable success in image synthesis. However, since GANs are unstable during training on large datasets, they are difficult to apply to various application fields. A single image GAN learns the internal distribution of a single image in order to generate diverse images from it. In this paper, we investigate five single image GAN models: SinGAN, ConSinGAN, InGAN, DeepSIM, and One-Shot GAN. We compare the performance of each model and analyze the pros and cons of single image GANs.

Document Image Binarization by GAN with Unpaired Data Training

  • Dang, Quang-Vinh; Lee, Guee-Sang
    • International Journal of Contents / v.16 no.2 / pp.8-18 / 2020
  • Data is critical in deep learning, but data scarcity often occurs in research, especially in the preparation of paired training data. In this paper, document image binarization with unpaired data is studied by introducing adversarial learning, excluding the need for supervised or labeled datasets. However, a simple extension of previous unpaired training to binarization inevitably leads to poor performance compared to paired-data training. Thus, a new deep learning approach is proposed that introduces a multi-diversity of higher-quality generated images. In this paper, a two-stage model is proposed that comprises a generative adversarial network (GAN) followed by a U-net network. In the first stage, the GAN uses the unpaired image data to create paired image data. In the second stage, the generated paired image data are passed through the U-net network for binarization. Thus, the trained U-net becomes the binarization model at test time. The proposed model has been evaluated on the publicly available DIBCO dataset and outperforms other techniques on unpaired training data. The paper shows the potential of using unpaired data for binarization, for the first time in the literature, which can be further improved to replace paired-data training for binarization in the future.
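
For context on the binarization task itself (not the paper's GAN + U-net pipeline), a classical non-learned baseline is Otsu's global threshold, which the learned methods are typically compared against. A minimal numpy implementation, as a sketch:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for an 8-bit grayscale image by maximizing
    the between-class variance over all candidate thresholds."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w_b, sum_b = 0, 0.0          # background weight and intensity sum
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b        # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b        # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy bimodal "document": dark ink (50) on a bright page (200).
img = np.concatenate([np.full(100, 50), np.full(100, 200)]).reshape(10, 20)
t = otsu_threshold(img)
binary = img > t                 # True = background/page, False = ink
```

Learned approaches such as the two-stage model above are aimed at degraded documents where a single global threshold like this fails.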

FD-StackGAN: Face De-occlusion Using Stacked Generative Adversarial Networks

  • Jabbar, Abdul; Li, Xi; Iqbal, M. Munawwar; Malik, Arif Jamal
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.7 / pp.2547-2567 / 2021
  • It has been widely acknowledged that occlusion adversely affects the performance of many face recognition algorithms, so it is crucial to solve the problem of face image occlusion in face recognition. To this end, this paper aims to automatically de-occlude the major or discriminative regions of the human face to improve face recognition performance. To achieve this, we decompose the generative process into two key stages and employ a separate generative adversarial network (GAN)-based network in each stage. The first stage generates an initial coarse face image without an occlusion mask. The second stage refines the result of the first stage by forcing it closer to real face images or the ground truth. To increase performance and minimize artifacts in the generated result, a new refine loss (combining reconstruction, perceptual, and adversarial losses) is used to measure all differences between the generated de-occluded face image and the ground truth. Furthermore, we build a dataset of occluded face images and corresponding occlusion-free face images. We trained our model on this new dataset and later tested it on real-world face images. The experimental results (qualitative and quantitative) and the comparative study confirm the robustness and effectiveness of the proposed work in removing challenging occlusion masks with various structures, sizes, shapes, types, and positions.
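
A combined "refine loss" of the kind this abstract describes is a weighted sum of reconstruction, perceptual, and adversarial terms. The sketch below is a minimal illustration; the weights, the gradient-based "perceptual" surrogate, and the function name are our assumptions, not the paper's actual formulation (which would use a deep feature extractor and a trained discriminator):

```python
import numpy as np

def refine_loss(generated, target, disc_score, w_rec=1.0, w_per=0.1, w_adv=0.01):
    """Weighted sum of reconstruction, perceptual-surrogate, and adversarial
    terms. `disc_score` is the discriminator's realness score in (0, 1]."""
    rec = np.abs(generated - target).mean()              # L1 reconstruction loss
    feat = lambda x: np.diff(x, axis=0)                  # crude "feature": vertical gradients
    per = np.abs(feat(generated) - feat(target)).mean()  # perceptual surrogate
    adv = -np.log(max(disc_score, 1e-8))                 # non-saturating generator term
    return w_rec * rec + w_per * per + w_adv * adv

generated = np.zeros((8, 8))
target = np.full((8, 8), 0.5)
loss = refine_loss(generated, target, disc_score=0.5)
```

A perfect reconstruction with a fully fooled discriminator drives all three terms to zero, while any mismatch increases the loss.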

A novel therapeutic approach of Hachimi-jio-gan to diabetes and its complications

  • Yokozawa, Takako; Yamabe, Noriko; Cho, Eun-Ju
    • Advances in Traditional Medicine / v.5 no.2 / pp.75-91 / 2005
  • Great efforts have been made to improve both the quality of life and the life expectancy of diabetic patients by treating problems associated with chronic complications such as neuropathy, retinopathy and nephropathy. In particular, diabetes carries an increased risk of developing several types of kidney disease, and the predominant cause of end-stage renal disease in patients with this disorder is diabetic nephropathy. Therefore, prevention of the occurrence and progression of diabetes and its complications has become a very important issue. The scientific observations in this study of animal models of streptozotocin-induced diabetes, spontaneously occurring diabetes and diabetic nephropathy suggest that one of the Kampo prescriptions, Hachimi-jio-gan, comprising eight constituents, is a novel therapeutic agent.

Research Trends of Generative Adversarial Networks and Image Generation and Translation (GAN 적대적 생성 신경망과 이미지 생성 및 변환 기술 동향)

  • Jo, Y.J.; Bae, K.M.; Park, J.Y.
    • Electronics and Telecommunications Trends / v.35 no.4 / pp.91-102 / 2020
  • Recently, generative adversarial networks (GANs) have rapidly emerged as a field of research in which many studies show overwhelming results. Initially, this was at the level of imitating the training dataset. However, GANs are now useful in many fields, such as transformation of data categories, restoration of erased parts of images, copying human facial expressions, and creation of artworks in a dead painter's style. Although many outstanding research achievements have attracted attention recently, GANs still face many challenges. First, they require large memory capacity for research. Second, there are still technical limitations in processing high-resolution images over 4K. Third, many GAN learning methods suffer from instability in the training stage. Nevertheless, recent research results show images that are difficult to distinguish from real ones even with the naked eye, and resolutions of 4K and above are being developed. With the increase in image quality and resolution, many applications in the fields of design and image and video editing are now available, including those that turn a simple sketch into a photorealistic image or easily modify unnecessary parts of an image or a video. In this paper, we discuss how GANs started, including the base architecture and the latest GAN technologies used in high-resolution, high-quality image creation, image and video editing, style translation, and content transfer.

Image Augmentation of Paralichthys Olivaceus Disease Using SinGAN Deep Learning Model (SinGAN 딥러닝 모델을 이용한 넙치 질병 이미지 증강)

  • Son, Hyun Seung; Choi, Han Suk
    • The Journal of the Korea Contents Association / v.21 no.12 / pp.322-330 / 2021
  • In modern aquaculture, mass mortality is a critical issue that determines the success of the aquaculture business. If a fish disease is not detected at an early stage on the farm, it spreads quickly because the farm is a closed environment. Therefore, early detection of diseases is crucial to prevent mass mortality of farmed fish. Recently, deep learning-based automatic identification of fish diseases has been widely used, but object identification remains difficult due to the shortage of fish disease images. Therefore, this paper suggests a method to generate a large number of fish disease images by synthesizing normal images and disease images using the SinGAN deep learning model, in order to solve the lack of fish disease images. We generate images of the three most frequently occurring Paralichthys olivaceus diseases, Scuticociliatida, Vibriosis, and Lymphocytosis, and compare them with the original images. In this study, a total of 330 Scuticociliatida images, 110 Vibriosis images, and 110 Lymphocytosis images were made by synthesizing 10 disease patterns with 11 normal halibut images, and 1,320 images were produced by quadrupling these images.
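
The per-disease counts follow from combining each of the 10 disease patterns with each of the 11 normal images (10 × 11 = 110). A toy cut-and-paste illustration of that combination, as a sketch only; the real method uses SinGAN to blend the patterns realistically rather than pasting them:

```python
import numpy as np

def paste_pattern(normal, pattern, top, left):
    """Crude stand-in for SinGAN synthesis: copy a disease-pattern patch
    onto a normal fish image at the given position."""
    out = normal.copy()
    h, w = pattern.shape
    out[top:top + h, left:left + w] = pattern
    return out

# 11 toy "normal halibut" images and 10 toy "disease pattern" patches.
normals = [np.zeros((32, 32)) for _ in range(11)]
patterns = [np.ones((8, 8)) * (i + 1) for i in range(10)]

# Every normal image combined with every pattern: 11 x 10 = 110 images,
# matching the per-disease count reported in the abstract.
augmented = [paste_pattern(n, p, 4, 4) for n in normals for p in patterns]
```

The quality gap between this naive pasting and a learned single-image model is exactly what the paper's SinGAN-based comparison with original images addresses.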

Combining Conditional Generative Adversarial Network and Regression-based Calibration for Cloud Removal of Optical Imagery (광학 영상의 구름 제거를 위한 조건부 생성적 적대 신경망과 회귀 기반 보정의 결합)

  • Kwak, Geun-Ho; Park, Soyeon; Park, No-Wook
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1357-1369 / 2022
  • Cloud removal is an essential image processing step for any task requiring time-series optical images, such as vegetation monitoring and change detection. This paper presents a two-stage cloud removal method that combines conditional generative adversarial networks (cGANs) with regression-based calibration to construct a cloud-free time-series optical image set. In the first stage, the cGANs generate initial prediction results using quantitative relationships between optical and synthetic aperture radar images. In the second stage, the relationships between the predicted results and the actual values in non-cloud areas are first quantified via random forest-based regression modeling and then used to calibrate the cGAN-based prediction results. The potential of the proposed method was evaluated in a cloud removal experiment using Sentinel-2 and COSMO-SkyMed images in the rice field cultivation area of Gimje. The cGAN model could effectively predict the reflectance values in the cloud-contaminated rice fields where severe changes in physical surface conditions occurred. Moreover, the regression-based calibration in the second stage could improve the prediction accuracy, compared with a regression-based cloud removal method using a supplementary image that is temporally distant from the target image. These experimental results indicate that the proposed method can be effectively applied to restore cloud-contaminated areas when cloud-free optical images are unavailable for environmental monitoring.
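
The second-stage calibration described above can be sketched as: fit a regression between predicted and observed reflectance over cloud-free pixels, then apply that mapping to the cGAN predictions in cloudy pixels. The sketch below uses ordinary least squares as a simplified stand-in for the paper's random forest regression, on synthetic data:

```python
import numpy as np

def fit_linear_calibration(pred_clear, obs_clear):
    """Fit obs = a * pred + b on cloud-free pixels and return the mapping.
    (Linear stand-in for the paper's random-forest regression.)"""
    a, b = np.polyfit(pred_clear, obs_clear, deg=1)
    return lambda x: a * x + b

rng = np.random.default_rng(0)
pred_clear = rng.uniform(0.1, 0.5, 200)   # cGAN reflectance in cloud-free area
obs_clear = 1.2 * pred_clear + 0.05       # actual reflectance there (synthetic bias)

calibrate = fit_linear_calibration(pred_clear, obs_clear)
pred_cloudy = np.array([0.2, 0.3])        # cGAN output under the cloud mask
calibrated = calibrate(pred_cloudy)       # bias-corrected cloudy-area values
```

The calibration corrects systematic bias in the generative prediction while leaving the cGAN to supply spatial detail where no optical observation exists.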

ISFRNet: A Deep Three-stage Identity and Structure Feature Refinement Network for Facial Image Inpainting

  • Yan Wang; Jitae Shin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.881-895 / 2023
  • Modern image inpainting techniques based on deep learning have achieved remarkable performance, and more and more people are working on repairing more complex and larger missing areas, although this remains challenging, especially for facial image inpainting. For a face image with a huge missing area, there are very few valid pixels available; however, people have the ability to imagine the complete picture in their mind according to their subjective will. It is important to simulate this capability while maintaining the identity features of the face as much as possible. To achieve this goal, we propose a three-stage network model, which we refer to as the identity and structure feature refinement network (ISFRNet). ISFRNet is based on 1) a pre-trained pSp-styleGAN model that generates an extremely realistic face image with rich structural features; 2) a shallow structured network with a small receptive field; and 3) a modified U-net with two encoders and a decoder, which has a large receptive field. We choose peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), L1 loss and learned perceptual image patch similarity (LPIPS) to evaluate our model. When the missing region is 20%-40%, these four metric scores for our model are 28.12, 0.942, 0.015 and 0.090, respectively. When the lost area is between 40% and 60%, the metric scores are 23.31, 0.840, 0.053 and 0.177, respectively. Our inpainting network not only guarantees excellent face identity feature recovery but also exhibits state-of-the-art performance compared to other multi-stage refinement models.
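
PSNR figures like those quoted above follow from the standard definition in terms of mean squared error. A minimal implementation for images normalized to [0, 1], as a sketch:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((4, 4))
test = np.full((4, 4), 0.1)   # uniform error of 0.1 everywhere -> MSE = 0.01
value = psnr(ref, test)       # 10 * log10(1 / 0.01) = 20 dB
```

Higher is better for PSNR and SSIM, while lower is better for L1 and LPIPS, which is why the two bands of scores above move in opposite directions as the missing area grows.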

A Study on Architectural Image Generation using Artificial Intelligence Algorithm - A Fundamental Study on the Generation of Due Diligence Images Based on Architectural Sketch - (인공지능 알고리즘을 활용한 건축 이미지 생성에 관한 연구 - 건축 스케치 기반의 실사 이미지 생성을 위한 기초적 연구 -)

  • Han, Sang-Kook; Shin, Dong-Youn
    • Journal of KIBIM / v.11 no.2 / pp.54-59 / 2021
  • In the process of designing a building, expressing the designer's ideas through images is essential. However, it is expensive and time-consuming for a designer to analyze every individual case image to generate a hypothetical design. This study aims to visualize the designer's basic design draft sketch as a photorealistic image using a Generative Adversarial Network (GAN) based on continuously accumulated architectural case images. Through this, we propose a method to build an automated visualization environment using artificial intelligence and to visualize the architectural idea conceived by the designer in the architectural planning stage faster and more cheaply than in the past. This study was conducted using approximately 20,000 images. In our study, the GAN algorithm allowed us to render primary materials and shades within 2 seconds, but lacked accuracy in material and shading representation. We plan to add image data to address this in a follow-up study.