• Title/Summary/Keyword: CycleGAN


The Method for Colorizing SAR Images of Kompsat-5 Using Cycle GAN with Multi-scale Discriminators (다양한 크기의 식별자를 적용한 Cycle GAN을 이용한 다목적실용위성 5호 SAR 영상 색상 구현 방법)

  • Ku, Wonhoe; Chun, Daewon
    • Korean Journal of Remote Sensing, v.34 no.6_3, pp.1415-1425, 2018
  • Kompsat-5 is the first Korean Earth observation satellite equipped with a SAR. SAR images are generated by receiving microwave signals emitted from the SAR antenna and reflected back from objects. Because microwave wavelengths are longer than the size of atmospheric particles, the signal can penetrate clouds and fog, and high-resolution images can be obtained day and night. However, SAR images contain no color information. To overcome this limitation, the SAR images were colorized using Cycle GAN, a deep learning model developed for domain translation. Training of Cycle GAN is unstable because of its unsupervised learning on an unpaired dataset. In this paper, we therefore propose MS Cycle GAN, which applies multi-scale discriminators to resolve this training instability and to improve colorization performance. To compare the colorization performance of MS Cycle GAN and Cycle GAN, images generated by both models were evaluated qualitatively and quantitatively. Training with multi-scale discriminators reduced the generator and discriminator losses significantly compared to the conventional Cycle GAN, and the images generated by MS Cycle GAN matched the characteristics of regions such as leaves, rivers, and land well.
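The multi-scale discriminator idea above can be sketched minimally: the same discriminator score is computed on the image at several resolutions and the results are averaged. This is a hedged illustration, not the paper's actual model; `downsample`, `multiscale_real_score`, and the trivial scoring function are assumptions for the sketch.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool an (H, W) image by an integer factor."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multiscale_real_score(img, score_fn, scales=(1, 2, 4)):
    """Average a discriminator's score over several image scales,
    so coarse structure and fine texture are both judged."""
    return sum(score_fn(downsample(img, s)) for s in scales) / len(scales)

# toy "discriminator": mean brightness as a stand-in score
score_fn = lambda x: float(x.mean())
img = np.ones((8, 8))
print(multiscale_real_score(img, score_fn))  # 1.0 at every scale
```

In a real multi-scale setup each scale typically has its own discriminator network and the per-scale adversarial losses are summed.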

Improved CycleGAN for underwater ship engine audio translation (수중 선박엔진 음향 변환을 위한 향상된 CycleGAN 알고리즘)

  • Ashraf, Hina; Jeong, Yoon-Sang; Lee, Chong Hyun
    • The Journal of the Acoustical Society of Korea, v.39 no.4, pp.292-302, 2020
  • Machine learning algorithms have made immense contributions to various fields, including sonar and radar applications. The recently developed Cycle-Consistency Generative Adversarial Network (CycleGAN), a variant of GAN, has been used successfully for unpaired image-to-image translation. We present a modified CycleGAN for translating underwater ship engine sounds with high perceptual quality. The proposed network is composed of an improved generator trained to translate underwater audio from one vessel type to another, an improved discriminator that classifies data as real or fake, and a modified cycle-consistency loss function. Quantitative and qualitative analyses of the proposed CycleGAN are performed on the publicly available underwater dataset ShipsEar by evaluating and comparing Mel-cepstral distortion, pitch contour matching, nearest-neighbor comparison, and mean opinion score against existing algorithms. The results demonstrate the effectiveness of the proposed network.
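Mel-cepstral distortion, used above as an evaluation metric, has a standard closed form. A small sketch of the per-frame variant (assuming the 0th cepstral coefficient has already been excluded, as is common):

```python
import numpy as np

def mel_cepstral_distortion(mc_ref, mc_test):
    """Per-frame MCD in dB between two Mel-cepstral coefficient vectors:
    (10 / ln 10) * sqrt(2 * sum_d (c_d - c'_d)^2)."""
    diff = np.asarray(mc_ref, dtype=float) - np.asarray(mc_test, dtype=float)
    return (10.0 / np.log(10)) * np.sqrt(2.0 * np.sum(diff ** 2))

# identical frames give zero distortion
print(mel_cepstral_distortion([1.0, 0.5, -0.2], [1.0, 0.5, -0.2]))  # 0.0
```

For an utterance, the per-frame values are averaged, usually after aligning the two sequences with dynamic time warping.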

Many-to-many voice conversion experiments using a Korean speech corpus (다수 화자 한국어 음성 변환 실험)

  • Yook, Dongsuk; Seo, HyungJin; Ko, Bonggu; Yoo, In-Chul
    • The Journal of the Acoustical Society of Korea, v.41 no.3, pp.351-358, 2022
  • Recently, Generative Adversarial Networks (GAN) and Variational AutoEncoders (VAE) have been applied to voice conversion methods that can use non-parallel training data. In particular, Conditional Cycle-Consistent Generative Adversarial Networks (CC-GAN) and Cycle-Consistent Variational AutoEncoders (CycleVAE) show promising results for many-to-many voice conversion among multiple speakers. However, the number of speakers has been relatively small in previous voice conversion studies using CC-GANs and CycleVAEs. In this paper, we extend the number of speakers to 100 and analyze the performance of the many-to-many voice conversion methods experimentally. The experiments show that CC-GAN yields 4.5 % lower Mel-Cepstral Distortion (MCD) for a small number of speakers, whereas CycleVAE yields 12.7 % lower MCD within a limited training time for a large number of speakers.
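The cycle-consistency term shared by CC-GAN- and CycleVAE-style converters can be sketched as the reconstruction error after a round trip between two domains. `g_ab` and `g_ba` below are hypothetical stand-ins for the learned conversion functions, not the paper's models:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(np.asarray(a) - np.asarray(b))))

def cycle_consistency_loss(x, g_ab, g_ba):
    """L_cyc = ||G_BA(G_AB(x)) - x||_1: converting A -> B -> A
    should reproduce the original input."""
    return l1(g_ba(g_ab(x)), x)

# identity mappings reconstruct perfectly, so the loss is zero
x = np.arange(4.0)
print(cycle_consistency_loss(x, lambda v: v, lambda v: v))  # 0.0
```

This term is what allows training on non-parallel data: no paired (source, target) utterances are needed, only the requirement that the round trip be consistent.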

The Effect of Training Patch Size and ConvNeXt application on the Accuracy of CycleGAN-based Satellite Image Simulation (학습패치 크기와 ConvNeXt 적용이 CycleGAN 기반 위성영상 모의 정확도에 미치는 영향)

  • Won, Taeyeon; Jo, Su Min; Eo, Yang Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.40 no.3, pp.177-185, 2022
  • A method for restoring occluded areas in high-resolution optical satellite images through deep learning was proposed, using reference images taken with the same type of sensor. To keep the simulated occluded region naturally continuous with the surrounding image while preserving the pixel distribution of the original patch-segmented image as much as possible, a CycleGAN (Cycle Generative Adversarial Network) with ConvNeXt blocks was applied to three experimental regions. In addition, we compared results for a training patch size of 512×512 pixels with a doubled size of 1024×1024 pixels. Across the three regions with different characteristics, the ConvNeXt CycleGAN methodology showed an improved R2 value compared with the conventional CycleGAN images and histogram-matched images. In the experiment on training patch size, an R2 value of about 0.98 was obtained with 1024×1024 pixel patches. Furthermore, comparing the pixel distribution of each image band showed that the simulation trained with the larger patch size produced a histogram distribution more similar to the original image. Therefore, ConvNeXt CycleGAN, which improves on the conventional CycleGAN and histogram-matched images, can derive simulation results close to the original image and perform a successful simulation.
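The R2 (coefficient of determination) used above to score simulated images against originals can be computed directly from pixel arrays. A minimal sketch with toy data (the study's exact evaluation pipeline is not given in the abstract):

```python
import numpy as np

def r_squared(original, simulated):
    """Coefficient of determination between original and simulated pixels:
    1 - SS_res / SS_tot, where 1.0 means a perfect match."""
    original = np.asarray(original, dtype=float).ravel()
    simulated = np.asarray(simulated, dtype=float).ravel()
    ss_res = np.sum((original - simulated) ** 2)
    ss_tot = np.sum((original - original.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

orig = np.array([10.0, 20.0, 30.0, 40.0])
print(r_squared(orig, orig))        # 1.0 (perfect simulation)
print(r_squared(orig, orig + 1.0))  # slightly below 1.0
```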

An Experiment on Image Restoration Applying the Cycle Generative Adversarial Network to Partial Occlusion Kompsat-3A Image

  • Won, Taeyeon; Eo, Yang Dam
    • Korean Journal of Remote Sensing, v.38 no.1, pp.33-43, 2022
  • This study presents a method for restoring an optical satellite image distorted and occluded by fog, haze, and clouds to one that minimizes these degradation factors, using a peripheral image of the same type as reference. Restoring only the partially occluded region reduces the time and cost of re-photographing. To preserve the original image's pixel values as much as possible and maintain continuity between restored and unrestored areas, a simulation restoration technique based on a modified Cycle Generative Adversarial Network (CycleGAN) was developed. The accuracy of the simulated image was analyzed by comparing CycleGAN and histogram matching, as well as their pixel value distributions, with the original image. For Site 1 (of three sites), the root mean square error and R2 of CycleGAN were 169.36 and 0.9917, respectively, lower errors than those of histogram matching (170.43 and 0.9896). Comparing the mean and standard deviation of the images simulated by CycleGAN and histogram matching against the ground-truth pixel values also confirmed that the CycleGAN methodology was closer to the ground truth, as was its histogram distribution.
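Histogram matching, the baseline compared against CycleGAN here, can be sketched as a plain empirical-CDF mapping. This is a simplified stand-in, not the study's actual implementation:

```python
import numpy as np

def histogram_match(source, reference):
    """Remap source pixel values so their empirical CDF matches the
    reference image's CDF (per-band histogram matching)."""
    src = np.asarray(source).ravel()
    ref = np.asarray(reference).ravel()
    src_vals, src_idx, src_counts = np.unique(
        src, return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(ref, return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size
    # for each source quantile, look up the reference value at that quantile
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(np.asarray(source).shape)

src = np.array([[0, 0], [1, 1]])
ref = np.array([[10, 10], [20, 20]])
print(histogram_match(src, ref))  # [[10. 10.] [20. 20.]]
```

For multi-band satellite imagery this would be applied to each band independently.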

Improved Cycle GAN Performance By Considering Semantic Loss (의미적 손실 함수를 통한 Cycle GAN 성능 개선)

  • Tae-Young Jeong; Hyun-Sik Lee; Ye-Rim Eom; Kyung-Su Park; Yu-Rim Shin; Jae-Hyun Moon
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.908-909, 2023
  • Recently, several generative models have emerged and are being used across industries. Among them, Cycle GAN is still used in fields such as style transfer, medical care, and autonomous driving. In this paper, we propose two methods to improve the performance of the Cycle GAN model. First, the ReLU activation function previously used in the generator is replaced with Leaky ReLU. Second, a new loss function is proposed that, using a VGG feature extractor, considers the semantic level rather than focusing only on the pixel level. The proposed model showed quality improvements on the test set in the art domain, and it can be expected to improve performance when applied to other domains in the future.
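The two proposed changes, Leaky ReLU in the generator and a feature-space (semantic) loss, can be sketched as follows. Here `feat_fn` is a hypothetical stand-in for the VGG feature extractor; the paper's exact loss weighting is not given in the abstract:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    """Leaky ReLU passes a small fraction of negative inputs instead of
    zeroing them, which avoids dead units and can stabilize GAN training."""
    return np.where(x > 0, x, alpha * x)

def semantic_loss(feat_fn, generated, target):
    """Compare images in a feature space (feat_fn stands in for a VGG
    feature extractor) instead of comparing raw pixels."""
    return float(np.mean((feat_fn(generated) - feat_fn(target)) ** 2))

x = np.array([-1.0, 0.0, 2.0])
print(leaky_relu(x))  # [-0.2  0.   2. ]
```

A perceptual loss like this penalizes differences in texture and structure that per-pixel losses treat as negligible.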

A study of interior style transformation with GAN model (GAN을 활용한 인테리어 스타일 변환 모델에 관한 연구)

  • Choi, Jun-Hyeck; Lee, Jae-Seung
    • Journal of KIBIM, v.12 no.1, pp.55-61, 2022
  • Recently, demand for designing one's own space has been increasing with the rapid growth of the home furnishing market. However, it is not easy to compare the style of a space before and after construction. This study aims to translate a real image into another style using a GAN model trained on interior images. To implement this, we first established style criteria and collected modern, natural, and classic style images, then experimented with ResNet, UNet, and gradient penalty variants of the CycleGAN algorithm. After training, the model recognized common indoor elements such as floors, walls, and furniture, and converted colors and materials to suit the target interior style. On the other hand, the shapes of furniture and ornaments and detailed patterns were difficult for the CycleGAN model to recognize, and accuracy was lacking there. Although UNet converted images more radically than ResNet, its results were more stained. The GAN produced results within 2 seconds, which makes it possible to quickly and easily visualize and compare an interior space before and after construction. Furthermore, this GAN can also be used for design rendering, including interiors.

Multi Cycle Consistent Adversarial Networks for Multi Attribute Image to Image Translation

  • Jo, Seok Hee; Cho, Kyu Cheol
    • Journal of the Korea Society of Computer and Information, v.25 no.9, pp.63-69, 2020
  • Image-to-image translation is a technology that creates a target image from an input image, and has recently shown high performance in creating more realistic images by utilizing GAN, an unsupervised learning structure. Accordingly, there are various studies on image-to-image translation using GAN. However, most image-to-image translation methods target a single attribute, while real-world data exhibit a variety of attributes that cannot be explained by one feature. Translating multiple attributes, with the generation process divided by attribute, can therefore serve image-to-image translation better. In this paper, we propose Multi CycleGAN, a dual-attribute translation structure, based on CycleGAN, which has shown high performance among GAN-based image-to-image translation structures. This structure implements a dual transformation in which three domains conduct two-way learning to learn two attributes of the input domain. Experiments show that images produced by the new structure retain the properties of the input domain while applying the target attributes with high quality. Using this structure, more diverse images can be created, so image generation can be expected to be utilized in more diverse areas.
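The dual transformation structure can be read as two cycle-consistency terms sharing the input domain A, one per attribute domain. This is a hedged sketch with hypothetical mapping names (`ab`/`ba` for attribute B, `ac`/`ca` for attribute C), not the paper's exact objective:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(np.asarray(a) - np.asarray(b))))

def multi_cycle_loss(x, ab, ba, ac, ca):
    """Two cycle-consistency terms over three domains: the input domain A
    must round-trip consistently through attribute domain B and through C."""
    return l1(ba(ab(x)), x) + l1(ca(ac(x)), x)

identity = lambda v: v
x = np.arange(3.0)
print(multi_cycle_loss(x, identity, identity, identity, identity))  # 0.0
```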

A Cycle GAN-based Wallpaper Image Transformation Method for Interior Simulation (Cycle GAN 기반 벽지 인테리어 이미지 변환 기법)

  • Seong-Hoon Kim; Yo-Han Kim; Sun-Yong Kim
    • The Journal of the Korea institute of electronic communication sciences, v.18 no.2, pp.349-354, 2023
  • As the population interested in interior design has been increasing, the global interior market has grown significantly, and global interior companies are developing and providing simulation services for various interior elements. Although wallpaper design is the most important interior element, existing wallpaper design simulation services are difficult to use because of drawbacks such as differences between expected and actual results, long simulation times, and the need for professional skills. We propose a wallpaper image transformation method for interior design using a cycle generative adversarial network (Cycle GAN). The proposed method demonstrates that users can simulate various types of wallpaper design on interior image data within a short time.

Comparison of GAN Deep Learning Methods for Underwater Optical Image Enhancement

  • Kim, Hong-Gi; Seo, Jung-Min; Kim, Soo Mee
    • Journal of Ocean Engineering and Technology, v.36 no.1, pp.32-40, 2022
  • Underwater optical images face various limitations that degrade image quality compared with optical images taken in the atmosphere. Attenuation that depends on the wavelength of light, together with reflection by very small floating particles, causes low contrast, blurry clarity, and color degradation in underwater images. We constructed an image dataset of Korean seas and enhanced it by learning the characteristics of underwater images with the deep learning techniques CycleGAN (cycle-consistent adversarial network), UGAN (underwater GAN), and FUnIE-GAN (fast underwater image enhancement GAN). In addition, the underwater optical images were enhanced using the image processing technique of image fusion. For a quantitative performance comparison, we calculated UIQM (underwater image quality measure), which evaluates enhancement in terms of colorfulness, sharpness, and contrast, and UCIQE (underwater color image quality evaluation), which evaluates it in terms of chroma, luminance, and saturation. For 100 underwater images taken in Korean seas, the average UIQMs of CycleGAN, UGAN, and FUnIE-GAN were 3.91, 3.42, and 2.66, respectively, and the average UCIQEs were 29.9, 26.77, and 22.88, respectively. The average UIQM and UCIQE of image fusion were 3.63 and 23.59, respectively. CycleGAN and UGAN improved image quality qualitatively and quantitatively in various underwater environments, while the performance of FUnIE-GAN varied with the underwater environment. Image fusion performed well in color correction and sharpness enhancement. By improving the visibility of underwater scenes, these methods are expected to be useful for monitoring underwater work and for the autonomous operation of unmanned vehicles.
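The colorfulness component that UIQM scores can be illustrated with the simpler Hasler-Süsstrunk colorfulness measure; this is a stand-in for illustration, as the actual UIQM colorfulness term is formulated differently:

```python
import numpy as np

def colorfulness(img):
    """Hasler-Süsstrunk colorfulness of an (H, W, 3) RGB image: spread and
    magnitude of the opponent-color channels rg = R - G and yb = (R+G)/2 - B."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    rg = r - g
    yb = 0.5 * (r + g) - b
    return float(np.sqrt(rg.std() ** 2 + yb.std() ** 2)
                 + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))

gray = np.full((4, 4, 3), 128, dtype=np.uint8)
print(colorfulness(gray))  # 0.0, since a gray image has no colorfulness
```

Underwater attenuation pushes images toward monochrome blue-green, so enhancement methods that restore color raise measures of this kind.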