• Title/Summary/Keyword: adversarial training

Search Results: 105

Synthetic Data Augmentation for Plant Disease Image Generation using GAN (GAN을 이용한 식물 병해 이미지 합성 데이터 증강)

  • Nazki, Haseeb;Lee, Jaehwan;Yoon, Sook;Park, Dong Sun
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2018.05a
    • /
    • pp.459-460
    • /
    • 2018
  • In this paper, we present a data augmentation method that generates synthetic plant disease images using generative adversarial networks (GANs). We propose a training scheme that first enlarges the training set with classical data augmentation techniques and then further increases its size and diversity by applying GAN techniques for synthetic data augmentation. Our method is demonstrated on a limited dataset of 2789 images of tomato plant diseases (Gray mold, Canker, Leaf mold, Plague, Leaf miner, Whitefly, etc.).
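The first stage of this two-stage scheme, classical augmentation before GAN-based synthesis, can be sketched as follows (an illustrative numpy sketch, not the authors' code; the eight-variant flip/rotate set is an assumed choice):

```python
import numpy as np

def classical_augment(images):
    """Enlarge a small image set with flips and 90-degree rotations.

    `images` is a list of HxWxC uint8 arrays; returns 8 geometric
    variants of each input (4 rotations plus their mirror images).
    """
    out = []
    for img in images:
        rotations = [np.rot90(img, k) for k in range(4)]  # 0/90/180/270 deg
        out.extend(rotations)
        out.extend(np.fliplr(r) for r in rotations)       # mirrored copies
    return out

# A tiny toy array stands in for a real tomato-leaf photo.
leaf = np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3)
augmented = classical_augment([leaf])
print(len(augmented))  # 8 variants per input image
```

The GAN stage would then be trained on this enlarged set to synthesize further samples.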


Eyeglass Remover Network based on a Synthetic Image Dataset

  • Kang, Shinjin;Hahn, Teasung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.4
    • /
    • pp.1486-1501
    • /
    • 2021
  • The removal of accessories from the face is one of the essential pre-processing stages in face recognition. Despite its importance, however, a robust solution has not yet been provided. This paper proposes a network and a dataset construction methodology that effectively remove only the glasses from facial images. Obtaining a glasses-free image from an image with glasses by supervised learning requires both a network that performs the conversion and a set of paired training data. To this end, we created a large number of synthetic images of worn glasses using facial attribute transformation networks, and we adopted the conditional GAN (cGAN) framework for training. The trained network converts an in-the-wild face image with glasses into an image without glasses and operates stably even when the faces vary in race and age and wear different styles of glasses.
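The paired-data idea, each synthetic with-glasses image matched to its glasses-free original, can be illustrated generically. The sketch below uses plain alpha compositing as a stand-in; the paper itself synthesizes worn glasses with facial attribute transformation networks, which this does not reproduce:

```python
import numpy as np

def composite_glasses(face, glasses_rgba):
    """Overlay a synthetic glasses layer (RGBA) onto a face image (RGB).

    Standard alpha compositing; the (face_with_glasses, face) pair then
    serves as one supervised training example for the removal network.
    """
    rgb = glasses_rgba[..., :3].astype(np.float32)
    alpha = glasses_rgba[..., 3:4].astype(np.float32) / 255.0
    out = alpha * rgb + (1.0 - alpha) * face.astype(np.float32)
    return out.astype(np.uint8)

face = np.full((4, 4, 3), 200, dtype=np.uint8)  # plain stand-in 'face'
glasses = np.zeros((4, 4, 4), dtype=np.uint8)
glasses[1, :, 3] = 255                          # opaque black strip as 'frame'
paired_input = composite_glasses(face, glasses)
print(paired_input[1, 0], paired_input[0, 0])   # [0 0 0] [200 200 200]
```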

Comparison of Seismic Data Interpolation Performance using U-Net and cWGAN (U-Net과 cWGAN을 이용한 탄성파 탐사 자료 보간 성능 평가)

  • Yu, Jiyun;Yoon, Daeung
    • Geophysics and Geophysical Exploration
    • /
    • v.25 no.3
    • /
    • pp.140-161
    • /
    • 2022
  • Seismic data with regularly or irregularly missing traces are often obtained because of environmental and economic constraints during acquisition. Accordingly, seismic data interpolation is an essential step in seismic data processing. Research on machine-learning-based seismic data interpolation has recently been flourishing. In particular, the convolutional neural network (CNN) and the generative adversarial network (GAN), algorithms widely used for super-resolution problems in image processing, are also used for seismic data interpolation. In this study, a CNN-based algorithm, U-Net, and a GAN-based algorithm, the conditional Wasserstein GAN (cWGAN), were used as seismic data interpolation methods. The results and performances of the methods were evaluated thoroughly to find an optimal interpolation method that reconstructs missing seismic data with high accuracy. Model training and performance evaluation were divided into two cases (Cases I and II). In Case I, we trained the model using only regularly sampled data with 50% missing traces and evaluated it on six different test datasets combining regular and irregular sampling at various sampling ratios. In Case II, six different models were generated using training datasets sampled in the same way as the six test datasets, and these models were applied to the same test datasets used in Case I to compare the results. We found that cWGAN showed better prediction performance than U-Net, with higher PSNR and SSIM. However, cWGAN introduced additional noise into the prediction results; thus, an ensemble technique was applied to remove the noise and improve the accuracy. The cWGAN ensemble model successfully removed the noise and showed improved PSNR and SSIM compared with the individual models.
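The ensemble step can be illustrated with a toy example: averaging several noisy predictions of the same section cancels independent noise and raises PSNR (a minimal numpy sketch; the PSNR peak value, noise level, and ensemble size are assumed, not the paper's settings):

```python
import numpy as np

def psnr(reference, prediction, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference - prediction) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ensemble(predictions):
    """Average several cWGAN outputs; independent noise tends to cancel."""
    return np.mean(predictions, axis=0)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 8 * np.pi, 256)).reshape(16, 16)  # toy 'section'
noisy_preds = [clean + 0.1 * rng.standard_normal(clean.shape) for _ in range(8)]
single = psnr(clean, noisy_preds[0], peak=2.0)
averaged = psnr(clean, ensemble(noisy_preds), peak=2.0)
print(averaged > single)  # True: averaging raises PSNR
```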

Enhancing CT Image Quality Using Conditional Generative Adversarial Networks for Applying Post-mortem Computed Tomography in Forensic Pathology: A Phantom Study (사후전산화단층촬영의 법의병리학 분야 활용을 위한 조건부 적대적 생성 신경망을 이용한 CT 영상의 해상도 개선: 팬텀 연구)

  • Yebin Yoon;Jinhaeng Heo;Yeji Kim;Hyejin Jo;Yongsu Yoon
    • Journal of radiological science and technology
    • /
    • v.46 no.4
    • /
    • pp.315-323
    • /
    • 2023
  • Post-mortem computed tomography (PMCT) is commonly employed in forensic pathology. PMCT has mainly been performed as a whole-body scan with a wide field of view (FOV), which leads to decreased spatial resolution due to the increased pixel size. This study evaluates the potential of a super-resolution model based on conditional generative adversarial networks (CGAN) to enhance CT image quality. 1761 low-resolution images were obtained from a whole-body scan of a head phantom with a wide FOV, and 341 high-resolution images were obtained using the FOV appropriate for the head phantom. The 150 paired images in the total dataset were divided into a training set (96 paired images) and a validation set (54 paired images). Data augmentation with rotations and flips was performed to improve the effectiveness of training. To evaluate the performance of the proposed model, we used the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and the deep image structure and texture similarity (DISTS). We obtained the PSNR, SSIM, and DISTS values for the entire image and for the medial orbital wall, the zygomatic arch, and the temporal bone, where fractures often occur during head trauma. The proposed method improved PSNR by 13.14%, SSIM by 13.10%, and DISTS by 45.45% compared with the low-resolution images. The image quality of the three areas where fractures commonly occur during head trauma also improved compared with the low-resolution images.
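The reported percentage gains are relative improvements of each metric over the low-resolution baseline. A minimal sketch of that computation for PSNR (toy images, not the phantom data; the error magnitudes are invented for illustration):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def percent_gain(baseline, improved):
    """Relative metric improvement, as the abstract reports (e.g. PSNR +13.14%)."""
    return 100.0 * (improved - baseline) / baseline

ref = np.tile(np.arange(16, dtype=np.float64), (16, 1)) * 16  # toy reference
low_res = ref + 8.0    # wide-FOV image: larger reconstruction error
restored = ref + 4.0   # super-resolved output: smaller error
base, imp = psnr(ref, low_res), psnr(ref, restored)
gain = percent_gain(base, imp)
print(round(gain, 2))
```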

Deep survey using deep learning: generative adversarial network

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.2
    • /
    • pp.78.1-78.1
    • /
    • 2019
  • A huge number of faint objects have not been observed because of the lack of large, deep surveys. In this study, we demonstrate that a deep learning approach can produce a better-quality deep image from single-pass imaging, and could therefore be an alternative to the conventional image-stacking technique or to expensive large, deep surveys. Using data from Sloan Digital Sky Survey (SDSS) stripe 82, which provides repeatedly scanned imaging data, a training dataset was constructed: g-, r-, and i-band images of single-pass data as the input and the r-band co-added image as the target. Out of 151 SDSS fields that have been repeatedly scanned 34 times, 120 fields were used for training and 31 fields for validation. The frame size selected for training is 1k by 1k pixels. To avoid possible problems caused by the small number of training sets, frames are randomly selected within each field at every iteration of training. Every 5000 iterations, the performance was evaluated with the RMSE, the peak signal-to-noise ratio (given on a logarithmic scale), the structural similarity index (SSIM), and the difference in SSIM. We continued training until the GAN model with the best performance was found. We applied the best GAN model to NGC0941, located in SDSS stripe 82. By comparing the radial surface brightness and photometry errors of the images, we found that this technique could generate, from a single-pass image, a deep image with statistics close to those of the stacked image.
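The per-iteration random frame selection can be sketched as random cropping from a larger field (an illustrative sketch; the 256-pixel patch size and three-band layout are assumptions, not the paper's settings):

```python
import numpy as np

def random_patch(field, size, rng):
    """Crop a random size x size window from a larger field each iteration,
    so a small number of training fields yields varied training samples."""
    h, w = field.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    return field[y:y + size, x:x + size]

rng = np.random.default_rng(42)
field = rng.standard_normal((1024, 1024, 3))  # stands in for a 1k x 1k g/r/i frame
patch = random_patch(field, 256, rng)
print(patch.shape)  # (256, 256, 3)
```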


The Effect of Training Patch Size and ConvNeXt application on the Accuracy of CycleGAN-based Satellite Image Simulation (학습패치 크기와 ConvNeXt 적용이 CycleGAN 기반 위성영상 모의 정확도에 미치는 영향)

  • Won, Taeyeon;Jo, Su Min;Eo, Yang Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.3
    • /
    • pp.177-185
    • /
    • 2022
  • A deep learning method is proposed for restoring occluded areas in high-resolution optical satellite images by referring to images taken with the same type of sensor. To keep the simulated image naturally continuous with the surroundings of the occluded region, while preserving the pixel distribution of the original image as much as possible in the patch-segmented image, the CycleGAN (cycle generative adversarial network) method with a ConvNeXt block applied was used to analyze three experimental regions. In addition, we compared results for a training patch size of 512*512 pixels with a doubled size of 1024*1024 pixels. In experiments on three regions with different characteristics, the ConvNeXt-CycleGAN methodology showed an improved R2 value compared with the existing CycleGAN-applied image and the histogram-matched image. In the experiment on training patch size, an R2 value of about 0.98 was obtained for 1024*1024-pixel patches. Furthermore, comparing the pixel distribution of each image band showed that the simulation trained with the large patch size had a histogram distribution more similar to the original image. Therefore, by using ConvNeXt-CycleGAN, which is more advanced than the existing CycleGAN method and histogram matching, it is possible to derive simulation results similar to the original image and perform a successful simulation.
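The histogram-matched baseline the authors compare against can be sketched with sort-based quantile mapping per band (a generic illustration, not the paper's exact procedure; the test arrays are invented):

```python
import numpy as np

def match_histogram(source, reference):
    """Map `source` band values so their distribution matches `reference`.

    Sort-based quantile mapping: the i-th smallest source pixel receives
    the i-th smallest reference value (arrays of equal size assumed).
    """
    src_flat = source.ravel()
    order = np.argsort(src_flat)
    matched = np.empty_like(src_flat, dtype=np.float64)
    matched[order] = np.sort(reference.ravel())
    return matched.reshape(source.shape)

rng = np.random.default_rng(1)
occluded_sim = rng.normal(100, 10, (32, 32))  # simulated (occlusion-filled) band
original = rng.normal(120, 20, (32, 32))      # surrounding original band
out = match_histogram(occluded_sim, original)
print(np.allclose(np.sort(out.ravel()), np.sort(original.ravel())))  # True
```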

Comparison of CNN and GAN-based Deep Learning Models for Ground Roll Suppression (그라운드-롤 제거를 위한 CNN과 GAN 기반 딥러닝 모델 비교 분석)

  • Sangin Cho;Sukjoon Pyun
    • Geophysics and Geophysical Exploration
    • /
    • v.26 no.2
    • /
    • pp.37-51
    • /
    • 2023
  • The ground roll is the most common coherent noise in land seismic data and has an amplitude much larger than that of the reflection events we usually want to obtain. Therefore, ground roll suppression is a crucial step in seismic data processing. Several techniques, such as f-k filtering and the curvelet transform, have been developed to suppress the ground roll; however, the existing methods still require improvements in suppression performance and efficiency. Various studies on ground roll suppression have recently been conducted using deep learning methods developed for image processing. In this paper, we introduce three models (DnCNN (De-noiseCNN), pix2pix, and CycleGAN), based on the convolutional neural network (CNN) or the conditional generative adversarial network (cGAN), for ground roll suppression and explain them in detail through numerical examples. Common-shot gathers from the same field were divided into training and test datasets to compare the algorithms. We trained the models using the training data and evaluated their performance using the test data. Training these models with field data requires data from which the ground roll has been removed; therefore, the ground roll was suppressed by f-k filtering, and the result was used as the ground-truth data. To evaluate the performance of the deep learning models and compare the training results, we used quantitative indicators such as the correlation coefficient and the structural similarity index measure (SSIM) based on similarity to the ground-truth data. The DnCNN model exhibited the best performance, and we confirmed that the other models could also be applied to suppress the ground roll.
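The f-k filtering used to build the ground-truth labels can be sketched as a crude dip filter in the frequency-wavenumber domain: ground roll travels at low apparent velocity, so components with |f/k| below a cutoff are zeroed (an illustrative numpy sketch; the cutoff velocity and sampling intervals are assumed values, and real workflows taper the mask rather than zeroing it):

```python
import numpy as np

def fk_filter(gather, dt, dx, v_cut):
    """Suppress low-apparent-velocity energy (e.g. ground roll) in the f-k domain.

    Zeroes f-k components whose apparent velocity |f / k| falls below
    `v_cut`; reflections at steeper apparent velocities are kept.
    """
    spec = np.fft.fft2(gather)
    nt, nx = gather.shape
    f = np.fft.fftfreq(nt, d=dt)[:, None]  # temporal frequency axis (Hz)
    k = np.fft.fftfreq(nx, d=dx)[None, :]  # spatial wavenumber axis (1/m)
    v_apparent = np.abs(f) / np.maximum(np.abs(k), 1e-12)
    spec[v_apparent < v_cut] = 0.0
    return np.real(np.fft.ifft2(spec))

rng = np.random.default_rng(3)
gather = rng.standard_normal((128, 64))   # toy shot gather: 128 samples x 64 traces
filtered = fk_filter(gather, dt=0.002, dx=10.0, v_cut=1000.0)
print(filtered.shape)  # (128, 64)
```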

CNN-Based Fake Image Identification with Improved Generalization (일반화 능력이 향상된 CNN 기반 위조 영상 식별)

  • Lee, Jeonghan;Park, Hanhoon
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.12
    • /
    • pp.1624-1631
    • /
    • 2021
  • With the continued development of image processing technology, we live in a time when it is difficult to visually discriminate processed (or tampered) images from real ones. As the risk of fake images being misused for crime increases, however, image forensics for identifying fake images is becoming important. Various deep-learning-based identifiers have been studied, but many problems remain before they can be used in real situations. Because deep learning inherently relies strongly on the given training data, such identifiers are very vulnerable when evaluating data they have never seen. Therefore, we seek ways to improve the generalization ability of deep-learning-based fake image identifiers. First, images with various contents were added to the training dataset to resolve the over-fitting problem in which the identifier can classify real and fake images only for specific contents and fails for others. Next, color spaces other than RGB were exploited; that is, fake image identification was attempted in color spaces not considered when creating the fakes, such as HSV and YCbCr. Finally, dropout, commonly used for the generalization of neural networks, was applied. The experimental results confirm that conversion to the HSV color space is the best single solution and that combining it with the enlarged training dataset can greatly improve the accuracy and generalization ability of deep-learning-based identifiers on fake images that have never been seen before.
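The color space conversion central to the best-performing setting is standard; a minimal sketch using only the stdlib (the pixel list interface is an assumption for illustration, image libraries do this on whole arrays):

```python
import colorsys

def rgb_pixels_to_hsv(pixels):
    """Convert (r, g, b) pixels in [0, 255] to HSV tuples in [0, 1].

    The identifier is then trained on the HSV representation rather than
    the RGB space the forgeries were generated in.
    """
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            for (r, g, b) in pixels]

hsv = rgb_pixels_to_hsv([(255, 0, 0), (128, 128, 128)])
print(hsv[0])  # pure red -> hue 0.0, saturation 1.0, value 1.0
print(hsv[1])  # grey    -> saturation 0.0
```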

Development of an Actor-Critic Deep Reinforcement Learning Platform for Robotic Grasping in Real World (현실 세계에서의 로봇 파지 작업을 위한 정책/가치 심층 강화학습 플랫폼 개발)

  • Kim, Taewon;Park, Yeseong;Kim, Jong Bok;Park, Youngbin;Suh, Il Hong
    • The Journal of Korea Robotics Society
    • /
    • v.15 no.2
    • /
    • pp.197-204
    • /
    • 2020
  • In this paper, we present a learning platform for robotic grasping in the real world, in which actor-critic deep reinforcement learning is employed to learn the grasping skill directly from raw image pixels and rarely observed rewards. This is a challenging task because existing deep reinforcement learning algorithms require an extensive amount of training data or massive computational cost, which is not affordable in real-world settings. To address these problems, the proposed learning platform consists of two training phases: a learning phase in a simulator and subsequent learning in the real world. The main processing blocks of the platform are the extraction of a latent vector based on state representation learning and the disentanglement of a raw image, the generation of an adapted synthetic image using generative adversarial networks, and object detection and arm segmentation for the disentanglement. We demonstrate the effectiveness of this approach in a real environment.
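The actor-critic rule underlying such a platform can be shown in its simplest form, a one-step TD(0) update with linear function approximation (a didactic sketch, not the paper's implementation; learning rates, features, and the linear critic are all assumptions):

```python
import numpy as np

def actor_critic_step(theta, w, s, score, r, s_next, alpha=0.01, gamma=0.99):
    """One TD(0) actor-critic update with a linear critic V(s) = w . s.

    `score` is grad log pi(a|s) for the action taken; the critic's TD
    error both improves V and scales the policy-gradient step.
    """
    td_error = r + gamma * np.dot(w, s_next) - np.dot(w, s)  # critic's TD error
    w = w + alpha * td_error * s            # critic moves V(s) toward the target
    theta = theta + alpha * td_error * score  # actor reinforces the action
    return theta, w, td_error

rng = np.random.default_rng(7)
theta, w = np.zeros(4), np.zeros(4)
s, s_next = rng.standard_normal(4), rng.standard_normal(4)
score = rng.standard_normal(4)  # stand-in score-function gradient
theta, w, delta = actor_critic_step(theta, w, s, score, r=1.0, s_next=s_next)
print(delta)  # 1.0: the first TD error equals the reward, since V starts at zero
```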

A Study on the Image Preprocessing Model Linkage Method for the Usability of Pix2Pix (Pix2Pix의 활용성을 위한 학습이미지 전처리 모델연계방안 연구)

  • Kim, Hyo-Kwan;Hwang, Won-Yong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.5
    • /
    • pp.380-386
    • /
    • 2022
  • This paper proposes a method for structuring the preprocessing of training images when color is applied using Pix2Pix, one of the adversarial generative neural network techniques. It concentrates on how the prediction result can be degraded depending on the degree of light reflection in the training image. Therefore, image preprocessing and parameters for model optimization were configured before applying the model. To increase the image resolution of the training and prediction results, the model must be modified, so this part is designed to be tuned with parameters. In addition, this paper configures logic that processes only the parts of the prediction result damaged by light reflection, together with preprocessing logic that does not distort the prediction result. To improve usability, the accuracy was improved through experiments on applying a light reflection tuning filter to the training images of the Pix2Pix model and on the parameter configuration.
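The light-reflection filtering idea can be sketched as masking near-saturated pixels and replacing them with a robust statistic (a generic illustration, not the paper's exact filter; the threshold and the median fill are assumptions):

```python
import numpy as np

def suppress_reflections(image, threshold=240):
    """Replace over-bright (specular) pixels with the image median so that
    light reflection does not dominate Pix2Pix colorization training.

    `image` is an HxWx3 uint8 array; returns the cleaned image and the
    boolean mask of pixels that were treated.
    """
    out = image.astype(np.float64).copy()
    mask = image.max(axis=-1) >= threshold          # near-saturated pixels
    out[mask] = np.median(image.reshape(-1, 3), axis=0)  # per-channel median fill
    return out.astype(np.uint8), mask

img = np.full((4, 4, 3), 100, dtype=np.uint8)
img[0, 0] = 255                                     # simulated specular spot
clean, mask = suppress_reflections(img)
print(clean[0, 0], int(mask.sum()))  # [100 100 100] 1
```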