• Title/Summary/Keyword: generative learning

Comparison of CNN and GAN-based Deep Learning Models for Ground Roll Suppression (그라운드-롤 제거를 위한 CNN과 GAN 기반 딥러닝 모델 비교 분석)

  • Sangin Cho;Sukjoon Pyun
    • Geophysics and Geophysical Exploration / v.26 no.2 / pp.37-51 / 2023
  • The ground roll is the most common coherent noise in land seismic data and has an amplitude much larger than the reflection events we usually want to obtain. Therefore, ground roll suppression is a crucial step in seismic data processing. Several techniques, such as f-k filtering and the curvelet transform, have been developed to suppress the ground roll. However, the existing methods still require improvements in suppression performance and efficiency. Various studies on ground roll suppression in seismic data have recently been conducted using deep learning methods developed for image processing. In this paper, we introduce three models (DnCNN (De-noiseCNN), pix2pix, and CycleGAN), based on the convolutional neural network (CNN) or the conditional generative adversarial network (cGAN), for ground roll suppression and explain them in detail through numerical examples. Common shot gathers from the same field were divided into training and test datasets to compare the algorithms. We trained the models using the training data and evaluated their performances using the test data. Training these models with field data requires ground-roll-removed data; therefore, the ground roll is suppressed by f-k filtering and the result is used as the ground-truth data. To evaluate the performance of the deep learning models and compare the training results, we utilized quantitative indicators such as the correlation coefficient and the structural similarity index measure (SSIM), based on the similarity to the ground-truth data. The DnCNN model exhibited the best performance, and we confirmed that the other models could also be applied to suppress the ground roll.
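
A minimal sketch of the evaluation step described in the entry above: computing the correlation coefficient and SSIM between a denoised shot gather and the f-k-filtered ground truth. The array names and shapes are hypothetical, and NumPy and scikit-image are assumed to be available.

    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def evaluate_suppression(denoised, ground_truth):
        """Compare a ground-roll-suppressed shot gather to the f-k-filtered reference."""
        # Pearson correlation coefficient between the flattened gathers
        cc = np.corrcoef(denoised.ravel(), ground_truth.ravel())[0, 1]
        # SSIM over the 2-D gather; data_range taken from the reference amplitudes
        s = ssim(denoised, ground_truth,
                 data_range=ground_truth.max() - ground_truth.min())
        return cc, s

    # Hypothetical usage with 2-D (time samples x traces) float arrays:
    # cc, s = evaluate_suppression(dncnn_output, fk_filtered_gather)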

GAN-based shadow removal using context information

  • Yoon, Hee-jin;Kim, Kang-jik;Chun, Jun-chul
    • Journal of Internet Computing and Services / v.20 no.6 / pp.29-36 / 2019
  • When dealing with outdoor images in a variety of computer vision applications, the presence of shadows degrades performance. To recover the information occluded by shadows, it is essential to remove them. Many studies therefore adopt a two-step process of shadow detection and removal. However, while CNN-based shadow detection has improved greatly, shadow removal remains difficult because the occluded region must be restored after the shadow is removed. In this paper, we assume the shadow has already been detected and generate a shadow-free image from the original image and the shadow mask. In previous cGAN-based methods, the discriminator judged the generated image only at the level of local image patches during adversarial learning. In contrast, we propose a novel method using a discriminator that judges both the whole image and local patches at the same time. We not only use a residual generator to produce high-quality images but also a joint loss, which combines reconstruction loss and GAN loss, for training stability. To evaluate our approach, we used the ISTD dataset, which consists of single images. The images generated by our approach show sharper and better-restored detail than those of previous methods.
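
A minimal PyTorch-style sketch of the joint loss described above, combining a reconstruction term with adversarial terms from a whole-image discriminator and a local patch discriminator. The weighting factor, module names, and interfaces are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn

    l1_loss = nn.L1Loss()              # reconstruction loss
    adv_loss = nn.BCEWithLogitsLoss()  # GAN loss on raw discriminator logits

    def generator_joint_loss(fake, target, d_global, d_local, lam_rec=100.0):
        """Joint loss = weighted reconstruction loss + GAN loss from both discriminators."""
        # Reconstruction term keeps the shadow-free output close to the ground truth
        rec = l1_loss(fake, target)
        # Adversarial terms: the generator tries to make both discriminators say "real"
        g_logits = d_global(fake)
        l_logits = d_local(fake)
        adv = adv_loss(g_logits, torch.ones_like(g_logits)) + \
              adv_loss(l_logits, torch.ones_like(l_logits))
        return lam_rec * rec + adv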

Application of Deep Learning to Solar Data: 3. Generation of Solar images from Galileo sunspot drawings

  • Lee, Harim;Moon, Yong-Jae;Park, Eunsu;Jeong, Hyunjin;Kim, Taeyoung;Shin, Gyungin
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.81.2-81.2 / 2019
  • We develop an image-to-image translation model, a popular deep learning method based on conditional Generative Adversarial Networks (cGANs), to generate solar magnetograms and EUV images from sunspot drawings. For this, we train the model using pairs of sunspot drawings from Mount Wilson Observatory (MWO) and their corresponding SDO/HMI magnetograms and SDO/AIA EUV images (512 by 512) from January 2012 to September 2014. We test the model by comparing pairs of actual SDO images (magnetograms and EUV images) and the corresponding AI-generated ones from October to December 2014. Our results show that the bipolar structures and coronal loop structures of the AI-generated images are consistent with those of the original ones. We find that their unsigned magnetic fluxes correlate well with those of the original ones, with a correlation coefficient of 0.86. We also obtain pixel-to-pixel correlations between actual EUV images and AI-generated ones. The average correlations of 92 test samples for several SDO lines are very good: 0.88 for AIA 211, 0.87 for AIA 1600, and 0.93 for AIA 1700. These facts imply that the AI-generated EUV images are quite similar to the AIA ones. Applying this model to the Galileo sunspot drawings of 1612, we generate HMI-like magnetograms and AIA-like EUV images of the sunspots. This application can be used to generate solar images from historical sunspot drawings.
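
A brief sketch of how the unsigned-magnetic-flux comparison quoted above could be computed: sum |B| over each magnetogram, then correlate the real and AI-generated values across the test samples. The pixel area constant and array names are placeholders, not values from the paper.

    import numpy as np

    def unsigned_flux(magnetogram, pixel_area_cm2):
        """Total unsigned flux: sum of |B| (Gauss) times pixel area (cm^2) -> Maxwell."""
        return np.sum(np.abs(magnetogram)) * pixel_area_cm2

    # Hypothetical lists of 2-D arrays for the Oct-Dec 2014 test period:
    # real_flux = [unsigned_flux(m, PIXEL_AREA) for m in real_magnetograms]
    # fake_flux = [unsigned_flux(m, PIXEL_AREA) for m in generated_magnetograms]
    # cc = np.corrcoef(real_flux, fake_flux)[0, 1]  # the abstract reports ~0.86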

Application of Deep Learning to Solar Data: 1. Overview

  • Moon, Yong-Jae;Park, Eunsu;Kim, Taeyoung;Lee, Harim;Shin, Gyungin;Kim, Kimoon;Shin, Seulki;Yi, Kangwoo
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.51.2-51.2 / 2019
  • Multi-wavelength observations have become very popular in astronomy. Even though there are some correlations among images from different sensors, it is not easy to translate one into another. In this study, we apply a deep learning method for image-to-image translation, based on conditional generative adversarial networks (cGANs), to solar images. To examine the validity of the method for scientific data, we consider several different types of pairs: (1) generation of SDO/EUV images from SDO/HMI magnetograms, (2) generation of backside magnetograms from STEREO/EUVI images, (3) generation of EUV and X-ray images from Carrington sunspot drawings, and (4) generation of solar magnetograms from Ca II images. It is very impressive that the AI-generated images are quite consistent with the actual ones. In addition, we apply a convolutional neural network to the forecast of solar flares and find that our method is better than the conventional method. Our study also shows that forecasting solar proton flux profiles with the Long Short-Term Memory (LSTM) method is better than with the autoregressive method. We will discuss several applications of these methodologies for scientific research.
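
For reference, the image-to-image translation in these applications follows the standard conditional-GAN (pix2pix-style) objective; the abstract does not spell out the formulation, so the generic one from the pix2pix literature is given below as an assumption (x is the input image, y the target, z the noise):

    \mathcal{L}_{cGAN}(G,D) = \mathbb{E}_{x,y}\!\left[\log D(x,y)\right]
        + \mathbb{E}_{x,z}\!\left[\log\left(1 - D(x, G(x,z))\right)\right],
    \qquad
    G^{*} = \arg\min_{G}\max_{D}\, \mathcal{L}_{cGAN}(G,D) + \lambda\,\mathcal{L}_{L1}(G)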

Application of Deep Learning to Solar Data: 2. Generation of Solar UV & EUV images from magnetograms

  • Park, Eunsu;Moon, Yong-Jae;Lee, Harim;Lim, Daye
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.81.3-81.3 / 2019
  • In this study, we apply a conditional Generative Adversarial Network (cGAN), one of the deep learning methods, to image-to-image translation from solar magnetograms to solar UV and EUV images. For this, we train a model using pairs of SDO/AIA UV and EUV images in nine wavelengths and their corresponding SDO/HMI line-of-sight magnetograms from 2011 to 2017, excluding August and September of each year. We evaluate the model by comparing pairs of SDO/AIA images and the corresponding generated ones in August and September. Our results are as follows. First, we successfully generate SDO/AIA-like solar UV and EUV images from SDO/HMI magnetograms. Second, our model achieves pixel-to-pixel correlation coefficients (CC) higher than 0.8 for all channels except 171. Third, our model slightly underestimates the pixel values in terms of relative error (RE), but the errors are quite small. Fourth, considering CC and RE together, the 1600 and 1700 photospheric UV line images, which have structures quite similar to the corresponding magnetograms, give the best results among the lines. This methodology can be applied to many scientific fields that use several different filter images.
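
A minimal sketch of the two metrics named above, pixel-to-pixel correlation coefficient (CC) and relative error (RE), for one generated/observed AIA image pair. The paper's exact definition of RE is not stated in the abstract, so the relative difference of total intensity used here is an assumption.

    import numpy as np

    def pixel_cc(generated, observed):
        """Pixel-to-pixel Pearson correlation between generated and observed images."""
        return np.corrcoef(generated.ravel(), observed.ravel())[0, 1]

    def relative_error(generated, observed):
        """Relative error of total intensity (assumed definition; negative = underestimate)."""
        return (generated.sum() - observed.sum()) / observed.sum()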

Feature Analysis for Detecting Mobile Application Review Generated by AI-Based Language Model

  • Lee, Seung-Cheol;Jang, Yonghun;Park, Chang-Hyeon;Seo, Yeong-Seok
    • Journal of Information Processing Systems / v.18 no.5 / pp.650-664 / 2022
  • Mobile applications can be easily downloaded and installed via application markets. However, malware and malicious applications containing unwanted advertisements exist in these markets. Therefore, smartphone users consult application reviews to avoid such malicious applications. An application review typically contains evaluative content; however, false reviews written for a specific purpose can be included. Such false reviews are known as fake reviews, and they can be generated using artificial intelligence (AI)-based text-generating models. Recently, AI-based text-generating models have developed rapidly and produce high-quality text. Herein, we analyze the features of fake reviews generated by Generative Pre-Training-2 (GPT-2), an AI-based text-generating model, and create a model to detect those fake reviews. First, we collect real human-written application reviews from Kaggle. Subsequently, we identify features of the fake reviews using natural language processing and statistical analysis. Next, we build fake review detection models using five types of machine-learning models trained on the identified features. In terms of performance, the fake review detection models achieved average F1-scores of 0.738, 0.723, and 0.730 for the fake review, real review, and overall classifications, respectively.
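
A minimal scikit-learn sketch of the pipeline described above: extract simple statistical features from review texts and train a classifier, reporting per-class F1-scores. The feature set and the random forest choice are illustrative assumptions; the paper's actual features and five models are not reproduced here.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    def simple_features(text):
        """Illustrative statistical features of a review (not the paper's exact set)."""
        words = text.split()
        return [
            len(text),                                           # character count
            len(words),                                          # word count
            np.mean([len(w) for w in words]) if words else 0.0,  # mean word length
            len(set(words)) / len(words) if words else 0.0,      # type-token ratio
        ]

    def train_detector(texts, labels):
        """labels: 1 = GPT-2-generated (fake) review, 0 = human-written review."""
        X = np.array([simple_features(t) for t in texts])
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        return f1_score(y_te, pred, pos_label=1), f1_score(y_te, pred, pos_label=0)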

Generation of Super-Resolution Benchmark Dataset for Compact Advanced Satellite 500 Imagery and Proof of Concept Results

  • Yonghyun Kim;Jisang Park;Daesub Yoon
    • Korean Journal of Remote Sensing / v.39 no.4 / pp.459-466 / 2023
  • Over the last decade, the dramatic advancement of artificial intelligence through various deep learning techniques has contributed significantly to remote sensing and satellite image applications. Among many prominent areas, super-resolution research has seen substantial growth with the release of several benchmark datasets and the rise of generative adversarial network-based studies. However, most previously published remote sensing benchmark datasets have spatial resolutions of approximately 10 meters, which limits their direct application to super-resolution of small objects at centimeter-level spatial resolution. Furthermore, if a dataset lacks global spatial distribution and is specialized to particular land covers, the resulting lack of feature diversity can directly affect quantitative performance and prevent the construction of robust foundation models. To overcome these issues, this paper proposes a method to generate benchmark datasets by simulating the modulation transfer function (MTF) of the sensor. The proposed approach leverages a simulation method with a solid theoretical foundation that is well recognized in image fusion. Additionally, the generated benchmark dataset is applied to state-of-the-art super-resolution baseline models for quantitative and visual analysis, and the shortcomings of existing datasets are discussed. Through these efforts, we anticipate that the proposed benchmark dataset will facilitate super-resolution research in Korea in the near future.
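
A rough sketch of the dataset-generation idea: blur a high-resolution image with a kernel approximating the sensor MTF, then downsample to form a low-resolution/high-resolution training pair. The Gaussian point-spread function and scale factor below are assumptions for illustration; the paper's actual MTF simulation is not reproduced.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def make_lr_hr_pair(hr_image, scale=4, mtf_sigma=1.0):
        """Simulate a low-resolution counterpart of an HR image (Gaussian PSF assumed)."""
        # Blur approximating the sensor's modulation transfer function
        blurred = gaussian_filter(hr_image.astype(np.float32), sigma=mtf_sigma)
        # Decimate to the coarser grid
        lr_image = blurred[::scale, ::scale]
        return lr_image, hr_image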

Using artificial intelligence to detect human errors in nuclear power plants: A case in operation and maintenance

  • Ezgi Gursel ;Bhavya Reddy ;Anahita Khojandi;Mahboubeh Madadi;Jamie Baalis Coble;Vivek Agarwal ;Vaibhav Yadav;Ronald L. Boring
    • Nuclear Engineering and Technology / v.55 no.2 / pp.603-622 / 2023
  • Human error (HE) is an important concern in safety-critical systems such as nuclear power plants (NPPs). HE has played a role in many accidents and outage incidents in NPPs. Despite the increased automation in NPPs, HE remains unavoidable; hence, HE detection is as important as HE prevention efforts. In NPPs, HE is rather rare. Hence, anomaly detection, a widely used machine learning technique for detecting rare anomalous instances, can be repurposed to detect potential HE. In this study, we develop an unsupervised anomaly detection technique based on generative adversarial networks (GANs) to detect anomalies in manually collected surveillance data in NPPs. More specifically, our GAN is trained to detect mismatches between automatically recorded sensor data and manually collected surveillance data and, hence, identify anomalous instances that can be attributed to HE. We test our GAN on both a real-world dataset and an external dataset obtained from a testbed, and we benchmark our results against state-of-the-art unsupervised anomaly detection algorithms, including the one-class support vector machine and isolation forest. Our results show that the proposed GAN provides improved anomaly detection performance. Our study is promising for the future development of artificial intelligence-based HE detection systems.
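
The abstract names the one-class support vector machine and isolation forest as baselines; a minimal scikit-learn sketch of that benchmarking step follows. The feature construction (e.g., pairing sensor readings with the corresponding manually logged values) and the contamination rate are assumptions.

    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.ensemble import IsolationForest

    def baseline_anomaly_scores(X_train, X_test):
        """Anomaly scores from the two baseline detectors (higher = more anomalous)."""
        ocsvm = OneClassSVM(nu=0.05, kernel="rbf").fit(X_train)
        iforest = IsolationForest(contamination=0.05, random_state=0).fit(X_train)
        # score_samples is higher for inliers, so negate to obtain anomaly scores
        return -ocsvm.score_samples(X_test), -iforest.score_samples(X_test)

    # X could hold rows of [sensor reading, manually logged surveillance value];
    # large scores flag mismatches that may indicate human error.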

A Case Study of Creative Art Based on AI Generation Technology

  • Qianqian Jiang;Jeanhun Chung
    • International journal of advanced smart convergence / v.12 no.2 / pp.84-89 / 2023
  • In recent years, with breakthroughs in Artificial Intelligence (AI) technology through deep learning algorithms such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), AI generation technology has rapidly expanded into various sub-sectors of the art field. In 2022, an explosive year for AI-generated art, many excellent works were produced, especially in AI-generated creative design, improving the efficiency of art design work. This study analyzes the design characteristics of AI generation technology in two sub-fields of artistic creative design, AI painting and AI animation production, and compares traditional painting with AI painting. Through this research, the advantages and problems of the AI creative design process are summarized. Although AI art design is still affected by technical limitations, flaws in the artworks, and practical problems such as copyright and income, it provides a strong technical guarantee for expanding the subdivisions of artistic innovation and technology integration, and it has extremely high research value.

StarGAN-Based Detection and Purification Studies to Defend against Adversarial Attacks (적대적 공격을 방어하기 위한 StarGAN 기반의 탐지 및 정화 연구)

  • Sungjune Park;Gwonsang Ryu;Daeseon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.3 / pp.449-458 / 2023
  • Artificial Intelligence is providing convenience in various fields using big data and deep learning technologies. However, deep learning technology is highly vulnerable to adversarial examples, which can cause classification models to misclassify. This study proposes a method to detect and purify various adversarial attacks using StarGAN. The proposed method trains a StarGAN model with an added categorical entropy loss on adversarial examples generated by various attack methods, enabling the discriminator to detect adversarial examples and the generator to purify them. Experimental results on the CIFAR-10 dataset showed an average detection performance of approximately 68.77%, an average purification performance of approximately 72.20%, and an average defense performance of approximately 93.11% derived from the restoration and detection performance.
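
A schematic PyTorch-style sketch of the detect-then-purify flow described above: the discriminator flags suspected adversarial inputs and the generator restores them before classification. The threshold, module interfaces, and the way the discriminator's score is read are assumptions, not the paper's implementation.

    import torch

    @torch.no_grad()
    def detect_and_purify(x, discriminator, generator, classifier, threshold=0.5):
        """Flag adversarial inputs with the discriminator, purify them with the generator."""
        # Assumed interface: discriminator outputs a per-sample adversarial logit
        adv_prob = torch.sigmoid(discriminator(x)).view(-1)
        is_adv = adv_prob > threshold
        purified = x.clone()
        if is_adv.any():
            # Assumed interface: generator maps adversarial images back toward clean ones
            purified[is_adv] = generator(x[is_adv])
        return classifier(purified), is_adv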