• Title/Abstract/Keyword: Generative Adversarial Networks (GANs)

Search results: 43

Optimizing SR-GAN for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation

  • Sajid Hussain;Jung-Hun Shin;Kum-Won Cho
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2023년도 춘계학술발표대회 / pp.479-481 / 2023
  • Generative Adversarial Networks (GANs) have facilitated substantial improvements in single-image super-resolution (SR) by enabling the generation of photo-realistic images. However, the high memory requirements of GAN-based SR models (mainly their generators) lead to reduced performance and increased energy consumption, making them difficult to deploy on resource-constrained devices. In this study, we propose an efficient, compressed architecture for the SR-GAN generator using the model-compression technique of knowledge distillation. Our approach transfers knowledge from a heavy network to a lightweight one, reducing the model's storage requirement by 58% while also improving its performance. Experimental results on various benchmarks indicate that the proposed compressed model improves PSNR, SSIM, and overall image quality on x4 super-resolution tasks.
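
A minimal sketch of the knowledge-distillation step described above, assuming a frozen, pretrained teacher generator and a hypothetical lightweight student; the layer sizes, loss mix, and `alpha` weighting are illustrative assumptions rather than the authors' exact configuration.

```python
# Knowledge distillation for a super-resolution generator (PyTorch sketch).
# `teacher` is assumed to be a pretrained heavy SR-GAN generator passed in from outside.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentGenerator(nn.Module):
    """Lightweight SR generator (illustrative: a few convolutions + pixel shuffle)."""
    def __init__(self, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.PReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.PReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.body(x)

def distillation_step(student, teacher, lr_batch, hr_batch, optimizer, alpha=0.5):
    """One training step: match the ground truth while imitating the frozen teacher."""
    teacher.eval()
    with torch.no_grad():
        teacher_sr = teacher(lr_batch)              # soft targets from the heavy generator
    student_sr = student(lr_batch)
    loss_gt = F.l1_loss(student_sr, hr_batch)       # supervised reconstruction term
    loss_kd = F.l1_loss(student_sr, teacher_sr)     # distillation (imitation) term
    loss = alpha * loss_gt + (1 - alpha) * loss_kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```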

Vehicle Detection at Night Based on Style Transfer Image Enhancement

  • Jianing Shen;Rong Li
    • Journal of Information Processing Systems / Vol.19 No.5 / pp.663-672 / 2023
  • Most vehicle detection methods extract vehicle features poorly at night, which reduces their robustness; hence, this study proposes a nighttime vehicle detection method based on style-transfer image enhancement. First, a style-transfer model is constructed using cycle-consistent generative adversarial networks (CycleGANs). The daytime data in the BDD100K dataset are converted into nighttime data to form a style dataset, which is then split according to its labels. Finally, nighttime vehicle images are detected with a YOLOv5s network for reliable recognition of vehicle information in complex environments. Experimental results on the BDD100K dataset show that the transferred nighttime vehicle images are clear and meet the requirements; the precision, recall, mAP@.5, and mAP@.5:.95 reached 0.696, 0.292, 0.761, and 0.454, respectively.
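
As a rough illustration of the style-transfer stage described above, the sketch below shows a CycleGAN-style generator objective for the day-to-night direction; the module names (`G_day2night`, `G_night2day`, `D_night`), the least-squares adversarial form, and the cycle weight are assumptions for illustration, not the paper's implementation.

```python
# Unpaired day -> night translation loss (CycleGAN-style, PyTorch sketch).
import torch
import torch.nn.functional as F

def day2night_generator_loss(G_day2night, G_night2day, D_night, day_imgs, lambda_cyc=10.0):
    """Adversarial + cycle-consistency loss for the day-to-night direction only."""
    fake_night = G_day2night(day_imgs)
    # Least-squares adversarial term: push D_night to score the fakes as real (1).
    adv = torch.mean((D_night(fake_night) - 1.0) ** 2)
    # Cycle consistency: translating back should recover the original daytime image.
    reconstructed_day = G_night2day(fake_night)
    cyc = F.l1_loss(reconstructed_day, day_imgs)
    return adv + lambda_cyc * cyc

# The translated nighttime images would then be labelled (labels carry over from the
# daytime frames) and used to train a YOLOv5s detector on night-style data.
```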

Development of a Peak Water Level Prediction Technique Using GANs: Application to Jamsu Bridge, Korea (GANs를 이용한 하천의 첨두수위 예측 기법 개발: 잠수교 적용)

  • 이승연;김영인;이승오
    • 한국수자원학회:학술대회논문집 / 한국수자원학회 2020년도 학술발표회 / pp.416-416 / 2020
  • Owing to Korea's seasonal characteristics, intense localized rainfall occurs frequently in summer, and because such flash floods strike without warning, the number of areas that repeatedly suffer inundation damage is increasing. In this study, a survey of actual inundation cases from 2009 to 2019, based on internet news articles about flood damage in Seoul, found that flooding occurred most often in Banpo-dong (26 cases), followed by Daechi-dong (25 cases) and Jamsil-dong (21 cases). Banpo-dong, which suffered the most flood damage, was therefore selected as the study area, and the water level at Jamsu Bridge was chosen as the prediction target. Previous studies have mostly used the LSTM technique among data-driven models, which produce results more quickly than numerical models; however, LSTM was found to underestimate peak water levels as the lead time grows (정성호 et al., 2018). To compensate for this weakness, this study uses GANs (Generative Adversarial Networks). A GAN is a neural network architecture divided into a generator and a discriminator: the generator learns the observed water levels at Jamsu Bridge around the peak and produces synthetic data close to the real observations, while the discriminator is trained to judge whether the generated future water levels are real or synthetic. The hydrological data comprised water level, discharge, rainfall, and tide level records for the most recent 15 years (2005-2019), collected from the Han River Flood Control Office, the Korea Meteorological Administration, and the Korea Hydrographic and Oceanographic Agency; t-tests and correlation analysis were used to assess the significance of and correlations among the input factors. A sensitivity analysis selected the optimal settings as a sequence length of 5, 1,000 iterations, 10 hidden layers, and a learning rate of 0.005. The data were divided into a training period (2005-2014) and a validation period (2015-2019), and water levels 3, 6, and 9 hours ahead during the flood season, when relatively high water levels are observed, were predicted and evaluated with error metrics. Comparing the water levels predicted by LSTM and by the GAN, the accuracy at the peak water level improved by about 5% with the GAN. In the future, incorporating additional influencing factors and combining the approach with other techniques is expected to yield more accurate water-level predictions and reduce inundation damage to social infrastructure along the river.
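
A compact sketch of the generator/discriminator split described in the abstract, for multi-step water-level forecasting from past water level, discharge, rainfall, and tide inputs; the layer choices, shapes, and class names are illustrative assumptions, not the authors' architecture.

```python
# GAN-style multi-step water-level forecaster (PyTorch sketch).
import torch
import torch.nn as nn

SEQ_LEN, N_FEATURES, HORIZON = 5, 4, 3   # past steps; level/discharge/rain/tide; hours ahead

class Generator(nn.Module):
    """Maps a past hydrological sequence to future water levels at the target station."""
    def __init__(self, hidden=10):
        super().__init__()
        self.rnn = nn.LSTM(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, HORIZON)

    def forward(self, x):                # x: (batch, SEQ_LEN, N_FEATURES)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])          # (batch, HORIZON) predicted levels

class Discriminator(nn.Module):
    """Judges whether a (past sequence, future levels) pair is observed or generated."""
    def __init__(self, hidden=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SEQ_LEN * N_FEATURES + HORIZON, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, past, future):
        return self.net(torch.cat([past.flatten(1), future], dim=1))
```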

Generation of optical fringe patterns using deep learning (딥러닝을 이용한 광학적 프린지 패턴의 생성)

  • 강지원;김동욱;서영호
    • 한국정보통신학회논문지 / Vol.24 No.12 / pp.1588-1594 / 2020
  • This paper discusses a data-balancing method for training a neural network that generates digital holograms with a deep neural network (DNN). The DNN is based on deep learning (DL) and uses a model from the generative adversarial network (GAN) family. The fringe pattern, the basic unit of the hologram to be generated by the DNN, takes very different forms depending on the hologram plane and the position of the object. However, because the criteria for classifying the data are not clear, the training data can become imbalanced, and such imbalance acts as a source of instability during training. We therefore present a method for classifying and balancing data whose classification criteria are unclear, and show that it stabilizes training.
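
The abstract does not state the exact grouping criterion, so the following is only a generic "cluster, then resample" sketch of how unlabeled fringe patterns might be balanced before GAN training; the clustering method, feature representation, and cluster count are assumptions.

```python
# Balance an unlabeled fringe-pattern dataset by clustering and oversampling (sketch).
import numpy as np
from sklearn.cluster import KMeans

def balance_by_clustering(patterns, n_clusters=8, seed=0):
    """patterns: (N, D) array of flattened fringe patterns. Returns balanced sample indices."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(patterns)
    counts = np.bincount(labels, minlength=n_clusters)
    target = counts.max()                       # bring every cluster up to the largest one
    rng = np.random.default_rng(seed)
    balanced = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        if len(idx):
            balanced.append(rng.choice(idx, size=target, replace=True))  # oversample small clusters
    return np.concatenate(balanced)
```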

Application of Deep Learning to Solar Data: 3. Generation of Solar images from Galileo sunspot drawings

  • Lee, Harim;Moon, Yong-Jae;Park, Eunsu;Jeong, Hyunjin;Kim, Taeyoung;Shin, Gyungin
    • 천문학회보 / Vol.44 No.1 / pp.81.2-81.2 / 2019
  • We develop an image-to-image translation model, a popular deep learning method based on conditional generative adversarial networks (cGANs), to generate solar magnetograms and EUV images from sunspot drawings. For this, we train the model using pairs of sunspot drawings from Mount Wilson Observatory (MWO) and their corresponding SDO/HMI magnetograms and SDO/AIA EUV images (512 by 512) from January 2012 to September 2014. We test the model by comparing pairs of actual SDO images (magnetograms and EUV images) and the corresponding AI-generated ones from October to December 2014. Our results show that the bipolar structures and coronal loop structures of the AI-generated images are consistent with those of the originals. We find that their unsigned magnetic fluxes correlate well with those of the originals, with a correlation coefficient of 0.86. We also obtain pixel-to-pixel correlations between EUV images and AI-generated ones. The average correlations of 92 test samples for several SDO lines are very good: 0.88 for AIA 211, 0.87 for AIA 1600, and 0.93 for AIA 1700. These facts imply that the AI-generated EUV images are quite similar to the AIA ones. Applying this model to the Galileo sunspot drawings of 1612, we generate HMI-like magnetograms and AIA-like EUV images of the sunspots. This application will be used to generate solar images from historical sunspot drawings.
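
A minimal sketch of a conditional-GAN (pix2pix-style) objective for the drawing-to-magnetogram translation described above; `G` and `D` are hypothetical U-Net and PatchGAN modules, and the L1 weight follows the common pix2pix convention rather than any value reported in this work.

```python
# Conditional GAN (pix2pix-style) losses for sunspot drawing -> magnetogram translation (sketch).
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(G, D, drawing, real_magnetogram, lam=100.0):
    fake = G(drawing)                                 # output conditioned on the sunspot drawing
    pred_fake = D(drawing, fake)                      # discriminator sees the (input, output) pair
    adv = bce(pred_fake, torch.ones_like(pred_fake))  # fool D into labelling the fake as real
    return adv + lam * l1(fake, real_magnetogram), fake

def discriminator_loss(G, D, drawing, real_magnetogram):
    with torch.no_grad():
        fake = G(drawing)
    pred_real = D(drawing, real_magnetogram)
    pred_fake = D(drawing, fake)
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real))
                  + bce(pred_fake, torch.zeros_like(pred_fake)))
```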

Application of Deep Learning to Solar Data: 1. Overview

  • Moon, Yong-Jae;Park, Eunsu;Kim, Taeyoung;Lee, Harim;Shin, Gyungin;Kim, Kimoon;Shin, Seulki;Yi, Kangwoo
    • 천문학회보 / Vol.44 No.1 / pp.51.2-51.2 / 2019
  • Multi-wavelength observations have become very popular in astronomy. Even though there are correlations among images from different sensors, it is not easy to translate one into another. In this study, we apply a deep learning method for image-to-image translation, based on conditional generative adversarial networks (cGANs), to solar images. To examine the validity of the method for scientific data, we consider several different types of pairs: (1) generation of SDO/EUV images from SDO/HMI magnetograms, (2) generation of backside magnetograms from STEREO/EUVI images, (3) generation of EUV and X-ray images from Carrington sunspot drawings, and (4) generation of solar magnetograms from Ca II images. It is very impressive that the AI-generated images are quite consistent with the actual ones. In addition, we apply a convolutional neural network to the forecast of solar flares and find that our method outperforms the conventional one. Our study also shows that forecasting solar proton flux profiles with the Long Short-Term Memory (LSTM) method is better than with the autoregressive method. We will discuss several applications of these methodologies for scientific research.

Using artificial intelligence to detect human errors in nuclear power plants: A case in operation and maintenance

  • Ezgi Gursel;Bhavya Reddy;Anahita Khojandi;Mahboubeh Madadi;Jamie Baalis Coble;Vivek Agarwal;Vaibhav Yadav;Ronald L. Boring
    • Nuclear Engineering and Technology / Vol.55 No.2 / pp.603-622 / 2023
  • Human error (HE) is an important concern in safety-critical systems such as nuclear power plants (NPPs). HE has played a role in many accidents and outage incidents in NPPs, and despite increased automation, it remains unavoidable. Hence, HE detection is as important as HE prevention efforts. In NPPs, HE is rather rare; hence, anomaly detection, a widely used machine learning technique for detecting rare anomalous instances, can be repurposed to detect potential HE. In this study, we develop an unsupervised anomaly detection technique based on generative adversarial networks (GANs) to detect anomalies in manually collected surveillance data in NPPs. More specifically, our GAN is trained to detect mismatches between automatically recorded sensor data and manually collected surveillance data, and hence to identify anomalous instances that can be attributed to HE. We test our GAN on both a real-world dataset and an external dataset obtained from a testbed, and we benchmark our results against state-of-the-art unsupervised anomaly detection algorithms, including the one-class support vector machine and isolation forest. Our results show that the proposed GAN provides improved anomaly detection performance. Our study is promising for the future development of artificial intelligence-based HE detection systems.
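
As a hedged illustration of how a GAN trained on normal operation data might be turned into an anomaly score for paired sensor/surveillance records, the sketch below mixes a reconstruction residual with the discriminator's plausibility estimate; the module interfaces, feature layout, and weighting are assumptions, not the paper's design.

```python
# GAN-based anomaly score over (sensor, surveillance) feature records (PyTorch sketch).
import torch

def anomaly_score(G, D, records, w=0.9):
    """records: (batch, n_features) concatenated sensor + surveillance features."""
    with torch.no_grad():
        reconstruction = G(records)                                  # G trained on normal records only
        residual = torch.mean(torch.abs(records - reconstruction), dim=1)
        plausibility = torch.sigmoid(D(reconstruction)).squeeze(1)   # D's belief the record is "normal"
    return w * residual + (1.0 - w) * (1.0 - plausibility)           # higher score = more anomalous

# Usage idea: score all records, then flag the highest-scoring ones for human-error review.
```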

A Case Study of Creative Art Based on AI Generation Technology

  • Qianqian Jiang;Jeanhun Chung
    • International journal of advanced smart convergence / Vol.12 No.2 / pp.84-89 / 2023
  • In recent years, with breakthroughs of Artificial Intelligence (AI) in deep learning algorithms such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), AI generation technology has expanded rapidly into various subfields of art. In 2022, an explosive year for AI-generated art, many excellent works were produced, especially in creative design, improving the efficiency of art and design work. This study analyzes the design characteristics of AI generation technology in two subfields of artistic creative design, AI painting and AI animation production, and compares the differences between traditional painting and AI painting. Based on this analysis, the advantages and problems of the AI creative design process are summarized. Although AI art design is constrained by technical limitations, with remaining flaws in the artworks and practical problems such as copyright and income, it provides a strong technical foundation for expanding artistic innovation and technology integration in specialized subfields and has high research value.

Image Translation of SDO/AIA Multi-Channel Solar UV Images into Another Single-Channel Image by Deep Learning

  • Lim, Daye;Moon, Yong-Jae;Park, Eunsu;Lee, Jin-Yi
    • 천문학회보 / Vol.44 No.2 / pp.42.3-42.3 / 2019
  • We translate Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) ultraviolet (UV) multi-channel images into another UV single-channel image using a deep learning algorithm based on conditional generative adversarial networks (cGANs). The base input channel, which has the highest correlation coefficient (CC) with the other AIA UV channels, is 193 Å. To complement this channel, we choose two more, 1600 and 304 Å, which represent the upper photosphere and the chromosphere, respectively. The input channels for the three models are single (193 Å), dual (193+1600 Å), and triple (193+1600+304 Å), respectively. Quantitative comparisons are made on test data sets. The main results from this study are as follows. First, the single model successfully produces other coronal channel images but is less successful for the chromospheric channel (304 Å) and much less successful for the two photospheric channels (1600 and 1700 Å). Second, the dual model shows a noticeable improvement in the CC between the model outputs and the ground truths for 1700 Å. Third, the triple model can generate all other channel images with relatively high CCs, larger than 0.89. Our results suggest that if three channels from the photosphere, chromosphere, and corona are selected, the other multi-channel images could be generated by deep learning. We expect that this investigation will be a complementary tool for choosing a few UV channels for future solar small and/or deep space missions.
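
The channel-to-channel comparison above rests on correlation coefficients; in its simplest pixel-to-pixel form this is just a Pearson correlation over pixels, and the helper below is a generic sketch of that metric rather than the paper's exact evaluation code.

```python
# Pixel-to-pixel Pearson correlation between a generated image and its ground truth (sketch).
import numpy as np

def pixel_correlation(generated, truth):
    """Both inputs are 2-D arrays of the same shape; returns the Pearson CC over pixels."""
    g = np.asarray(generated, dtype=float).ravel()
    t = np.asarray(truth, dtype=float).ravel()
    return np.corrcoef(g, t)[0, 1]
```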

Solar farside magnetograms from deep learning analysis of STEREO/EUVI data

  • Kim, Taeyoung;Park, Eunsu;Lee, Harim;Moon, Yong-Jae;Bae, Sung-Ho;Lim, Daye;Jang, Soojeong;Kim, Lokwon;Cho, Il-Hyun;Choi, Myungjin;Cho, Kyung-Suk
    • 천문학회보 / Vol.44 No.1 / pp.51.3-51.3 / 2019
  • Solar magnetograms are important for studying solar activity and predicting space weather disturbances [1]. Farside magnetograms can be constructed from local helioseismology without any farside data [2-4], but their quality is lower than that of typical frontside magnetograms. Here we generate farside solar magnetograms from STEREO/Extreme UltraViolet Imager (EUVI) 304-Å images using a deep learning model based on conditional generative adversarial networks (cGANs). We train the model using pairs of Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) 304-Å images and SDO/Helioseismic and Magnetic Imager (HMI) magnetograms taken from 2011 to 2017, excluding September and October of each year. We evaluate the model by comparing pairs of SDO/HMI magnetograms and cGAN-generated magnetograms in September and October. Our method successfully generates frontside solar magnetograms from SDO/AIA 304-Å images that are similar to the SDO/HMI ones, with Hale-patterned active regions being well replicated. Thus we can monitor the temporal evolution of magnetic fields from the farside to the frontside of the Sun using SDO/HMI magnetograms and the farside magnetograms generated by our model when farside extreme-ultraviolet data are available. This study presents an application of image-to-image translation based on cGANs to scientific data.
