• Title/Abstract/Keywords: gans

Search results: 62

생성적 적대 신경망을 이용한 함정전투체계 획득 영상의 초고해상도 영상 복원 연구 (A Study on Super Resolution Image Reconstruction for Acquired Images from Naval Combat System using Generative Adversarial Networks)

  • 김동영
    • 디지털콘텐츠학회 논문지 / Vol. 19, No. 6 / pp.1197-1205 / 2018
  • In this paper, images acquired from the EOTS and IRST of a naval combat system are reconstructed into super-resolution images. We use a generative adversarial network consisting of a generator, which produces super-resolution images from low-resolution inputs, and a discriminator, which distinguishes generated images from real ones, and we propose optimal values by varying several training parameters. The training parameters examined are the crop size, the sub-pixel layer depth, and the type of training image; for evaluation, a feature-point extraction algorithm is used in addition to standard image quality metrics. The results show that larger crop sizes, deeper sub-pixel layers, and higher-resolution training images produce better-quality images.
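The sub-pixel layer whose depth is varied above is, in essence, a pixel-shuffle upsampling step. A minimal NumPy sketch (function name and shapes are illustrative, not taken from the paper) that rearranges channel blocks into spatial resolution:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) array into (C, H*r, W*r).

    Each group of r*r channels is interleaved into an r-by-r spatial
    block, trading channel depth for resolution.
    """
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c2 // (r * r)
    # split channels into (c, r, r), then interleave with the spatial axes
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

The paper's "sub-pixel layer depth" parameter presumably controls how many such convolution-plus-shuffle stages are stacked before the output.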

광산란과 입자포집을 이용한 동축류 확산화염 내의 실리카 입자의 성장 측정(I) - 화염온도의 영향 - (An Experimental Study of Silica Particle Growth in a Coflow Diffusion Flame Utilizing Light Scattering and Local Sampling Technique (I) - Effects of Flame Temperature -)

  • 조재걸;이정훈;김현우;최만수
    • 대한기계학회논문집B / Vol. 23, No. 9 / pp.1139-1150 / 1999
  • The evolution of silica aggregate particles in coflow diffusion flames has been studied experimentally using light scattering and thermophoretic sampling techniques. Measurements of the scattering cross section from $90^{\circ}$ light scattering were used to calculate the aggregate number density and volume fraction, in combination with measurements of particle size and morphology obtained through localized sampling and TEM image analysis. Aggregate and particle number densities and volume fractions were calculated using Rayleigh-Debye-Gans theory for fractal aggregates and Mie theory for spherical particles, respectively. Of particular interest are the effects of flame temperature on the evolution of silica aggregate particles. As the flow rate of $H_2$ increases, the primary particle diameters of silica aggregates first decrease; however, a further increase in the $H_2$ flow rate causes the primary particle diameter to increase, and at sufficiently high flow rates the fractal aggregates finally become spherical particles. The variation of primary particle size along the upward jet centerline and the effect of burner configuration have also been studied.

Detecting Malicious Social Robots with Generative Adversarial Networks

  • Wu, Bin;Liu, Le;Dai, Zhengge;Wang, Xiujuan;Zheng, Kangfeng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 11 / pp.5594-5615 / 2019
  • Malicious social robots, which are disseminators of malicious information on social networks, seriously affect information security and network environments. The detection of malicious social robots is a hot topic and a significant concern for researchers. A method based on classification has been widely used for social robot detection. However, this method of classification is limited by an unbalanced data set in which legitimate, negative samples outnumber malicious robots (positive samples), which leads to unsatisfactory detection results. This paper proposes the use of generative adversarial networks (GANs) to extend the unbalanced data sets before training classifiers to improve the detection of social robots. Five popular oversampling algorithms were compared in the experiments, and the effects of imbalance degree and the expansion ratio of the original data on oversampling were studied. The experimental results showed that the proposed method achieved better detection performance compared with other algorithms in terms of the F1 measure. The GAN method also performed well when the imbalance degree was smaller than 15%.
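The core oversampling idea in this abstract — using a trained GAN generator to synthesize minority-class (malicious robot) samples until the data set is balanced — can be sketched as follows. The generator here is a stand-in function, not the paper's trained network:

```python
import numpy as np

def oversample_with_generator(X_minority, n_majority, generator, noise_dim, rng):
    """Append GAN-generated minority samples until classes are balanced.

    `generator` maps latent noise vectors to synthetic feature vectors;
    in practice it would be a GAN generator trained on X_minority.
    """
    n_needed = n_majority - len(X_minority)
    if n_needed <= 0:
        return X_minority          # already balanced
    z = rng.standard_normal((n_needed, noise_dim))  # latent noise
    synthetic = generator(z)                        # noise -> samples
    return np.vstack([X_minority, synthetic])
```

A classifier is then trained on the expanded set; the paper compares this against five conventional oversampling algorithms.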

Application of Deep Learning to Solar Data: 3. Generation of Solar images from Galileo sunspot drawings

  • Lee, Harim;Moon, Yong-Jae;Park, Eunsu;Jeong, Hyunjin;Kim, Taeyoung;Shin, Gyungin
    • 천문학회보 / Vol. 44, No. 1 / pp.81.2-81.2 / 2019
  • We develop an image-to-image translation model, a popular deep learning method based on conditional Generative Adversarial Networks (cGANs), to generate solar magnetograms and EUV images from sunspot drawings. For this, we train the model using pairs of sunspot drawings from Mount Wilson Observatory (MWO) and their corresponding SDO/HMI magnetograms and SDO/AIA EUV images (512 by 512) from January 2012 to September 2014. We test the model by comparing pairs of actual SDO images (magnetograms and EUV images) and the corresponding AI-generated ones from October to December 2014. Our results show that the bipolar structures and coronal loop structures of the AI-generated images are consistent with those of the original ones. We find that their unsigned magnetic fluxes correlate well with those of the original ones, with a correlation coefficient of 0.86. We also obtain pixel-to-pixel correlations between actual EUV images and AI-generated ones. The average correlations over 92 test samples for several SDO lines are very good: 0.88 for AIA 211, 0.87 for AIA 1600, and 0.93 for AIA 1700. These results imply that the AI-generated EUV images are quite similar to the AIA ones. Applying this model to the Galileo sunspot drawings of 1612, we generate HMI-like magnetograms and AIA-like EUV images of the sunspots. This application can be used to generate solar images from historical sunspot drawings.


Application of Deep Learning to Solar Data: 1. Overview

  • Moon, Yong-Jae;Park, Eunsu;Kim, Taeyoung;Lee, Harim;Shin, Gyungin;Kim, Kimoon;Shin, Seulki;Yi, Kangwoo
    • 천문학회보 / Vol. 44, No. 1 / pp.51.2-51.2 / 2019
  • Multi-wavelength observations have become very popular in astronomy. Even though there are some correlations among images from different sensors, it is not easy to translate from one to another. In this study, we apply a deep learning method for image-to-image translation, based on conditional generative adversarial networks (cGANs), to solar images. To examine the validity of the method for scientific data, we consider several different types of pairs: (1) generation of SDO/EUV images from SDO/HMI magnetograms, (2) generation of backside magnetograms from STEREO/EUVI images, (3) generation of EUV and X-ray images from Carrington sunspot drawings, and (4) generation of solar magnetograms from Ca II images. The AI-generated images are quite consistent with the actual ones. In addition, we apply a convolutional neural network to the forecasting of solar flares and find that our method outperforms the conventional method. Our study also shows that forecasting solar proton flux profiles with a Long Short-Term Memory (LSTM) method is better than with an autoregressive method. We will discuss several applications of these methodologies for scientific research.
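The cGAN image-to-image translation used throughout this series typically follows a pix2pix-style objective: an adversarial term plus an L1 reconstruction term. A minimal NumPy sketch of such a generator loss (an assumption about the formulation; the abstract does not state the exact loss):

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake, target, lam=100.0):
    """Generator objective in a pix2pix-style cGAN.

    d_fake : discriminator outputs (probabilities) on generated images
    fake   : generated image array
    target : ground-truth image array
    lam    : weight of the L1 reconstruction term (100 in pix2pix)
    """
    eps = 1e-8
    adv = -np.mean(np.log(d_fake + eps))   # push D(fake) toward 1
    l1 = np.mean(np.abs(fake - target))    # stay close to the paired target
    return adv + lam * l1
```

The heavy L1 weighting is what makes the translated images pixel-faithful to their paired targets rather than merely plausible.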


Few-Shot Content-Level Font Generation

  • Majeed, Saima;Hassan, Ammar Ul;Choi, Jaeyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 4 / pp.1166-1186 / 2022
  • Artistic font design has become an integral part of visual media. However, without prior knowledge of the font domain, it is difficult to create distinct font styles. When the number of characters is limited, this task becomes easier (e.g., only Latin characters). However, designing CJK (Chinese, Japanese, and Korean) characters presents a challenge due to the large number of character sets and complexity of the glyph components in these languages. Numerous studies have been conducted on automating the font design process using generative adversarial networks (GANs). Existing methods rely heavily on reference fonts and perform font style conversions between different fonts. Additionally, rather than capturing style information for a target font via multiple style images, most methods do so via a single font image. In this paper, we propose a network architecture for generating multilingual font sets that makes use of geometric structures as content. Additionally, to acquire sufficient style information, we employ multiple style images belonging to a single font style simultaneously to extract global font style-specific information. By utilizing the geometric structural information of content and a few stylized images, our model can generate an entire font set while maintaining the style. Extensive experiments were conducted to demonstrate the proposed model's superiority over several baseline methods. Additionally, we conducted ablation studies to validate our proposed network architecture.
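The multi-image style aggregation described above — extracting one global font-style representation from several images of the same font — can be sketched as encoding each style image and averaging the codes. The encoder below is a placeholder for the paper's learned style encoder:

```python
import numpy as np

def aggregate_style(style_images, encoder):
    """Average per-image style codes into one global font-style vector.

    `encoder` stands in for a trained style-encoder network that maps
    one glyph image to a style embedding; averaging several embeddings
    of the same font pools style cues no single glyph carries alone.
    """
    codes = np.stack([encoder(img) for img in style_images])
    return codes.mean(axis=0)
```

The generator then conditions on this pooled vector together with the geometric content structure to render every character of the font set.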

Using artificial intelligence to detect human errors in nuclear power plants: A case in operation and maintenance

  • Ezgi Gursel ;Bhavya Reddy ;Anahita Khojandi;Mahboubeh Madadi;Jamie Baalis Coble;Vivek Agarwal ;Vaibhav Yadav;Ronald L. Boring
    • Nuclear Engineering and Technology / Vol. 55, No. 2 / pp.603-622 / 2023
  • Human error (HE) is an important concern in safety-critical systems such as nuclear power plants (NPPs). HE has played a role in many accidents and outage incidents in NPPs. Despite the increased automation in NPPs, HE remains unavoidable; hence, detecting HE is as important as preventing it. In NPPs, HE is rather rare, so anomaly detection, a widely used machine learning technique for detecting rare anomalous instances, can be repurposed to detect potential HE. In this study, we develop an unsupervised anomaly detection technique based on generative adversarial networks (GANs) to detect anomalies in manually collected surveillance data in NPPs. More specifically, our GAN is trained to detect mismatches between automatically recorded sensor data and manually collected surveillance data and hence identify anomalous instances that can be attributed to HE. We test our GAN on both a real-world dataset and an external dataset obtained from a testbed, and we benchmark our results against state-of-the-art unsupervised anomaly detection algorithms, including one-class support vector machine and isolation forest. Our results show that the proposed GAN provides improved anomaly detection performance. Our study is promising for the future development of artificial intelligence-based HE detection systems.
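GAN-based anomaly detection commonly scores a sample by how poorly the trained GAN reconstructs it. A minimal AnoGAN-style sketch (an assumed scoring scheme for illustration; the paper's exact score is not given in the abstract):

```python
import numpy as np

def anomaly_score(x, x_rec, feat_real, feat_rec, alpha=0.9):
    """AnoGAN-style anomaly score.

    x, x_rec           : input sample and its GAN reconstruction
    feat_real, feat_rec: discriminator features of each
    Combines the pixel-level residual with a discriminator-feature
    residual; large scores flag samples the GAN cannot explain,
    i.e. candidate human-error mismatches.
    """
    residual = np.mean(np.abs(x - x_rec))
    disc = np.mean(np.abs(feat_real - feat_rec))
    return alpha * residual + (1.0 - alpha) * disc
```

Thresholding this score separates normal sensor/surveillance agreement from anomalous mismatches, analogous to the decision boundaries of the one-class SVM and isolation forest baselines.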

A Case Study of Creative Art Based on AI Generation Technology

  • Qianqian Jiang;Jeanhun Chung
    • International journal of advanced smart convergence / Vol. 12, No. 2 / pp.84-89 / 2023
  • In recent years, with breakthroughs of Artificial Intelligence (AI) in deep learning algorithms such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), AI generation technology has rapidly expanded into various sub-sectors of the art field. The year 2022 was an explosive one for AI-generated art; many excellent works were created, especially in AI-generated creative design, improving the efficiency of art design work. This study analyzes the design characteristics of AI generation technology in two subfields of artistic creative design, AI painting and AI animation production, and compares the differences between traditional painting and AI painting. Through this research, the advantages of and problems in the AI creative design process are summarized. Although AI art design is constrained by technical limitations, and artworks still exhibit flaws as well as practical problems such as copyright and income, it provides a strong technical foundation for expanding the subdivisions of artistic innovation and technology integration, and it has very high research value.

Image Translation of SDO/AIA Multi-Channel Solar UV Images into Another Single-Channel Image by Deep Learning

  • Lim, Daye;Moon, Yong-Jae;Park, Eunsu;Lee, Jin-Yi
    • 천문학회보 / Vol. 44, No. 2 / pp.42.3-42.3 / 2019
  • We translate Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) ultraviolet (UV) multi-channel images into another UV single-channel image using a deep learning algorithm based on conditional generative adversarial networks (cGANs). The base input channel, which has the highest correlation coefficient (CC) with the other UV channels of AIA, is 193 Å. To complement this channel, we choose two channels, 1600 and 304 Å, which represent the upper photosphere and the chromosphere, respectively. The input channels for the three models are single (193 Å), dual (193+1600 Å), and triple (193+1600+304 Å). Quantitative comparisons are made on test data sets. The main results of this study are as follows. First, the single model successfully produces the other coronal channel images but is less successful for the chromospheric channel (304 Å) and much less successful for the two photospheric channels (1600 and 1700 Å). Second, the dual model shows a noticeable improvement in the CC between the model outputs and the ground truths for 1700 Å. Third, the triple model can generate all the other channel images with relatively high CCs, larger than 0.89. Our results show that if three channels from the photosphere, chromosphere, and corona are selected, the other multi-channel images can be generated by deep learning. We expect this investigation to be a complementary tool for choosing a few UV channels for future solar small and/or deep space missions.
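The pixel-to-pixel correlation coefficient used to evaluate generated images in these solar studies is a plain Pearson correlation over flattened pixel values:

```python
import numpy as np

def pixel_correlation(img_a, img_b):
    """Pixel-to-pixel Pearson correlation between two same-shaped images.

    Flattens both images and returns the off-diagonal entry of the
    2x2 correlation matrix; 1.0 means a perfect linear match.
    """
    return float(np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1])
```

Averaging this quantity over a test set of image pairs yields per-channel scores like the 0.88-0.93 CCs reported for the AIA lines.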


Study on 2D Sprite Generation Using the Impersonator Network

  • Yongjun Choi;Beomjoo Seo;Shinjin Kang;Jongin Choi
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 7 / pp.1794-1806 / 2023
  • This study presents a method for capturing photographs of users as input and converting them into 2D character animation sprites using a generative adversarial network (GAN)-based artificial intelligence network. Traditionally, 2D character animations have been created by manually drawing an entire sequence of sprite images, which incurs high development costs. To address this issue, this study proposes a technique that combines motion videos and sample 2D images. In the 2D sprite generation process that uses the proposed technique, a sequence of images is extracted from real-life footage captured by the user, and these are combined with character images from within the game. Our research leverages cutting-edge deep learning-based image manipulation techniques, such as the GAN-based motion transfer network (Impersonator) and background noise removal (U2-Net), to generate a sequence of animation sprites from a single image. The proposed technique enables the creation of diverse animations and motions from just one image. By utilizing these advancements, we aim to enhance productivity and creativity in the game and animation industry through improved efficiency and streamlined production processes.