• Title/Summary/Keyword: Generative adversarial neural networks

High Representation based GAN defense for Adversarial Attack

  • Sutanto, Richard Evan;Lee, Suk Ho
    • International journal of advanced smart convergence
    • /
    • v.8 no.1
    • /
    • pp.141-146
    • /
    • 2019
  • These days, many applications use neural networks as parts of their systems. At the same time, adversarial examples have become an important issue concerning the security of neural networks: a classifier can be fooled by an adversarial example into misclassifying an input. Much research counters adversarial examples with denoising methods, some of which use a GAN (Generative Adversarial Network) to remove adversarial noise from input images. By producing an image from the generator network that is close enough to the original clean image, the effect of adversarial examples can be reduced. However, because adversarial noise is not like ordinary noise, some of it may survive this approximation process. To address this, we propose a method that utilizes the high-level representation in the classifier by combining a GAN with a trained U-Net network. The approach minimizes a loss defined on high-level representation terms, so as to minimize the difference between the high-level representation of the clean data and that of the approximated output of the noisy data in the training dataset. Furthermore, the generated output is checked for minimal error against the true label: the U-Net is trained with the true labels to make sure the generated output yields minimal error in the end. As a result, any adversarial noise that still remains after the low-level approximation can be removed by the U-Net, owing to the minimization of the high-level representation terms.
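
The following is a minimal sketch of the kind of high-level-representation loss described above, written in PyTorch. The frozen VGG16 feature trunk, the layer cut-off, the loss weights, and the `denoiser` interface are illustrative assumptions, not the authors' exact GAN/U-Net design.

```python
import torch.nn as nn
import torchvision.models as models

class FeatureExtractor(nn.Module):
    """Frozen classifier trunk used to compare high-level representations."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        self.features = vgg.features[:23]        # up to a mid/high conv block (assumed cut-off)
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, x):
        return self.features(x)

def defense_loss(denoiser, feat, x_adv, x_clean, lam_pix=1.0, lam_hlr=10.0):
    """Pixel term keeps the denoised image close to the clean one; the
    high-level-representation term pulls the classifier features of the
    denoised image toward those of the clean image."""
    x_hat = denoiser(x_adv)                      # GAN generator / U-Net output
    pixel_loss = nn.functional.mse_loss(x_hat, x_clean)
    hlr_loss = nn.functional.mse_loss(feat(x_hat), feat(x_clean))
    return lam_pix * pixel_loss + lam_hlr * hlr_loss
```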

Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks (국방용 합성이미지 데이터셋 생성을 위한 대립훈련신경망 기술 적용 연구)

  • Yang, Hunmin
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.22 no.1
    • /
    • pp.49-59
    • /
    • 2019
  • Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to implicitly model high-dimensional, complex data distributions and to generate new data samples from the model distribution. This paper investigates the training methodology, architectures, and various applications of generative adversarial networks. An experimental evaluation is also conducted on generating synthetic image datasets for defense using two types of GANs: the first generates military images with a deep convolutional generative adversarial network (DCGAN), and the other performs visible-to-infrared image translation with a cycle-consistent generative adversarial network (CycleGAN). Each model can yield a great diversity of high-fidelity synthetic images compared to the training images. This result opens up the possibility of using inexpensive synthetic images to train neural networks while avoiding the enormous expense of collecting large amounts of hand-annotated real data.
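
As a reference point for the DCGAN mentioned above, here is a standard DCGAN-style generator in PyTorch. The 64x64 output size, filter counts, and 100-dimensional latent vector are common defaults assumed for illustration, not the paper's reported configuration.

```python
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Maps a latent vector z of shape (N, z_dim, 1, 1) to a 64x64 RGB image."""
    def __init__(self, z_dim=100, base=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),     # -> 4x4
            nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),  # -> 8x8
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),  # -> 16x16
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),      # -> 32x32
            nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),      # -> 64x64
            nn.Tanh(),                                                    # outputs in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)
```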

Performance Comparisons of GAN-Based Generative Models for New Product Development (신제품 개발을 위한 GAN 기반 생성모델 성능 비교)

  • Lee, Dong-Hun;Lee, Se-Hun;Kang, Jae-Mo
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.6
    • /
    • pp.867-871
    • /
    • 2022
  • Amid rapidly changing trends, changes in design have a great impact on the sales of fashion companies, so new designs must be chosen carefully. With the recent development of artificial intelligence, various machine learning techniques are being used in the fashion market to better capture consumer preferences. To quantify an abstract concept such as preference and thereby increase reliability in new product development, we generate new images that do not exist using three generative adversarial networks (GANs) and numerically compare the abstract concept of preference using a pre-trained convolutional neural network (CNN). The three models, trained to produce comparable high-quality images, are a deep convolutional generative adversarial network (DCGAN), a progressive growing generative adversarial network (PGGAN), and a dual discriminator generative adversarial network (D2GAN). The measured degree of similarity was treated as the preference, and the experimental results showed that D2GAN achieved relatively high similarity compared to DCGAN and PGGAN.
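
A minimal sketch of the preference-scoring idea described above: embed a generated design and a set of reference images with a pre-trained CNN and use the cosine similarity of the features as the preference proxy. The choice of ResNet-50 and global-average-pooled features is an assumption; the abstract does not specify which pre-trained CNN was used.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Pre-trained CNN with the classification head removed, used as a feature embedder.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
embed = nn.Sequential(*list(resnet.children())[:-1])
embed.eval()

@torch.no_grad()
def preference_score(generated, references):
    """generated: (1, 3, H, W); references: (N, 3, H, W).
    Returns the mean cosine similarity, used here as a preference proxy."""
    g = embed(generated).flatten(1)              # (1, 2048)
    r = embed(references).flatten(1)             # (N, 2048)
    return nn.functional.cosine_similarity(g, r).mean().item()
```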

Deep Adversarial Residual Convolutional Neural Network for Image Generation and Classification

  • Haque, Md Foysal;Kang, Dae-Seong
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.1
    • /
    • pp.111-120
    • /
    • 2020
  • Generative adversarial networks (GANs) have achieved impressive performance in image generation and visual classification applications. However, adversarial networks have difficulty combining the generative model with a stable training process. To overcome this problem, we combined a deep residual network with upsampling convolutional layers to construct the generative network. Moreover, the study shows that image generation and classification performance become more prominent when residual layers are included in the generator. The proposed network empirically shows that the ability to generate images with higher visual accuracy can be obtained with a certain amount of additional complexity and proper regularization techniques. The experimental evaluation shows that the proposed method is superior on image generation and classification tasks.
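
A small illustration of the two building blocks named above, a residual block and an upsample-then-convolve layer, in PyTorch. Channel counts and normalization choices are assumptions; the paper's exact generator configuration is not given in the abstract.

```python
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block: the input is added back to the convolved output."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

class UpsampleConv(nn.Module):
    """Nearest-neighbour upsampling followed by a convolution, a common
    alternative to transposed convolution in GAN generators."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)

    def forward(self, x):
        return self.conv(self.up(x))
```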

HiGANCNN: A Hybrid Generative Adversarial Network and Convolutional Neural Network for Glaucoma Detection

  • Alsulami, Fairouz;Alseleahbi, Hind;Alsaedi, Rawan;Almaghdawi, Rasha;Alafif, Tarik;Ikram, Mohammad;Zong, Weiwei;Alzahrani, Yahya;Bawazeer, Ahmed
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.9
    • /
    • pp.23-30
    • /
    • 2022
  • Glaucoma is a chronic neuropathy that affects the optic nerve and can lead to blindness. The detection and prediction of glaucoma have become possible using deep neural networks; however, detection performance relies on the availability of a large amount of data. We therefore propose several frameworks, including a hybrid of a generative adversarial network and a convolutional neural network, to automate and improve glaucoma detection. The proposed frameworks are evaluated using five public glaucoma datasets. The framework that uses a deep convolutional generative adversarial network (DCGAN) and a pre-trained DenseNet model achieves classification accuracies of 99.6%, 99.08%, 99.4%, 98.69%, and 92.95% on the RIMONE, Drishti-GS, ACRIMA, ORIGA-light, and HRF datasets, respectively. Based on the experimental results and evaluation, the proposed framework closely competes with state-of-the-art methods on the five public glaucoma datasets without requiring any manual preprocessing step.
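
A minimal sketch of the classification stage: fine-tuning a pre-trained DenseNet for binary glaucoma detection on fundus images, where the training batches may mix real images with GAN-generated ones. DenseNet-121, the optimizer, and the learning rate are assumptions; the abstract only states that a pre-trained DenseNet is used.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Pre-trained DenseNet with its classifier replaced by a two-class head.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, 2)   # glaucoma / normal

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step; `images` may mix real and GAN-synthesized fundus images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```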

A Novel Text to Image Conversion Method Using Word2Vec and Generative Adversarial Networks

  • LIU, XINRUI;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.05a
    • /
    • pp.401-403
    • /
    • 2019
  • In this paper, we propose a generative adversarial network (GAN) based text-to-image generation method. In many natural language processing tasks, word representations are determined by their term frequency-inverse document frequency scores. Word2Vec is a type of neural network model that, given an unlabeled corpus, produces vectors expressing the semantics of the words in the corpus, and an image is then generated by GAN training according to the obtained vector. Thanks to this understanding of the words, we can generate higher-quality and more realistic images. Our GAN structure is based on deep convolutional neural networks and pixel recurrent neural networks. Comparing the generated images with real images, we obtain about 88% similarity on the Oxford-102 flowers dataset.
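
A small sketch of the conditioning step implied above: train Word2Vec on captions, average the word vectors of a caption into a single embedding, and concatenate it with the noise vector fed to the GAN generator. The gensim Word2Vec usage is standard; the toy captions, vector sizes, and mean-pooling are assumptions made for illustration.

```python
import numpy as np
import torch
from gensim.models import Word2Vec

# Toy caption corpus; in practice this would be the dataset's text descriptions.
sentences = [["yellow", "flower", "with", "round", "petals"],
             ["purple", "flower", "with", "a", "long", "stem"]]
w2v = Word2Vec(sentences, vector_size=128, min_count=1)

def text_embedding(words):
    """Average the word vectors of a caption into one conditioning vector."""
    vecs = [w2v.wv[w] for w in words if w in w2v.wv]
    return torch.tensor(np.mean(vecs, axis=0), dtype=torch.float32)

def generator_input(caption_words, z_dim=100):
    """Concatenate random noise with the caption embedding for the generator."""
    z = torch.randn(z_dim)
    return torch.cat([z, text_embedding(caption_words)])    # shape: (z_dim + 128,)
```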

A Research on Re-examining Discriminator Design Space for Performance Improvement of ESRGAN (ESRGAN의 성능 향상을 위한 판별자 설계 공간 재검토에 관한 연구)

  • Sung-Wook Park;Jun-Yeong Kim;Jun Park;Se-Hoon Jung;Chun-Bo Sim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.513-514
    • /
    • 2023
  • Super-resolution is a technique for synthesizing a high-resolution image from a low-resolution one. Deep learning has been applied to this task, and in 2014 the SRCNN (Super Resolution Convolutional Neural Network) model was published. Models surpassing SRCNN have since been published, such as SRCAE (Super Resolution Convolutional Autoencoders) and SRGAN (Super Resolution Generative Adversarial Networks), the latter based on GANs (Generative Adversarial Networks). ESRGAN (Enhanced Super Resolution Generative Adversarial Networks) improved on the SRGAN model, but it still does not achieve perfect performance. In this paper, we improve the performance of ESRGAN by changing the structure of the discriminator. Based on the experiments, the proposed model is expected to show higher performance than ESRGAN.
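
The abstract states that the discriminator structure is changed but does not describe the new architecture, so the following is only a generic illustration of one possible direction: a PatchGAN-style discriminator with spectral normalization that could stand in for the VGG-style discriminator used in ESRGAN. It is not the authors' design.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def conv_block(in_ch, out_ch, stride):
    return nn.Sequential(
        spectral_norm(nn.Conv2d(in_ch, out_ch, 4, stride, 1)),
        nn.LeakyReLU(0.2, inplace=True),
    )

class PatchDiscriminator(nn.Module):
    """Outputs a grid of real/fake logits, one per image patch."""
    def __init__(self, channels=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(channels, base, 2),
            conv_block(base, base * 2, 2),
            conv_block(base * 2, base * 4, 2),
            nn.Conv2d(base * 4, 1, 4, 1, 1),    # per-patch logits
        )

    def forward(self, x):
        return self.net(x)
```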

Single Image Dehazing: An Analysis on Generative Adversarial Network

  • Amina Khatun;Mohammad Reduanul Haque;Rabeya Basri;Mohammad Shorif Uddin
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.136-142
    • /
    • 2024
  • Haze is a very common phenomenon that degrades or reduces visibility. It causes problems wherever high-quality images are required, such as in traffic and security monitoring, so haze removal from images has received great attention for achieving clear vision. Owing to its large impact, significant advances have been made, yet the task remains a challenging one. Recently, different types of deep generative adversarial networks (GANs) have been applied to suppress the noise and improve dehazing performance. However, it is unclear how these algorithms perform on hazy images acquired "in the wild" and how progress in the field should be gauged. This paper aims to bridge this gap. We present a comprehensive study and experimental evaluation of diverse GAN models for single image dehazing on benchmark datasets.

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • Anomaly detection was once dominated by methods that decide whether an abnormality exists based on statistics derived from the data. This methodology worked because data used to be low-dimensional and simple, so classical statistical methods were effective. However, as data characteristics have become more complex in the era of big data, it has become difficult to accurately analyze and predict the data produced across industry in the conventional way. Supervised learning algorithms based on SVMs and decision trees were therefore adopted. Yet supervised models only predict test data accurately when the class distribution is balanced, while most data generated in industry has imbalanced classes, so the predictions of a supervised model are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Thomas et al. (2017), is a model that performs anomaly detection on medical images; it is built from convolutional neural networks and has been used in the detection field. Compared to image data, however, there are few studies on anomaly detection for sequence data using generative adversarial networks. Li et al. (2018) proposed a model that classifies abnormalities in numerical sequence data using an LSTM, a type of recurrent neural network, but it has not been applied to categorical sequence data, nor has the feature matching method of Salimans et al. (2016). This suggests that much remains to be explored in anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is composed of LSTMs: the generator consists of two stacked LSTMs with 32-dimensional and 64-dimensional hidden unit layers, and the discriminator uses an LSTM with a 64-dimensional hidden unit layer. In existing work on anomaly detection for sequence data, the anomaly score is derived from the entropy of the probability assigned to the actual data; in this paper, as mentioned above, the anomaly score is instead derived using the feature matching technique. In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from the real categorical sequence data, it is not swayed by a single type of normal data, whereas the autoencoder is. The robustness test showed an accuracy of 92% for the autoencoder versus 96% for the adversarial network, and a sensitivity of 40% for the autoencoder versus 51% for the adversarial network. Experiments were also conducted to show how much performance changes with differences in the structure used to optimize the latent variables; sensitivity improved by about 1%. These results offer a new perspective on latent variable optimization, which has previously received relatively little attention.
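
A minimal sketch of the LSTM-based GAN and the feature-matching anomaly score described above, in PyTorch. The 32- and 64-dimensional hidden layers follow the abstract; the input/latent dimensions, the use of the last hidden state as the feature vector, and the reconstruction interface are assumptions.

```python
import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    """Two stacked LSTMs (32 -> 64 hidden units) mapping latent sequences to data sequences."""
    def __init__(self, z_dim=16, out_dim=8):
        super().__init__()
        self.lstm1 = nn.LSTM(z_dim, 32, batch_first=True)
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)
        self.out = nn.Linear(64, out_dim)

    def forward(self, z):                        # z: (N, T, z_dim)
        h, _ = self.lstm1(z)
        h, _ = self.lstm2(h)
        return self.out(h)                       # (N, T, out_dim)

class LSTMDiscriminator(nn.Module):
    """Single LSTM (64 hidden units) with a real/fake head."""
    def __init__(self, in_dim=8):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, 64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def features(self, x):
        h, _ = self.lstm(x)
        return h[:, -1]                          # last hidden state as the feature vector

    def forward(self, x):
        return self.head(self.features(x))

def anomaly_score(D, x_real, x_recon):
    """Feature-matching score: distance between discriminator features of a
    real sequence and of its reconstruction; larger means more anomalous."""
    return torch.norm(D.features(x_real) - D.features(x_recon), dim=1)
```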

A Study on Image Creation and Modification Techniques Using Generative Adversarial Neural Networks (생성적 적대 신경망을 활용한 부분 위변조 이미지 생성에 관한 연구)

  • Song, Seong-Heon;Choi, Bong-Jun;Moon, Mi-Kyeong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.2
    • /
    • pp.291-298
    • /
    • 2022
  • A generative adversarial network (GAN) is a network in which two internal neural networks (a generative network and a discriminative network) learn by competing with each other. The generator creates images close to reality, and the discriminator is trained to better discriminate the generator's images. This technology is being used in various ways to create, transform, and restore an entire image X into another image Y. This paper describes a method that extracts only a partial image from an original image and naturally forges it into another object. First, a new image is created with a previously trained DCGAN model from the partial image extracted from the original. The new image is then re-styled to match the texture and size of the original using an overall style transfer technique and is naturally combined with the original image. Through this study, a user can naturally add or transform a desired object image in a specific part of the original image, so the method can also be applied to creating fake images in other domains.
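
A rough sketch of the compositing step outlined above: sample a new object from a pre-trained DCGAN generator, resize it to the extracted region, and paste it back into the original image. The style-transfer re-styling step is omitted, and the region coordinates, generator interface, and bilinear resizing are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def replace_region(original, generator, box, z_dim=100):
    """original: (3, H, W) tensor in [-1, 1]; box: (top, left, height, width).
    Returns a copy of the image with the region replaced by a generated patch."""
    top, left, h, w = box
    z = torch.randn(1, z_dim, 1, 1)
    patch = generator(z)                           # e.g. (1, 3, 64, 64) from a DCGAN
    patch = F.interpolate(patch, size=(h, w), mode="bilinear",
                          align_corners=False)[0]
    edited = original.clone()
    edited[:, top:top + h, left:left + w] = patch  # naive paste; style matching omitted
    return edited
```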