• Title/Summary/Keyword: GAN(Generative Adversarial Networks)


A Study on Image Creation and Modification Techniques Using Generative Adversarial Neural Networks (생성적 적대 신경망을 활용한 부분 위변조 이미지 생성에 관한 연구)

  • Song, Seong-Heon;Choi, Bong-Jun;Moon, Mi-Kyeong
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.2 / pp.291-298 / 2022
  • A generative adversarial network (GAN) is a network in which two internal neural networks (a generative network and a discriminative network) learn by competing with each other. The generator creates images close to reality, while the discriminator is trained to better distinguish the generator's images from real ones. This technology is used in various ways to create, transform, and restore an entire image X into another image Y. This paper describes a method that extracts only a partial region from an original image and naturally forges it into another object. First, only a partial region is extracted from the original image and a new image is generated with a pre-trained DCGAN model. The generated image is then re-styled to match the texture and size of the original image using an overall style transfer technique and is naturally combined with the original. Through this study, a user can naturally add or transform a desired object in a specific part of an original image, which opens another field of application for creating fake images. (An illustrative sketch of the adversarial training step follows below.)
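
To make the generator-versus-discriminator competition described in this abstract concrete, here is a minimal sketch of one adversarial training step. It assumes PyTorch, and the toy networks, batch size, and learning rates are placeholders rather than the paper's DCGAN configuration.

```python
# Minimal sketch of one GAN training step (PyTorch assumed).
# The tiny fully connected networks below are placeholders, not the paper's DCGAN.
import torch
import torch.nn as nn

latent_dim = 100
G = nn.Sequential(nn.Linear(latent_dim, 64 * 64), nn.Tanh())    # toy generator
D = nn.Sequential(nn.Linear(64 * 64, 1), nn.Sigmoid())          # toy discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(16, 64 * 64)                 # stand-in for a batch of real images

# 1) Discriminator step: distinguish real images from generated ones.
fake = G(torch.randn(16, latent_dim)).detach()
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# 2) Generator step: try to make the discriminator label fakes as real.
fake = G(torch.randn(16, latent_dim))
loss_g = bce(D(fake), torch.ones(16, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```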

MSaGAN: Improved SaGAN using Guide Mask and Multitask Learning Approach for Facial Attribute Editing

  • Yang, Hyeon Seok;Han, Jeong Hoon;Moon, Young Shik
    • Journal of the Korea Society of Computer and Information / v.25 no.5 / pp.37-46 / 2020
  • Recently, studies of facial attribute editing have obtained realistic results using generative adversarial networks (GAN) and encoder-decoder structures. Spatial attention GAN (SaGAN), one of the latest approaches, can change only the desired attribute in a face image through a spatial attention mechanism. However, it sometimes produces unnatural results due to insufficient information on face regions. In this paper, we propose an improved SaGAN (MSaGAN) that uses a guide mask during training and applies a multitask learning approach to overcome the limitations of existing methods. Through extensive experiments, we evaluated the facial attribute editing results in terms of the mask loss function and the neural network structure. The proposed method efficiently produces more natural results than previous methods. (An illustrative sketch of spatial-attention blending follows below.)
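
The spatial attention idea above can be illustrated with a short sketch: an attention mask decides where the edit is applied, and a guide-mask term supervises it. The modules and the L1 supervision below are hypothetical stand-ins (assuming PyTorch), not the actual MSaGAN implementation.

```python
# Sketch of spatial-attention blending with a guide-mask term (PyTorch assumed).
# `edit_net` and `mask_net` are hypothetical stand-ins for SaGAN/MSaGAN modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

edit_net = nn.Conv2d(3, 3, 3, padding=1)                                # attribute-edited image
mask_net = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())   # attention mask in [0, 1]

x = torch.rand(4, 3, 128, 128)       # input face images
guide = torch.rand(4, 1, 128, 128)   # guide masks marking the facial attribute region

edited = edit_net(x)
mask = mask_net(x)

# Only regions selected by the mask are changed; everything else is copied from the input.
output = mask * edited + (1.0 - mask) * x

# Guide-mask supervision: encourage the learned mask to follow the provided guide.
mask_loss = F.l1_loss(mask, guide)
```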

A Research on Re-examining Discriminator Design Space for Performance Improvement of ESRGAN (ESRGAN의 성능 향상을 위한 판별자 설계 공간 재검토에 관한 연구)

  • Sung-Wook Park;Jun-Yeong Kim;Jun Park;Se-Hoon Jung;Chun-Bo Sim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.513-514 / 2023
  • Super-resolution is a technique for synthesizing a high-resolution image from a low-resolution image. With deep learning applied to this task, the SRCNN (Super Resolution Convolutional Neural Network) model was introduced in 2014. Subsequently, models surpassing SRCNN were proposed, such as SRCAE (Super Resolution Convolutional Autoencoders) and the GAN (Generative Adversarial Networks)-based SRGAN (Super Resolution Generative Adversarial Networks). ESRGAN (Enhanced Super Resolution Generative Adversarial Networks) improved on SRGAN, but its performance is still not perfect. In this paper, we therefore improve the performance of ESRGAN by modifying its discriminator architecture. The proposed model is expected to show higher performance than ESRGAN in experiments. (A generic super-resolution discriminator sketch follows below.)
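
Since the paper redesigns the ESRGAN discriminator, a neutral illustration of what a convolutional discriminator for super-resolution looks like may help; the sketch below (assuming PyTorch) is generic and is not the authors' redesigned architecture.

```python
# Generic convolutional discriminator for super-resolution GANs (PyTorch assumed).
# This is an illustration only, not the redesigned discriminator proposed in the paper.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, channels=3, base=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Conv2d(base * 4, 1, 3, padding=1)   # per-patch real/fake score

    def forward(self, x):
        return self.head(self.features(x))

scores = PatchDiscriminator()(torch.rand(2, 3, 128, 128))   # -> shape (2, 1, 16, 16)
```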

StarGAN-Based Detection and Purification Studies to Defend against Adversarial Attacks (적대적 공격을 방어하기 위한 StarGAN 기반의 탐지 및 정화 연구)

  • Sungjune Park;Gwonsang Ryu;Daeseon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.3 / pp.449-458 / 2023
  • Artificial intelligence provides convenience in various fields using big data and deep learning technologies. However, deep learning is highly vulnerable to adversarial examples, which can cause classification models to misclassify. This study proposes a method to detect and purify various adversarial attacks using StarGAN. The proposed method trains a StarGAN model with an added categorical entropy loss on adversarial examples generated by various attack methods, enabling the discriminator to detect adversarial examples and the generator to purify them. Experimental results on the CIFAR-10 dataset showed an average detection performance of approximately 68.77%, an average purification performance of approximately 72.20%, and an average defense performance of approximately 93.11% derived from the restoration and detection performance. (An illustrative sketch of the detect-then-purify flow follows below.)
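
A hedged sketch of the detect-then-purify inference flow described above, assuming PyTorch. The generator, discriminator, and downstream classifier are placeholder modules, not the trained StarGAN components from the paper, and the detection threshold is illustrative.

```python
# Sketch of the detect-then-purify flow (PyTorch assumed).
# `generator`, `discriminator`, and `classifier` are placeholder modules,
# not the trained StarGAN components from the paper.
import torch
import torch.nn as nn

generator = nn.Conv2d(3, 3, 3, padding=1)                                        # "purifier"
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1), nn.Sigmoid())
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))             # CIFAR-10-style classifier

def defend(x, threshold=0.5):
    """Detect suspected adversarial inputs and purify them before classification."""
    is_adv = (discriminator(x) > threshold).float().view(-1, 1, 1, 1)   # detection by the discriminator
    purified = generator(x)                                             # purification by the generator
    cleaned = is_adv * purified + (1.0 - is_adv) * x                    # keep inputs judged clean
    return classifier(cleaned)

logits = defend(torch.rand(8, 3, 32, 32))
```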

Deep Learning based Human Recognition using Integration of GAN and Spatial Domain Techniques

  • Sharath, S;Rangaraju, HG
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.127-136 / 2021
  • Real-time human recognition is a challenging task, as images are captured in unconstrained environments with different poses, makeup, and styles. This limitation is addressed by generating several facial images with varying pose, makeup, and style from a single reference image of a person using Generative Adversarial Networks (GAN). In this paper, we propose deep learning-based human recognition that integrates GAN and spatial domain techniques. We present a face depiction approach that generates several dissimilar face images from a single reference face image using a Domain Transfer Generative Adversarial Network (DT-GAN), combined with feature extraction techniques such as Local Binary Pattern (LBP) and histograms. Euclidean Distance (ED) is used in the matching stage to compare features and evaluate the method's performance. A database of millions of people with a single reference face image per person, instead of multiple reference images, is created and stored on a centralized server, which reduces its memory load. The recognition accuracy is 100% for smaller datasets and slightly lower for larger datasets, and the results are compared with existing methods to show the superiority of the proposed method. (An illustrative sketch of the LBP-histogram matching step follows below.)
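
The feature-matching portion of the pipeline (LBP, histogram, Euclidean distance) can be sketched as follows, assuming NumPy and scikit-image; the DT-GAN face generation step is omitted and the LBP parameters are illustrative.

```python
# Sketch of LBP + histogram features with Euclidean-distance matching
# (NumPy and scikit-image assumed; the DT-GAN image generation step is omitted).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    """Normalized histogram of uniform LBP codes as a face descriptor."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def match_distance(probe, reference):
    """Smaller Euclidean distance means a closer match."""
    return np.linalg.norm(lbp_histogram(probe) - lbp_histogram(reference))

probe = (np.random.rand(64, 64) * 255).astype(np.uint8)      # stand-ins for grayscale face crops
reference = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(match_distance(probe, reference))
```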

Perceptual Photo Enhancement with Generative Adversarial Networks (GAN 신경망을 통한 자각적 사진 향상)

  • Que, Yue;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.522-524 / 2019
  • Despite rapid improvements in the quality of built-in mobile cameras, their physical restrictions prevent them from achieving the satisfactory results of digital single-lens reflex (DSLR) cameras. In this work we propose an end-to-end deep learning method to translate ordinary mobile-camera images into DSLR-quality photos. The method is based on the generative adversarial network (GAN) framework with several improvements. First, we combine U-Net with DenseNet by connecting dense blocks (DB) within the U-Net; this Dense U-Net acts as the generator in our GAN model. Second, we improve the perceptual loss by using VGG features and pixel-wise content, which provides stronger supervision for contrast enhancement and texture recovery. (A minimal sketch of such a combined loss follows below.)
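
A minimal sketch of combining a VGG-feature perceptual term with a pixel-wise content term, as described above, assuming PyTorch and torchvision; the layer cut-off and loss weights are assumptions, and the Dense U-Net generator is omitted.

```python
# Sketch of a VGG-feature perceptual loss plus a pixel-wise content loss
# (PyTorch/torchvision assumed; the layer cut-off and weights are illustrative).
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# weights=None keeps the sketch offline; in practice pretrained ImageNet weights would be loaded.
vgg_features = vgg19(weights=None).features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def enhancement_loss(enhanced, target, w_pixel=1.0, w_perceptual=0.1):
    pixel_loss = F.l1_loss(enhanced, target)                                      # pixel-wise content term
    perceptual_loss = F.mse_loss(vgg_features(enhanced), vgg_features(target))    # VGG-feature term
    return w_pixel * pixel_loss + w_perceptual * perceptual_loss

loss = enhancement_loss(torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224))
```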

SkelGAN: A Font Image Skeletonization Method

  • Ko, Debbie Honghee;Hassan, Ammar Ul;Majeed, Saima;Choi, Jaeyoung
    • Journal of Information Processing Systems / v.17 no.1 / pp.1-13 / 2021
  • In this research, we study the problem of font image skeletonization using an end-to-end deep adversarial network, SkelGAN, in contrast with state-of-the-art methods that use mathematical algorithms. Several studies have been concerned with skeletonization, but few have utilized deep learning, and no study has considered deep generative models for font character skeletonization, even though font characters are more delicate than natural objects. In this work, we take a step closer to producing realistic synthesized skeletons of font characters. The proposed skeleton generator proves superior to well-known mathematical skeletonization methods in terms of character structure, including delicate strokes, serifs, and even special styles. Experimental results also demonstrate the advantage of our method over a state-of-the-art supervised image-to-image translation method on the font character skeletonization task.

GAN-based Color Palette Extraction System by Chroma Fine-tuning with Reinforcement Learning

  • Kim, Sanghyuk;Kang, Suk-Ju
    • Journal of Semiconductor Engineering / v.2 no.1 / pp.125-129 / 2021
  • As interest in deep learning grows, techniques for controlling the color of images in the image processing field are evolving with it. However, there is no clear standard for color, and it is not easy to represent color alone, as a color palette does. In this paper, we propose a novel color palette extraction system based on chroma fine-tuning with reinforcement learning, which helps recognize the color combination that represents an input image. First, we use RGBY images to create feature maps by transferring a backbone network with well-trained weights verified on super-resolution convolutional neural networks. Second, the feature maps are fed into three fully connected layers to generate the color palette with a generative adversarial network (GAN). Third, we apply a reinforcement learning method that changes only the chroma information of the GAN output by slightly moving the Y component of each pixel's YCbCr value up and down. The proposed method outperforms existing color palette extraction methods with an accuracy of 0.9140. (An illustrative sketch of the YCbCr adjustment step follows below.)
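
The chroma fine-tuning step can be sketched as a small YCbCr adjustment of the extracted palette, assuming NumPy and scikit-image; the GAN itself and the reinforcement learning policy are omitted, and the fixed `step` value stands in for a learned action.

```python
# Sketch of nudging palette colors in YCbCr space (NumPy and scikit-image assumed).
# The GAN and the reinforcement learning policy are omitted; `step` is a fixed nudge,
# standing in for the action the RL agent would choose.
import numpy as np
from skimage.color import rgb2ycbcr, ycbcr2rgb

def nudge_palette(palette_rgb, step=2.0):
    """Shift the Y component of each palette color slightly, then convert back to RGB."""
    ycbcr = rgb2ycbcr(palette_rgb.reshape(1, -1, 3))   # expects RGB floats in [0, 1]
    ycbcr[..., 0] += step                              # small up/down adjustment of the Y channel
    return np.clip(ycbcr2rgb(ycbcr), 0.0, 1.0).reshape(-1, 3)

palette = np.random.rand(5, 3)       # stand-in for a 5-color GAN-extracted palette
print(nudge_palette(palette))
```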

Research Trends of Generative Adversarial Networks and Image Generation and Translation (GAN 적대적 생성 신경망과 이미지 생성 및 변환 기술 동향)

  • Jo, Y.J.;Bae, K.M.;Park, J.Y.
    • Electronics and Telecommunications Trends / v.35 no.4 / pp.91-102 / 2020
  • Generative adversarial networks (GANs) are a rapidly emerging field of research in which many studies have shown impressive results. Initially, results were at the level of imitating the training dataset, but GANs are now useful in many areas, such as transforming data categories, restoring erased parts of images, copying human facial expressions, and creating artworks in the style of a deceased painter. Although many outstanding research achievements have attracted attention recently, GANs still face several challenges. First, they require large memory resources for research. Second, there are still technical limitations in processing high-resolution images above 4K. Third, many GAN training methods suffer from instability during the training stage. Nevertheless, recent results produce images that are difficult to distinguish from real ones even with the naked eye, and resolutions of 4K and above are being developed. With increasing image quality and resolution, many applications in design and in image and video editing have become available, including drawing a photorealistic image from a simple sketch or easily modifying unnecessary parts of an image or video. In this paper, we discuss how GANs started, including the base architecture, and the latest GAN technologies used in high-resolution, high-quality image creation, image and video editing, style translation, and content transfer. (The objective underlying the base architecture is recalled below.)
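
For reference, the base architecture discussed in this survey is ordinarily trained with the two-player minimax objective introduced by Goodfellow et al. (2014):

$$
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
$$

where the generator $G$ maps noise $z$ to synthetic samples and the discriminator $D$ estimates the probability that its input comes from the real data distribution.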

A Method for Generating Malware Countermeasure Samples Based on Pixel Attention Mechanism

  • Xiangyu Ma;Yuntao Zhao;Yongxin Feng;Yutao Hu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.2 / pp.456-477 / 2024
  • With the rapid development of information technology, the Internet faces serious security problems. Studies have shown that malware has become a primary means of attacking the Internet, so adversarial samples have become a vital breakthrough point for studying it. By studying adversarial samples, we can gain insight into the behavior and characteristics of malware, evaluate the performance of existing detectors against deceptive samples, and help discover vulnerabilities and improve detection methods. However, existing adversarial sample generation methods still fall short in escape effectiveness and mobility. For instance, researchers have attempted to incorporate perturbation methods such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) into adversarial samples to confuse detectors, but these methods are only effective in specific environments and yield limited evasion effectiveness. To solve these problems, this paper proposes a malware adversarial sample generation method (PixGAN) based on a pixel attention mechanism, which aims to improve the escape effect and mobility of adversarial samples. The method transforms malware into grayscale images and introduces the pixel attention mechanism into the Deep Convolutional Generative Adversarial Network (DCGAN) model to weight the critical pixels in the grayscale map, which improves the modeling ability of the generator and discriminator and thus enhances the escape effect and mobility of the adversarial samples. The escape rate (ASR) is used as an evaluation index of the quality of the adversarial samples. Experimental results show that the adversarial samples generated by PixGAN achieve escape rates of 97%, 94%, 35%, 39%, and 43% against Random Forest (RF), Support Vector Machine (SVM), Convolutional Neural Network (CNN), CNN with Recurrent Neural Network (CNN_RNN), and CNN with Long Short-Term Memory (CNN_LSTM) detectors, respectively. (An illustrative sketch of the byte-to-image conversion and a per-pixel attention gate follows below.)
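
A rough sketch of two ingredients described above: rendering malware bytes as a grayscale image, and a generic per-pixel attention gate inside a convolutional block. It assumes NumPy and PyTorch; the gate is an illustrative design, not necessarily the exact PixGAN module.

```python
# Sketch of (1) rendering malware bytes as a grayscale image and
# (2) a generic per-pixel attention gate inside a convolutional block
# (NumPy/PyTorch assumed; the gate is illustrative, not the exact PixGAN module).
import numpy as np
import torch
import torch.nn as nn

def bytes_to_grayscale(raw: bytes, width=64):
    """Pack a byte sequence into a 2-D uint8 array, zero-padding the last row."""
    data = np.frombuffer(raw, dtype=np.uint8)
    rows = -(-len(data) // width)                       # ceiling division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:len(data)] = data
    return padded.reshape(rows, width)

class PixelAttentionBlock(nn.Module):
    """Convolution followed by a learned per-pixel weighting of its own features."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        features = self.conv(x)
        return features * self.gate(features)           # emphasize critical pixels

image = bytes_to_grayscale(b"\x4d\x5a\x90\x00" * 300)    # toy byte sequence (not real malware)
x = torch.from_numpy(image).float().unsqueeze(0).unsqueeze(0) / 255.0
out = PixelAttentionBlock(1)(x)
```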