• Title/Summary/Keyword: Adversarial learning

Rapid Misclassification Sample Generation Attack on Deep Neural Network (딥뉴럴네트워크 상에 신속한 오인식 샘플 생성 공격)

  • Kwon, Hyun; Park, Sangjun; Kim, Yongchul
    • Convergence Security Journal / v.20 no.2 / pp.111-121 / 2020
  • Deep neural networks (DNNs) perform well on machine learning tasks such as image recognition and object recognition. However, DNNs are vulnerable to adversarial examples: attack samples crafted by adding minimal noise to an original sample so that the neural network misrecognizes it. A drawback of such attacks is that generating an adversarial example takes a long time, yet in some cases an attacker needs to induce misrecognition quickly. In this paper, we propose a fast misclassification sample generation method that can rapidly attack neural networks. The proposed method does not consider the distortion of the original sample when adding noise. We used MNIST and CIFAR10 as experimental data and TensorFlow as the machine learning library. Experimental results show that the fast misclassification samples generated by the proposed method require 50% and 80% fewer iterations on MNIST and CIFAR10, respectively, than the conventional Carlini method, and achieve a 100% attack success rate.
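
As a concrete illustration of the idea in this abstract, the sketch below iterates gradient-sign updates on the classification loss and returns at the first misclassification, with no term penalizing distortion from the original sample. This is a minimal reading of the approach, not the paper's implementation; the function and parameter names are illustrative.

```python
# Minimal sketch: ascend the loss with gradient-sign steps and stop at the
# first misclassification. Unlike the Carlini-Wagner attack, no distortion
# term keeps x_adv close to x, which is what trades distortion for speed.
import tensorflow as tf

def fast_misclassification_sample(model, x, y_true, step=0.01, max_iters=100):
    """Return an adversarial example as soon as the model is fooled.

    model: Keras model outputting class probabilities.
    x: input batch of shape (1, ...); y_true: int label tensor of shape (1,).
    """
    x_adv = tf.identity(x)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    for _ in range(max_iters):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            loss = loss_fn(y_true, model(x_adv))
        grad = tape.gradient(loss, x_adv)
        # Step in the direction that increases the loss; clip to valid pixels.
        x_adv = tf.clip_by_value(x_adv + step * tf.sign(grad), 0.0, 1.0)
        if tf.argmax(model(x_adv), axis=1)[0] != y_true[0]:
            break  # misclassified: return immediately, no distortion polish
    return x_adv
```

Dropping the distortion objective removes the inner optimization that minimal-noise attacks spend most of their iterations on, which is consistent with the abstract's iteration-count comparison against the Carlini method.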

Addressing Emerging Threats: An Analysis of AI Adversarial Attacks and Security Implications

  • HoonJae Lee; ByungGook Lee
    • International journal of advanced smart convergence / v.13 no.2 / pp.69-79 / 2024
  • AI technology is a central focus of the 4th Industrial Revolution. However, unlike existing non-AI technologies, it opens the door to new adversarial attacks on training data management, input data management, and other areas. These attacks, which exploit weaknesses in AI encryption technology, are not only emerging as social issues but are also expected to have a significant negative impact on existing IT and convergence industries. This paper examines recently developed AI adversarial attacks, categorizes them into five groups, and provides a foundational document for developing security guidelines to verify their safety. The findings confirm AI adversarial attacks that can be applied to various types of cryptographic modules incorporating AI technology (hardware, software, firmware, hybrid software, and hybrid firmware cryptographic modules). The aim is to offer a foundational document for the development of standardized protocols, which is expected to play a crucial role in rejuvenating the information security industry.

Constrained adversarial loss for generative adversarial network-based faithful image restoration

  • Kim, Dong-Wook; Chung, Jae-Ryun; Kim, Jongho; Lee, Dae Yeol; Jeong, Se Yoon; Jung, Seung-Won
    • ETRI Journal / v.41 no.4 / pp.415-425 / 2019
  • Generative adversarial networks (GANs) have been successfully used in many image restoration tasks, including image denoising, super-resolution, and compression artifact reduction. By fully exploiting their characteristics, state-of-the-art image restoration techniques can generate images with photorealistic details. However, many applications require faithful rather than merely visually appealing reconstruction, such as medical imaging, surveillance, and video coding. We found that previous GAN-training methods, which use a loss function in the form of a weighted sum of fidelity and adversarial losses, fail to reduce the fidelity loss. This results in non-negligible degradation of objective image quality, including peak signal-to-noise ratio. Our approach alternates between the fidelity and adversarial losses so that minimizing the adversarial loss does not deteriorate fidelity. Experimental results on compression-artifact reduction and super-resolution tasks show that the proposed method performs faithful and photorealistic image restoration.
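
The alternation described in the abstract can be pictured as follows: instead of one weighted-sum objective, the generator takes separate fidelity and adversarial steps. This is a minimal sketch under that reading; the paper's actual alternation schedule and loss forms may differ, and all names here are assumed.

```python
# Sketch: alternate fidelity (L1) and adversarial generator updates rather
# than minimizing a single weighted sum of the two losses.
import tensorflow as tf

def train_step(gen, disc, x_degraded, x_clean, g_opt, step_idx):
    with tf.GradientTape() as tape:
        restored = gen(x_degraded, training=True)
        if step_idx % 2 == 0:
            # Fidelity step: pull the restoration toward the ground truth.
            loss = tf.reduce_mean(tf.abs(restored - x_clean))
        else:
            # Adversarial step: push realism in its own update, so it cannot
            # directly trade fidelity away inside a shared objective.
            # disc is assumed to output a probability in (0, 1).
            loss = -tf.reduce_mean(tf.math.log(disc(restored) + 1e-8))
    grads = tape.gradient(loss, gen.trainable_variables)
    g_opt.apply_gradients(zip(grads, gen.trainable_variables))
    return loss
```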

GAN-based Color Palette Extraction System by Chroma Fine-tuning with Reinforcement Learning

  • Kim, Sanghyuk; Kang, Suk-Ju
    • Journal of Semiconductor Engineering / v.2 no.1 / pp.125-129 / 2021
  • As interest in deep learning grows, techniques for controlling the color of images in the image processing field are evolving with it. However, there is no clear standard for color, and it is not easy to represent color itself in isolation, as a color palette does. In this paper, we propose a novel color palette extraction system that fine-tunes chroma with reinforcement learning, helping to recognize the color combination that represents an input image. First, we use RGBY images to create feature maps by transferring a backbone network whose pretrained weights were verified on super-resolution convolutional neural networks. Second, the feature maps are fed into three fully connected layers that generate the color palette with a generative adversarial network (GAN). Third, we use a reinforcement learning method that changes only the chroma information of the GAN output by slightly moving the Y component of each pixel's YCbCr value up and down. The proposed method outperforms existing color palette extraction methods, achieving an accuracy of 0.9140.
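
To make the action space concrete: the abstract describes an agent that nudges only the Y component of each pixel's YCbCr value up or down. The sketch below implements that nudge with the standard BT.601 conversion; the per-pixel action encoding and step size are my assumptions, not the paper's.

```python
# Sketch of the Y-component nudge: convert RGB -> YCbCr (BT.601), shift Y
# per pixel by a small signed step, and convert back.
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr; rgb has shape (H, W, 3) in [0, 1]."""
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.168736, -0.331264, 0.5],
                  [0.5, -0.418688, -0.081312]])
    ycbcr = rgb @ m.T
    ycbcr[..., 1:] += 0.5
    return ycbcr

def ycbcr_to_rgb(ycbcr):
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1] - 0.5, ycbcr[..., 2] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def apply_y_action(rgb, action, delta=0.01):
    """action: per-pixel map in {-1, 0, +1} choosing the Y nudge direction."""
    ycbcr = rgb_to_ycbcr(rgb)
    ycbcr[..., 0] = np.clip(ycbcr[..., 0] + delta * action, 0.0, 1.0)
    return ycbcr_to_rgb(ycbcr)
```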

Adversarial Shade Generation and Training for Text Recognition Robust to Brightness Changes (밝기 변화에 강인한 적대적 음영 생성 및 훈련 글자 인식 알고리즘)

  • Seo, Minseok; Kim, Daehan; Choi, Dong-Geol
    • The Journal of Korea Robotics Society / v.16 no.3 / pp.276-282 / 2021
  • Systems for recognizing text in natural scenes have been applied in various industries. However, brightness changes that occur in nature, such as light reflection and shadow, significantly degrade text recognition performance. To solve this problem, we propose an adversarial shade generation and training algorithm that is robust to shade changes. The algorithm divides the entire image into a 3x3 grid of nine cells and adjusts the brightness of each cell with four trainable parameters. Training then proceeds with the text recognition model and the shaded-image generator in an adversarial relationship, so that increasingly difficult shaded grid combinations arise as training progresses. Training in this curriculum-learning manner not only improved performance by more than 3% on the ICDAR2015 public benchmark dataset but also improved performance when applied to our Android application text recognition dataset.
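
A minimal sketch of the per-cell shading operator described in the abstract: the image is split into a 3x3 grid and each cell's brightness is modulated by four trainable parameters. The abstract does not specify what the four parameters do, so this sketch assumes four corner gains blended bilinearly within each cell.

```python
# Sketch: per-cell brightness modulation with four corner gains per cell,
# blended bilinearly. Assumes H and W are divisible by 3 for simplicity.
import numpy as np

def apply_grid_shade(img, params):
    """img: (H, W) grayscale in [0, 1]; params: (3, 3, 4) corner gains."""
    h, w = img.shape
    gh, gw = h // 3, w // 3
    out = img.copy()
    for i in range(3):
        for j in range(3):
            tl, tr, bl, br = params[i, j]
            # Bilinear blend of the four corner gains inside this cell.
            ys = np.linspace(0, 1, gh)[:, None]
            xs = np.linspace(0, 1, gw)[None, :]
            gain = (tl * (1 - ys) * (1 - xs) + tr * (1 - ys) * xs
                    + bl * ys * (1 - xs) + br * ys * xs)
            cell = out[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            out[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw] = np.clip(cell * gain, 0, 1)
    return out
```

In the adversarial setup the abstract describes, the gains would be optimized to increase the recognizer's loss while the recognizer trains on the shaded output.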

Deep Learning-based Single Image Generative Adversarial Network: Performance Comparison and Trends (딥러닝 기반 단일 이미지 생성적 적대 신경망 기법 비교 분석)

  • Jeong, Seong-Hun; Kong, Kyeongbo
    • Journal of Broadcast Engineering / v.27 no.3 / pp.437-450 / 2022
  • Generative adversarial networks (GANs) have demonstrated remarkable success in image synthesis. However, since GANs are unstable to train on large datasets, they are difficult to apply to various application fields. Single-image GANs are a line of work that generates diverse images by learning the internal distribution of a single image. In this paper, we investigate five single-image GANs: SinGAN, ConSinGAN, InGAN, DeepSIM, and One-Shot GAN. We compare the performance of each model and analyze its pros and cons.

A Novel Broadband Channel Estimation Technique Based on Dual-Module QGAN

  • Li Ting; Zhang Jinbiao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.5 / pp.1369-1389 / 2024
  • In the era of 6G, the rapid increase in communication data volume places higher demands on both traditional and deep learning-based channel estimation techniques; especially when processing large-scale data, their computational load and real-time performance often fail to meet practical requirements. To overcome this bottleneck, this paper introduces quantum computing techniques, exploring for the first time the application of Quantum Generative Adversarial Networks (QGANs) to broadband channel estimation. Although generative adversarial technology has been applied to channel estimation, obtaining instantaneous channel information remains a significant challenge. To address this, the paper proposes an innovative QGAN with a dual-module generator: the adversarial loss function and the Mean Squared Error (MSE) loss function are applied separately to update the parameters of the two modules, facilitating the learning of statistical channel information and the generation of instantaneous channel details. Experimental results on the Pennylane quantum computing simulation platform demonstrate the efficiency and accuracy of the proposed dual-module QGAN technique. This research opens a new direction for physical layer techniques in wireless communication and expands the possibilities for the future development of wireless communication technologies.
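
The dual-module update can be sketched classically as follows: one module's parameters receive only the adversarial-loss gradient (statistical channel structure) and the other's only the MSE gradient (instantaneous detail). The quantum circuits of the actual QGAN are replaced here by ordinary trainable modules, and all names are assumptions.

```python
# Classical stand-in for the dual-module generator update: separate losses
# drive separate parameter sets within one generator pipeline.
import tensorflow as tf

def dual_module_step(mod_stat, mod_inst, disc, pilot, h_true, opt_a, opt_b):
    # Adversarial update: only mod_stat's variables receive this gradient.
    with tf.GradientTape() as tape_a:
        h_gen = mod_inst(mod_stat(pilot))
        # disc is assumed to output a probability in (0, 1).
        adv_loss = -tf.reduce_mean(tf.math.log(disc(h_gen) + 1e-8))
    opt_a.apply_gradients(zip(tape_a.gradient(adv_loss, mod_stat.trainable_variables),
                              mod_stat.trainable_variables))
    # MSE update: only mod_inst's variables receive this gradient.
    with tf.GradientTape() as tape_b:
        h_gen = mod_inst(mod_stat(pilot))
        mse_loss = tf.reduce_mean(tf.square(h_gen - h_true))
    opt_b.apply_gradients(zip(tape_b.gradient(mse_loss, mod_inst.trainable_variables),
                              mod_inst.trainable_variables))
    return adv_loss, mse_loss
```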

A Substitute Model Learning Method Using Data Augmentation with a Decay Factor and Adversarial Data Generation Using Substitute Model (감쇠 요소가 적용된 데이터 어그멘테이션을 이용한 대체 모델 학습과 적대적 데이터 생성 방법)

  • Min, Jungki; Moon, Jong-sub
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.6 / pp.1383-1392 / 2019
  • Adversarial attacks, which generate adversarial data that make a target model misclassify its input, can confuse real-life applications of classification models and cause severe damage to classification systems. A black-box adversarial attack learns a substitute model whose decision boundary is similar to that of the target model and then generates adversarial data with the substitute. Jacobian-based data augmentation is used to synthesize the training data for learning substitutes, but it has the drawback that the synthesized data become more and more distorted as the training loop proceeds. We propose data augmentation with a 'decay factor' to alleviate this problem. The results show that the attack success rate of our method is around 8.5% higher than that of the existing method.
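
A sketch of Jacobian-based augmentation with a decay factor, as the abstract describes it: each round's synthetic points step along the sign of the substitute's Jacobian for the oracle-assigned class, with the step size shrunk every round so later samples distort less. The geometric decay schedule below is my assumption; the paper's exact schedule is not given here.

```python
# Sketch: Papernot-style Jacobian-based augmentation, with the step size
# lambda decayed by gamma each round to limit accumulating distortion.
import tensorflow as tf

def augment_round(substitute, x_batch, oracle_labels, lam, gamma, round_idx):
    """x_batch: float tensor (B, ...); oracle_labels: int tensor (B,)."""
    step = lam * (gamma ** round_idx)  # decayed step size for this round
    with tf.GradientTape() as tape:
        tape.watch(x_batch)
        logits = substitute(x_batch)
        # Jacobian component for each sample's oracle-assigned class.
        picked = tf.gather(logits, oracle_labels, axis=1, batch_dims=1)
    jac = tape.gradient(picked, x_batch)
    # New points move toward the class boundary; old points are kept.
    x_new = tf.clip_by_value(x_batch + step * tf.sign(jac), 0.0, 1.0)
    return tf.concat([x_batch, x_new], axis=0)
```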

Generating Audio Adversarial Examples Using a Query-Efficient Decision-Based Attack (질의 효율적인 의사 결정 공격을 통한 오디오 적대적 예제 생성 연구)

  • Seo, Seong-gwan; Mun, Hyunjun; Son, Baehoon; Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.1 / pp.89-98 / 2022
  • As deep learning technology has been applied to various fields, adversarial attack techniques, a security problem of deep learning models, have been actively studied, mainly in the image domain. Researchers have even developed complete decision-based attack techniques that need only the classification results of the model. In the audio domain, however, research has progressed relatively slowly. In this paper, we apply several decision-based attack techniques to the audio domain and improve on state-of-the-art attacks, which have the disadvantage of requiring many queries for gradient approximation. We improve query efficiency by reducing the vector search space required for gradient approximation. Experimental results show that the attack success rate increased by 50% and the difference between the original audio and the adversarial examples was reduced by 75%, demonstrating that our method can generate adversarial examples with less noise.
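
The query-saving idea can be illustrated as follows: decision-based attacks estimate a gradient direction by averaging the model's accept/reject decisions over random probes, and restricting those probes to a lower-dimensional subspace reduces the queries needed per estimate. The choice of a truncated-DCT subspace below is mine; the paper's reduced search space may be constructed differently.

```python
# Sketch: HopSkipJump-style Monte Carlo gradient estimation where each probe
# is drawn from a k-dimensional DCT subspace (k << waveform length d).
import numpy as np
from scipy.fftpack import idct

def estimate_gradient(is_adversarial, x_adv, n_probes=50, k=256, sigma=1e-3):
    """is_adversarial(audio) -> bool is the only access to the model.

    x_adv: 1-D waveform of length d, with k <= d.
    """
    d = x_adv.shape[0]
    grad = np.zeros(d)
    for _ in range(n_probes):
        coeffs = np.zeros(d)
        coeffs[:k] = np.random.randn(k)   # probe lives in a k-dim subspace
        u = idct(coeffs, norm="ortho")    # back to the waveform domain
        u /= np.linalg.norm(u)
        # Each query's binary decision votes for or against the direction.
        sign = 1.0 if is_adversarial(x_adv + sigma * u) else -1.0
        grad += sign * u
    return grad / n_probes
```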

Secure Self-Driving Car System Resistant to the Adversarial Evasion Attacks (적대적 회피 공격에 대응하는 안전한 자율주행 자동차 시스템)

  • Seungyeol Lee; Hyunro Lee; Jaecheol Ha
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.907-917 / 2023
  • Recently, self-driving cars have applied deep learning technology to advanced driver assistance systems to provide convenience to drivers, but deep learning technology has been shown to be vulnerable to adversarial evasion attacks. In this paper, we performed five adversarial evasion attacks, including MI-FGSM (Momentum Iterative Fast Gradient Sign Method), targeting the object detection algorithm YOLOv5 (You Only Look Once), and measured object detection performance in terms of mAP (mean Average Precision). In particular, we present a method that applies morphology operations so that YOLO can detect objects normally by removing noise and extracting boundaries. Experimental analysis shows that under adversarial attack, YOLO's mAP dropped by at least 7.9%, while YOLO with our proposed method applied detected objects with mAP performance of up to 87.3%.
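
The defense described in the abstract preprocesses frames with morphology operations before detection. A minimal sketch with OpenCV follows; the kernel size and the open-then-close order are assumptions, as the paper's exact operations are not given here.

```python
# Sketch: morphological opening then closing to suppress high-frequency
# adversarial noise before the frame reaches the detector.
import cv2
import numpy as np

def morphological_defense(img_bgr, ksize=3):
    kernel = np.ones((ksize, ksize), np.uint8)
    # Opening removes small bright speckles; closing fills small dark holes.
    opened = cv2.morphologyEx(img_bgr, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

# The cleaned frame would then be passed to the detector as usual, e.g.:
# detections = yolo_model(morphological_defense(frame))
```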