• Title/Summary/Keyword: Adversarial defense


Perceptual Ad-Blocker Design For Adversarial Attack (적대적 공격에 견고한 Perceptual Ad-Blocker 기법)

  • Kim, Min-jae;Kim, Bo-min;Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.5 / pp.871-879 / 2020
  • Perceptual Ad-Blocking is a new ad-blocking technique that detects online advertisements with an artificial intelligence-based advertising image classification model. A recent study has shown that these Perceptual Ad-Blocking models are vulnerable to adversarial attacks, in which carefully crafted noise added to an image causes it to be misclassified. In this paper, we show that the existing Perceptual Ad-Blocking technique is vulnerable to several adversarial examples, and that Defense-GAN and MagNet, which perform well on the MNIST and CIFAR-10 datasets, also work well on an advertising dataset. Building on this, we present a new advertising image classification model that uses Defense-GAN and MagNet to remain robust against adversarial attacks. Experiments with various existing adversarial attack techniques show that the proposed approach maintains accuracy and performance through robust image classification, and furthermore provides a certain level of defense against white-box attacks by adversaries who know the details of the defense techniques.
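  A minimal sketch of the Defense-GAN-style purification the paper relies on: an adversarial ad image is projected onto the range of a pretrained generator by optimizing a latent vector, and the reconstruction is then passed to the classifier. The generator architecture, step counts, and restart strategy below are hypothetical placeholders, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):  # placeholder generator G: z -> image in [-1, 1]
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 3 * 32 * 32), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

def defense_gan_purify(x, G, z_dim=64, steps=200, lr=0.05, restarts=4):
    """Find z minimizing ||G(z) - x||^2 and return the best reconstruction G(z)."""
    best_img, best_err = None, float("inf")
    for _ in range(restarts):
        z = torch.randn(x.size(0), z_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((G(z) - x) ** 2).mean()
            loss.backward()
            opt.step()
        with torch.no_grad():
            recon = G(z)
            err = ((recon - x) ** 2).mean().item()
        if err < best_err:
            best_err, best_img = err, recon
    return best_img  # feed this purified image to the ad/no-ad classifier

if __name__ == "__main__":
    G = TinyGenerator()
    x_adv = torch.rand(1, 3, 32, 32) * 2 - 1   # stand-in for an adversarial ad image
    x_clean = defense_gan_purify(x_adv, G)
    print(x_clean.shape)
```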

Defending and Detecting Audio Adversarial Example using Frame Offsets

  • Gong, Yongkang;Yan, Diqun;Mao, Terui;Wang, Donghua;Wang, Rangding
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1538-1552 / 2021
  • Machine learning models are vulnerable to adversarial examples generated by adding a deliberately designed perturbation to a benign sample. In particular, for an automatic speech recognition (ASR) system, a benign audio clip that sounds normal can be decoded as a harmful command under an adversarial attack. In this paper, we focus on countermeasures against audio adversarial examples. By analyzing the characteristics of ASR systems, we find that frame offsets, introduced by appending a silence clip at the beginning of an audio sample, degrade adversarial perturbations into ordinary noise. For different scenarios, we exploit frame offsets with different strategies: defense, detection, and a hybrid of the two. Compared with previous methods, the proposed method defends against audio adversarial examples in a simpler, more generic, and more efficient way. Evaluated against three state-of-the-art adversarial attacks on different ASR systems, the experimental results demonstrate that the proposed method effectively improves the robustness of ASR systems.
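  A minimal sketch of the frame-offset idea: prepending a short silence clip shifts the framing of the waveform, so carefully aligned adversarial perturbations lose their effect, and disagreement between transcriptions at different offsets can flag an attack. The offsets, the dummy ASR callable, and the detection rule below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def apply_frame_offset(waveform: np.ndarray, offset_samples: int) -> np.ndarray:
    """Prepend `offset_samples` of silence before decoding with the ASR system."""
    return np.concatenate([np.zeros(offset_samples, dtype=waveform.dtype), waveform])

def detect_by_offset(waveform, asr_decode, offsets=(0, 80, 160, 240)):
    """Hypothetical detection strategy: if transcriptions change across small
    offsets, flag the audio as a likely adversarial example."""
    transcripts = [asr_decode(apply_frame_offset(waveform, o)) for o in offsets]
    return len(set(transcripts)) > 1, transcripts

if __name__ == "__main__":
    fake_audio = np.random.randn(16000).astype(np.float32)       # 1 s at 16 kHz
    dummy_asr = lambda w: "hello world" if len(w) % 2 == 0 else "hello word"
    flagged, outs = detect_by_offset(fake_audio, dummy_asr)
    print(flagged, outs)
```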

Camouflaged Adversarial Patch Attack on Object Detector (객체탐지 모델에 대한 위장형 적대적 패치 공격)

  • Jeonghun Kim;Hunmin Yang;Se-Yoon Oh
    • Journal of the Korea Institute of Military Science and Technology / v.26 no.1 / pp.44-53 / 2023
  • Adversarial attacks have received great attention for their capacity to distract state-of-the-art neural networks by modifying objects in the physical domain. Patch-based attacks in particular have attracted much attention because they are effective to optimize and can be applied to arbitrary objects to attack neural network-based object detectors. However, despite their strong attack performance, the generated patches are highly perceptible to humans, violating the fundamental assumption of adversarial examples. In this paper, we propose a camouflaged adversarial patch optimization method that uses military camouflage assessment metrics for naturalistic patch attacks. We also investigate camouflaged attack loss functions and the application of various camouflaged patches to army tank images, and validate the proposed approach with extensive experiments attacking the YOLOv5 detection model. Our method produces more natural and realistic-looking camouflaged patches while achieving competitive attack performance.
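  A minimal sketch of camouflaged-patch optimization in the spirit of this paper: the patch is updated to lower the detector's objectness while a camouflage term keeps it close to a reference camouflage texture. The detector callable, the loss weighting, and the patch placement are stand-in assumptions, not the paper's camouflage assessment metrics or YOLOv5 pipeline.

```python
import torch
import torch.nn.functional as F

def apply_patch(images, patch, x0=48, y0=48):
    """Paste the patch onto a fixed location of each image (illustrative placement)."""
    out = images.clone()
    ph, pw = patch.shape[-2:]
    out[:, :, y0:y0 + ph, x0:x0 + pw] = patch.clamp(0, 1)
    return out

def optimize_camouflaged_patch(detector, images, camo_ref, steps=300, lr=0.01, lam=0.5):
    patch = camo_ref.clone().requires_grad_(True)    # start from a camouflage texture
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(images, patch)
        obj_scores = detector(patched)               # assumed: mean objectness per image
        attack_loss = obj_scores.mean()              # suppress detections
        camo_loss = F.mse_loss(patch, camo_ref)      # stay close to the camouflage pattern
        loss = attack_loss + lam * camo_loss
        opt.zero_grad(); loss.backward(); opt.step()
    return patch.detach().clamp(0, 1)

if __name__ == "__main__":
    dummy_detector = lambda x: x.mean(dim=(1, 2, 3))     # stand-in for a YOLOv5 objectness score
    imgs = torch.rand(2, 3, 128, 128)
    camo = torch.rand(3, 32, 32)
    p = optimize_camouflaged_patch(dummy_detector, imgs, camo, steps=50)
    print(p.shape)
```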

Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks (국방용 합성이미지 데이터셋 생성을 위한 대립훈련신경망 기술 적용 연구)

  • Yang, Hunmin
    • Journal of the Korea Institute of Military Science and Technology / v.22 no.1 / pp.49-59 / 2019
  • Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to model high-dimensional and complex data distributions implicitly and to generate new data samples from the model distribution. This paper investigates the training methodology, architecture, and various applications of generative adversarial networks. Experimental evaluation is also conducted on generating synthetic image datasets for defense applications using two types of GANs. The first is military image generation using the deep convolutional generative adversarial network (DCGAN). The other is visible-to-infrared image translation using the cycle-consistent generative adversarial network (CycleGAN). Each model yields a great diversity of high-fidelity synthetic images compared to the training images. This result opens up the possibility of using inexpensive synthetic images to train neural networks while avoiding the enormous expense of collecting large amounts of hand-annotated real data.
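  A minimal sketch of a DCGAN-style generator of the kind used here for military image synthesis: a latent vector is upsampled to an RGB image through transposed convolutions. The layer sizes and output resolution are illustrative, not the paper's exact architecture or training configuration.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Maps a latent vector z to a 64x64 RGB image via transposed convolutions."""
    def __init__(self, z_dim=100, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),           # 4x4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),           # 8x8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),           # 16x16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf), nn.ReLU(True),               # 32x32
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                        # 64x64 RGB in [-1, 1]
        )
    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

if __name__ == "__main__":
    G = DCGANGenerator()
    fake = G(torch.randn(4, 100))
    print(fake.shape)   # torch.Size([4, 3, 64, 64])
```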

Adversarial Attacks for Deep Learning-Based Infrared Object Detection (딥러닝 기반 적외선 객체 검출을 위한 적대적 공격 기술 연구)

  • Kim, Hoseong;Hyun, Jaeguk;Yoo, Hyunjung;Kim, Chunho;Jeon, Hyunho
    • Journal of the Korea Institute of Military Science and Technology / v.24 no.6 / pp.591-601 / 2021
  • Recently, infrared object detection (IOD) has been extensively studied due to the rapid growth of deep neural networks (DNNs). Adversarial attacks using imperceptible perturbations can dramatically deteriorate the performance of DNNs. However, most work on adversarial attacks has focused on visible image recognition (VIR), and there are few methods for IOD. We propose deep learning-based adversarial attacks for IOD by extending several state-of-the-art adversarial attacks for VIR. We validate our claims through comprehensive experiments on two challenging IOD datasets, FLIR and MSOD.
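  A minimal sketch of the kind of attack being extended here: a standard projected gradient descent (PGD) perturbation applied to single-channel infrared input to raise a detector's loss. The stand-in detection loss, step sizes, and budgets are assumptions for illustration, not the paper's attacks or its FLIR/MSOD setup.

```python
import torch

def pgd_attack(model_loss, x, eps=8/255, alpha=2/255, steps=10):
    """model_loss(x) should return the detection loss to *increase* (untargeted PGD)."""
    x_adv = x.clone().detach()
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = model_loss(x_adv)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                        # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)      # project to eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

if __name__ == "__main__":
    ir_batch = torch.rand(2, 1, 256, 256)                   # single-channel IR images
    dummy_det_loss = lambda x: (x ** 2).mean()               # stand-in detection loss
    adv = pgd_attack(dummy_det_loss, ir_batch)
    print((adv - ir_batch).abs().max().item())
```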

High Representation based GAN defense for Adversarial Attack

  • Sutanto, Richard Evan;Lee, Suk Ho
    • International journal of advanced smart convergence / v.8 no.1 / pp.141-146 / 2019
  • These days, many applications use neural networks as parts of their systems. At the same time, adversarial examples have become an important issue concerning the security of neural networks: a neural network classifier can be fooled into misclassifying inputs by adversarial examples. Much research has countered adversarial examples with denoising methods, some of which use a GAN (Generative Adversarial Network) to remove adversarial noise from input images. By producing from the generator network an image close enough to the original clean image, the effect of adversarial examples can be reduced. However, because adversarial noise is unlike normal noise, there is a chance that it survives the approximation process. For this case, we propose an approach that utilizes high-level representations in the classifier by combining a GAN with a trained U-Net network. The approach minimizes a loss on high-level representation terms, so as to minimize the difference between the high-level representation of the clean data and that of the approximated output of the noisy data in the training dataset. Furthermore, the generated output is checked for minimal error against the true label: the U-Net is trained with the true labels to ensure that the generated output yields minimal error in the end. Finally, the adversarial noise that remains after the low-level approximation can be removed by the U-Net, thanks to the minimization of the high-level representation terms.
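  A minimal sketch of the high-representation idea: the denoising network is trained to match not only the pixels of the clean image but also the classifier's high-level feature activations, so surviving adversarial noise is suppressed where it matters for classification. The networks and loss weights below are illustrative stand-ins for the paper's GAN + U-Net pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):            # stand-in for the generator / U-Net stage
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))
    def forward(self, x):
        return torch.sigmoid(self.net(x))

def high_representation_loss(denoiser, feat_extractor, x_adv, x_clean, lam=1.0):
    """Pixel-level loss plus distance between high-level representations of the
    denoised output and the clean image."""
    x_hat = denoiser(x_adv)
    pixel_loss = F.mse_loss(x_hat, x_clean)
    feat_loss = F.mse_loss(feat_extractor(x_hat), feat_extractor(x_clean))
    return pixel_loss + lam * feat_loss

if __name__ == "__main__":
    denoiser = TinyDenoiser()
    feat = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(), nn.Flatten())
    x_clean = torch.rand(2, 3, 32, 32)
    x_adv = (x_clean + 0.05 * torch.randn_like(x_clean)).clamp(0, 1)
    loss = high_representation_loss(denoiser, feat, x_adv, x_clean)
    loss.backward()
    print(loss.item())
```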

BM3D and Deep Image Prior based Denoising for the Defense against Adversarial Attacks on Malware Detection Networks

  • Sandra, Kumi;Lee, Suk-Ho
    • International journal of advanced smart convergence / v.10 no.3 / pp.163-171 / 2021
  • Recently, machine learning-based visualization approaches have been proposed to combat the problem of malware detection. Unfortunately, these techniques are exposed to adversarial examples: noise that can deceive a deep learning-based malware detection network so that the malware becomes unrecognizable. To address this shortcoming, we present a denoising technique based on the Block-Matching and 3D filtering (BM3D) algorithm and the deep image prior to defend against adversarial examples on visualization-based malware detection systems. The BM3D-based denoising first eliminates most of the adversarial noise; the deep-image-prior-based denoising then removes the remaining subtle noise. Experimental results on the MS BIG malware dataset and benign samples show that the proposed denoising-based defense recovers, to some extent, the performance of the adversarially attacked CNN model for malware detection.
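  A minimal sketch of the two-stage defense described above: BM3D removes most of the adversarial noise from the malware visualization image, then a deep-image-prior network is fitted to the BM3D output to suppress the remaining subtle noise. This assumes the third-party `bm3d` PyPI package; the CNN shape and iteration count are illustrative, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn
import bm3d  # pip install bm3d (assumed available)

def dip_refine(image_np, iters=500, lr=0.01):
    """Fit a small CNN (deep image prior) from fixed random input to the BM3D output;
    the fitted output acts as an additional denoised estimate."""
    target = torch.from_numpy(image_np).float().unsqueeze(0).unsqueeze(0)
    net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
    z = torch.rand_like(target)                       # fixed random input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = ((net(z) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return net(z).detach().squeeze().numpy()

def denoise_defense(adv_gray_image, sigma=0.1):
    """adv_gray_image: 2-D float array in [0, 1] (malware byteplot image)."""
    stage1 = bm3d.bm3d(adv_gray_image, sigma_psd=sigma)    # coarse denoising
    stage2 = dip_refine(np.clip(stage1, 0.0, 1.0))         # residual noise removal
    return stage2

if __name__ == "__main__":
    img = np.random.rand(64, 64).astype(np.float32)
    print(denoise_defense(img).shape)
```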

StarGAN-Based Detection and Purification Studies to Defend against Adversarial Attacks (적대적 공격을 방어하기 위한 StarGAN 기반의 탐지 및 정화 연구)

  • Sungjune Park;Gwonsang Ryu;Daeseon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.3 / pp.449-458 / 2023
  • Artificial intelligence provides convenience in various fields using big data and deep learning technologies. However, deep learning is highly vulnerable to adversarial examples, which can cause misclassification in classification models. This study proposes a method to detect and purify various adversarial attacks using StarGAN. The proposed method trains a StarGAN model with an added categorical entropy loss on adversarial examples generated by various attack methods, enabling the discriminator to detect adversarial examples and the generator to purify them. Experimental results on the CIFAR-10 dataset showed an average detection performance of approximately 68.77%, an average purification performance of approximately 72.20%, and an average defense performance of approximately 93.11%, derived from the restoration and detection performance.
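  A minimal sketch of one training step in a StarGAN-style defense loop of this kind: the discriminator learns to separate clean from adversarial inputs (detection), and the generator learns to map adversarial inputs back toward clean images (purification) with an added entropy term on the downstream classifier's prediction. The networks, weights, and the exact form of the categorical entropy loss are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def defense_training_step(G, D, clf, x_clean, x_adv, opt_g, opt_d, lam=0.1):
    # --- discriminator step: detect adversarial vs. clean inputs ---
    d_real, d_fake = D(x_clean), D(x_adv)
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator step: purify the adversarial input; penalize uncertain predictions ---
    x_pur = G(x_adv)
    rec_loss = F.l1_loss(x_pur, x_clean)                   # purification toward clean image
    probs = F.softmax(clf(x_pur), dim=1)
    cat_entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    g_loss = rec_loss + lam * cat_entropy
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid())
    D = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1), nn.Sigmoid())
    clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    x_clean = torch.rand(4, 3, 32, 32)
    x_adv = (x_clean + 0.03 * torch.randn_like(x_clean)).clamp(0, 1)
    print(defense_training_step(G, D, clf, x_clean, x_adv, opt_g, opt_d))
```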

Adversarial Attacks and Defense Strategy in Deep Learning

  • Sarala D.V;Thippeswamy Gangappa
    • International Journal of Computer Science & Network Security / v.24 no.1 / pp.127-132 / 2024
  • With the rapid evolution of the Internet, artificial intelligence is applied ever more widely, and the era of AI has arrived. At the same time, adversarial attacks in the AI field have become frequent, so research into security against adversarial attacks is urgent, and an increasing number of researchers are working in this field. We provide a comprehensive review of the theories and methods that enable researchers to enter the field of adversarial attacks. The article follows a "Why? → What? → How?" line of exposition. First, we explain the significance of adversarial attacks. Then, we introduce their concepts, types, and hazards. Finally, we review the typical attack algorithms and defense techniques in each application area. Facing increasingly complex neural network models, this paper focuses on the fields of images, text, and malicious code, and on the adversarial attack classifications and methods for these three data types, so that researchers can quickly find their own type of study. At the end of this review, we also raise some discussions and open issues and compare this review with similar ones.
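  For orientation, one of the typical attack algorithms that surveys of this kind cover is the fast gradient sign method (FGSM); the sketch below is a standard, minimal rendering of it and is not code from the reviewed paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """Single-step attack: perturb x in the direction of the loss gradient's sign."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(2, 3, 32, 32)
    y = torch.tensor([1, 7])
    x_adv = fgsm(model, x, y)
    print((x_adv - x).abs().max().item())
```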

A Research on Adversarial Example-based Passive Air Defense Method against Object Detectable AI Drone (객체인식 AI적용 드론에 대응할 수 있는 적대적 예제 기반 소극방공 기법 연구)

  • Simun Yuk;Hweerang Park;Taisuk Suh;Youngho Cho
    • Journal of Internet Computing and Services / v.24 no.6 / pp.119-125 / 2023
  • The Ukraine-Russia war has prompted a reassessment of the military importance of drones, and North Korea completed a practical demonstration through a drone provocation against South Korea in 2022. Furthermore, North Korea is actively integrating artificial intelligence (AI) technology into drones, heightening the threat they pose. In response, the Republic of Korea military has established the Drone Operations Command (DOC) and implemented various drone defense systems. However, there is a concern that capability-enhancement efforts are disproportionately focused on strike systems, making it difficult to effectively counter swarm drone attacks. In particular, air force bases located adjacent to urban areas face significant limitations on the use of traditional air defense weapons because of concerns about civilian casualties. Therefore, this study proposes a new passive air defense method that disrupts the object detection capability of AI models in order to enhance the survivability of friendly aircraft against the threat posed by AI-based swarm drones. Using laser-based adversarial examples, the study seeks to degrade the recognition accuracy of the object recognition AI installed on enemy drones. Experimental results using synthetic images and precision-reduced models confirmed that the proposed method decreased the recognition accuracy of the object recognition AI from approximately 95% to around 0-15%, validating its effectiveness.
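  A minimal sketch of the passive-defense evaluation described above: a simulated laser pattern is overlaid on the image seen by the drone's detector, and detection confidence before and after the overlay is compared. The laser model, the dummy detector score, and the stripe positions are illustrative assumptions, not the paper's synthetic-image experiments.

```python
import numpy as np

def add_laser_stripe(image, center_row, width=6, color=(0.0, 1.0, 0.1), alpha=0.8):
    """Blend a bright horizontal laser-like stripe into an HxWx3 float image in [0, 1]."""
    out = image.copy()
    r0 = max(center_row - width // 2, 0)
    r1 = min(center_row + width // 2, image.shape[0])
    out[r0:r1, :, :] = (1 - alpha) * out[r0:r1, :, :] + alpha * np.array(color)
    return np.clip(out, 0.0, 1.0)

def confidence_drop(detector_conf, image, rows=(40, 80, 120)):
    """Report detector confidence on the clean image and on laser-perturbed variants."""
    base = detector_conf(image)
    perturbed = [detector_conf(add_laser_stripe(image, r)) for r in rows]
    return base, perturbed

if __name__ == "__main__":
    img = np.random.rand(160, 160, 3).astype(np.float32)
    dummy_conf = lambda im: float(1.0 - im[:, :, 1].mean())   # stand-in for the drone's detector score
    print(confidence_drop(dummy_conf, img))
```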