• Title/Summary/Keyword: Adversarial Attack

Empirical Study on Correlation between Performance and PSI According to Adversarial Attacks for Convolutional Neural Networks (컨벌루션 신경망 모델의 적대적 공격에 따른 성능과 개체군 희소 지표의 상관성에 관한 경험적 연구)

  • Youngseok Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.2 / pp.113-120 / 2024
  • The population sparseness index (PSI) is used to describe the behavior of the internal layers of artificial neural networks at the level of individual neurons, shedding light on the black-box nature of a network's internal operations. Prior research indicates a positive correlation between the PSI and performance at each layer of convolutional neural network models for image classification. In this study, we observed the internal operations of a convolutional neural network when adversarial examples were applied. The experiments revealed a similar pattern of positive correlation for adversarial examples, which were perturbed so that the model retained only 5% accuracy compared to benign data. Thus, although individual adversarial attacks differ, the PSI observed for adversarial examples showed positive correlations across layers consistent with those observed for benign data.
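
The population sparseness index that this entry correlates with layer performance can be computed directly from a layer's activations. Below is a minimal sketch using the Treves-Rolls definition of population sparseness; the activation array, batch size, and layer size are hypothetical placeholders, and the paper's exact formulation may differ.

```python
import numpy as np

def population_sparseness(activations: np.ndarray) -> float:
    """Treves-Rolls population sparseness for one stimulus.

    activations: non-negative responses of the N units in a layer
    (e.g. ReLU feature maps flattened to a vector). Returns a value
    in (0, 1]; smaller values indicate a sparser population code.
    """
    r = np.clip(activations.astype(np.float64), 0.0, None).ravel()
    if r.sum() == 0:
        return 0.0
    return (r.mean() ** 2) / np.mean(r ** 2)

# Hypothetical usage: average the per-image PSI over a batch of
# flattened layer activations with shape (batch, units).
layer_out = np.abs(np.random.randn(32, 4096))   # stand-in for real activations
psi = np.mean([population_sparseness(a) for a in layer_out])
print(f"mean PSI for this layer: {psi:.3f}")
```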

AI Security Vulnerabilities in Fully Unmanned Stores: Adversarial Patch Attacks on Object Detection Model & Analysis of the Defense Effectiveness of Data Augmentation (완전 무인 매장의 AI 보안 취약점: 객체 검출 모델에 대한 Adversarial Patch 공격 및 Data Augmentation의 방어 효과성 분석)

  • Won-ho Lee; Hyun-sik Na; So-hee Park; Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.2 / pp.245-261 / 2024
  • The COVID-19 pandemic has led to the widespread adoption of contactless transactions, resulting in a noticeable trend toward fully unmanned stores. In such stores, all operational processes are automated, primarily using artificial intelligence (AI) technology. However, this AI technology has several security vulnerabilities, which can be critical in a fully unmanned store environment. This paper analyzes the security vulnerabilities that AI-based fully unmanned stores may face, focusing in particular on the object detection model YOLO, and demonstrates that Hiding Attacks and Altering Attacks using adversarial patches are possible. We confirm that objects with adversarial patches attached may not be recognized by the detection model or may be misrecognized as other objects. Furthermore, the paper analyzes how Data Augmentation techniques can mitigate these threats by providing a defensive effect against adversarial patch attacks. Based on these results, we emphasize the need for proactive research into defensive measures against the security threats inherent in the AI technology used in fully unmanned stores.
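
The patch attacks described above work by overlaying an optimized adversarial patch on the scene before it reaches the detector. The sketch below shows only the overlay step on an image array, with hypothetical sizes and coordinates; it does not reproduce the paper's patch optimization or its YOLO setup.

```python
import numpy as np

def apply_patch(image: np.ndarray, patch: np.ndarray, top: int, left: int) -> np.ndarray:
    """Paste an adversarial patch onto an HxWx3 image in [0, 1].

    In a Hiding/Altering Attack the patch contents are optimized against
    the detector beforehand; here we only show the overlay step.
    """
    out = image.copy()
    ph, pw, _ = patch.shape
    out[top:top + ph, left:left + pw, :] = patch
    return np.clip(out, 0.0, 1.0)

# Hypothetical usage with random stand-ins for a real frame and a trained patch.
frame = np.random.rand(416, 416, 3)
patch = np.random.rand(64, 64, 3)
attacked = apply_patch(frame, patch, top=180, left=180)
```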

Improving Adversarial Robustness via Attention (Attention 기법에 기반한 적대적 공격의 강건성 향상 연구)

  • Jaeuk Kim; Myung Gyo Oh; Leo Hyun Park; Taekyoung Kwon
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.4 / pp.621-631 / 2023
  • Adversarial training improves the robustness of deep neural networks against adversarial examples. However, previous adversarial training methods focus only on the adversarial loss function and ignore the fact that even a small perturbation of the input causes a significant change in the hidden-layer features. Consequently, the accuracy of a defended model drops in untrained situations such as clean samples or other attack techniques, so an architectural perspective is needed to improve the feature representation power. In this paper, we attach an attention module that generates an attention map of the input image to a general model and perform PGD adversarial training on the augmented model. In experiments on the CIFAR-10 dataset, the attention-augmented model showed higher accuracy than the general model regardless of the network structure. In particular, the robust accuracy of our approach was consistently higher against various attacks such as PGD, FGSM, and BIM, as well as against more powerful adversaries. By visualizing the attention map, we further confirmed that the attention module extracts features of the correct class even for adversarial examples.
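
As a reference point for the PGD adversarial training used above, a minimal training loop might look like the following sketch; the model, data loader, and hyperparameters are placeholders, and the paper's attention module is not reproduced here.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate L-infinity PGD adversarial examples for a batch of images in [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """One epoch of PGD adversarial training: train on adversarial batches only."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```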

Random Noise Addition for Detecting Adversarially Generated Image Dataset (임의의 잡음 신호 추가를 활용한 적대적으로 생성된 이미지 데이터셋 탐지 방안에 대한 연구)

  • Hwang, Jeonghwan; Yoon, Ji Won
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.6 / pp.629-635 / 2019
  • In deep learning models, gradients are computed by error back-propagation, which enables the model to learn from its errors and update its parameters; taking advantage of large improvements in computing power, this can find globally (or locally) optimal parameters even for complex models. However, deliberately crafted data points can 'fool' a model and degrade its performance, such as prediction accuracy. These adversarial examples not only reduce performance but are also difficult to detect with the human eye. In this work, we propose a method to detect adversarial datasets by adding random noise. We exploit the fact that when random noise is added, the prediction accuracy on a non-adversarial dataset remains almost unchanged, whereas the accuracy on an adversarial dataset changes. In a simulation experiment, we set the attack method (FGSM, Saliency Map) and the noise level (0-19, with a maximum pixel value of 255) as independent variables and the change in prediction accuracy after adding noise as the dependent variable. We extracted a threshold that separates non-adversarial from adversarial datasets and used it to detect the adversarial dataset.
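
A minimal sketch of the detection idea described above, comparing accuracy before and after adding uniform random noise and flagging the dataset when the change exceeds a threshold; the model interface, noise level, and threshold value are hypothetical stand-ins for the values derived in the paper's simulation experiments.

```python
import numpy as np

def accuracy(model, images, labels):
    """model(images) is assumed to return class probabilities of shape (N, classes)."""
    preds = np.argmax(model(images), axis=1)
    return float(np.mean(preds == labels))

def is_adversarial_dataset(model, images, labels, noise_level=10, threshold=0.05):
    """Flag a dataset as adversarial if adding uniform noise in
    [-noise_level, noise_level] (pixel scale 0-255) changes the
    prediction accuracy by more than the chosen threshold."""
    noise = np.random.uniform(-noise_level, noise_level, size=images.shape)
    noisy = np.clip(images + noise, 0, 255)
    return abs(accuracy(model, images, labels) - accuracy(model, noisy, labels)) > threshold
```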

A Study on Adversarial Machine Learning Attacks and Defenses (적대적 머신러닝 공격과 방어기법)

  • Jemin Lee; Jae-Kyung Park
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.621-623 / 2023
  • This paper explores the field of adversarial machine learning attacks and defenses, focusing on the vulnerabilities of machine learning models and the corresponding countermeasures. It provides an in-depth analysis of adversarial attacks, which aim to deceive or manipulate machine learning models by introducing carefully crafted input data. The paper examines various types of adversarial attacks, including evasion and poisoning attacks, and investigates their potential impact on the stability and security of machine learning systems. It also proposes and evaluates various defense mechanisms and strategies for improving the robustness of machine learning models against adversarial attacks. Through extensive experiments and analysis, this paper aims to contribute to the understanding of adversarial machine learning and to provide insight into effective defense techniques.
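
As a concrete illustration of the evasion attacks surveyed above, the single-step FGSM perturbation can be sketched as follows; the model and data are placeholders, and this code is not taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """Fast Gradient Sign Method: one gradient-sign step within an
    L-infinity budget eps, applied to a batch of images in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```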

A Study on GAN-Based Hidden Adversarial Patch Generation (GAN 기반 은닉 적대적 패치 생성 기법에 관한 연구)

  • Kim, Yongsu; Kang, Hyoeun; Kim, Howon
    • Review of KIISC / v.30 no.5 / pp.71-77 / 2020
  • Deep learning shows excellent performance on image classification problems, but it is vulnerable to adversarial attacks in which an attacker manipulates the input data to deliberately cause misbehavior. Recently, research on adversarial patches, which cause deep learning models to malfunction when attached directly to an image like a sticker, has been actively conducted. However, most existing adversarial patches are highly visible, so they can be easily identified and countered when an attack actually occurs. In this study, we propose a technique that uses GANs (Generative Adversarial Networks) to generate adversarial patches that are difficult to identify. Through experiments, we attach adversarial patches generated by the proposed method to images, verify their structural similarity to the original images, and analyze their attack performance against an image classification model.
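
The sketch below illustrates the general idea of optimizing a generated patch to fool a classifier while keeping it visually inconspicuous. It is a simplified stand-in for the paper's GAN-based method: the discriminator is omitted, concealment is approximated with a pixel-space similarity penalty, and the generator architecture, patch size, and loss weight are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchGenerator(nn.Module):
    """Tiny generator mapping a noise vector to a 3x32x32 patch in [0, 1]."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

def train_step(gen, classifier, images, target_class, opt, z_dim=64, lam=10.0):
    """One optimization step: make the patched image be classified as
    target_class while keeping the patch close to the image region it
    covers (a crude surrogate for the paper's GAN-based concealment)."""
    z = torch.randn(images.size(0), z_dim)
    patch = gen(z)
    patched = images.clone()
    patched[:, :, :32, :32] = patch                  # paste the patch in the top-left corner
    target = torch.full((images.size(0),), target_class, dtype=torch.long)
    attack_loss = F.cross_entropy(classifier(patched), target)
    conceal_loss = F.mse_loss(patch, images[:, :, :32, :32])
    loss = attack_loss + lam * conceal_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```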

Generating adversarial examples on toxic comment detection (악성 댓글 탐지기에 대한 대항 예제 생성)

  • Son, Soohyun; Lee, Sangkyun
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.795-797 / 2019
  • In this paper, we propose a method to generate adversarial examples against toxicity detection neural networks. Our dataset is represented by one-hot vectors, and we constrain the attack so that only one character may be modified. The position to change is chosen where the input gradient has the largest magnitude, which corresponds to the character that most affects the model's decision. Even though this constraint is much stronger than in image-based adversarial attacks, we achieved a success rate of about 49%.
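
A minimal sketch of a gradient-guided single-character substitution of the kind described above; the model interface, one-hot encoding, and selection rule are simplified assumptions rather than the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def flip_one_character(model, one_hot, label):
    """one_hot: (seq_len, vocab) one-hot encoding of a comment; label: scalar class tensor.

    Pick the position with the largest input-gradient magnitude and replace
    its character with the one whose gradient increases the loss the most,
    pushing the prediction away from the true (toxic) label."""
    x = one_hot.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    grad = x.grad                              # (seq_len, vocab)
    pos = grad.abs().sum(dim=1).argmax()       # most influential character position
    new_char = grad[pos].argmax()              # replacement character maximizing the loss
    adv = one_hot.clone()
    adv[pos] = 0
    adv[pos, new_char] = 1
    return adv
```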

Pruning for Robustness by Suppressing High Magnitude and Increasing Sparsity of Weights

  • Cho, Incheon; Ali, Muhammad Salman; Bae, Sung-Ho
    • Journal of Broadcast Engineering / v.26 no.7 / pp.862-867 / 2021
  • Although Deep Neural Networks (DNNs) have shown remarkable performance in various artificial intelligence fields, it is well known that DNNs are vulnerable to adversarial attacks. Since adversarial attacks are implemented by adding perturbations to benign examples, increasing the sparsity of DNNs minimizes the propagation of errors to high-level layers. In this paper, unlike the traditional pruning scheme that removes low-magnitude weights, we eliminate high-magnitude weights (those with large absolute values), a scheme we call 'reverse pruning', to improve robustness. Through both theoretical and experimental analyses, we observe that reverse pruning improves the robustness of DNNs. Experimental results show that our reverse pruning outperforms previous work by 29.01% in Top-1 accuracy on perturbed CIFAR-10. However, reverse pruning does not preserve accuracy on benign samples. To relax this problem, we conducted further experiments with a regularization term on the high-magnitude weights, and with this term added we also applied conventional pruning to ensure the robustness of DNNs.
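
A minimal sketch of the reverse-pruning idea, zeroing the largest-magnitude weights instead of the smallest; the pruning ratio and the choice of layers are illustrative and not the paper's settings.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def reverse_prune(model: nn.Module, ratio: float = 0.1) -> None:
    """Zero the `ratio` fraction of weights with the LARGEST absolute value
    in every Conv2d/Linear layer, i.e. the opposite of magnitude pruning."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight
            k = int(w.numel() * ratio)
            if k == 0:
                continue
            # Values strictly above the (numel - k)-th smallest magnitude are the k largest.
            threshold = w.abs().flatten().kthvalue(w.numel() - k).values
            w[w.abs() > threshold] = 0.0
```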

Improving the Robustness of Deepfake Detection Models Against Adversarial Attacks (적대적 공격에 따른 딥페이크 탐지 모델 강화)

  • Lee, Sangyeong; Hou, Jong-Uk
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.724-726 / 2022
  • Digital crimes based on deepfakes are becoming increasingly sophisticated and are causing significant social repercussions. Meanwhile, the emergence of adversarial attacks, which induce errors in deep learning-based models, is increasing the vulnerability of deepfake detection models, which can have very serious consequences. In this study, we aim to build a robust model that is unaffected by adversarial attacks using two approaches: we demonstrate robustness against adversarial attacks through adversarial training, a model hardening technique, and through image-processing-based defenses, namely resizing and JPEG compression.
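
A minimal sketch of the image-processing defenses mentioned above, resizing and JPEG re-encoding applied to a frame before it is passed to the deepfake detector; the target size and quality factor are illustrative.

```python
from io import BytesIO
from PIL import Image

def preprocess_defense(img: Image.Image, size=(224, 224), jpeg_quality=75) -> Image.Image:
    """Weaken adversarial perturbations by resizing the frame and
    re-encoding it as JPEG before it reaches the deepfake detector."""
    img = img.resize(size)
    buffer = BytesIO()
    img.save(buffer, format="JPEG", quality=jpeg_quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")
```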

Efficient Poisoning Attack Defense Techniques Based on Data Augmentation (데이터 증강 기반의 효율적인 포이즈닝 공격 방어 기법)

  • So-Eun Jeon; Ji-Won Ock; Min-Jeong Kim; Sa-Ra Hong; Sae-Rom Park; Il-Gu Lee
    • Convergence Security Journal / v.22 no.3 / pp.25-32 / 2022
  • Recently, the image processing industry has grown rapidly as deep learning-based technology has been introduced into image recognition and detection. As deep learning technology has developed, vulnerabilities of learning models to adversarial attacks continue to be reported, but studies on countermeasures against poisoning attacks, which inject malicious data during training, remain insufficient. Conventional countermeasures against poisoning attacks are limited in that they require a separate step that inspects the training data and detects and removes poison each time. Therefore, in this paper, we propose a technique for reducing the attack success rate by applying modifications to the training and inference data, without a separate detection and removal process for the poisoned data. The One-shot Kill poison attack, a clean-label poisoning attack proposed in previous studies, was used as the attack model. Attack performance was evaluated for both a general attacker and an intelligent attacker, according to the attacker's strategy. The experimental results show that when the proposed defense mechanism is applied, the attack success rate can be reduced by up to 65% compared to the conventional method.
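
A minimal sketch of augmentation-based mitigation of the kind described above, using standard torchvision transforms on training and inference data; the specific transforms and parameters are illustrative and not the paper's configuration.

```python
from torchvision import transforms

# Random perturbations applied to every training image; the same idea can be
# applied at inference time to disturb the trigger pattern of poisoned samples.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

inference_transform = transforms.Compose([
    transforms.Resize(36),
    transforms.CenterCrop(32),
    transforms.ToTensor(),
])

# Hypothetical usage with a standard dataset wrapper:
# train_set = torchvision.datasets.CIFAR10(root="data", train=True,
#                                          transform=train_transform, download=True)
```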