• Title/Summary/Keyword: FGSM

Research of a Method of Generating an Adversarial Sample Using Grad-CAM (Grad-CAM을 이용한 적대적 예제 생성 기법 연구)

  • Kang, Sehyeok
    • Journal of Korea Multimedia Society / v.25 no.6 / pp.878-885 / 2022
  • Research in the field of computer vision based on deep learning is being actively conducted. However, deep learning-based models are vulnerable to adversarial attacks, which increase a model's misclassification rate by applying adversarial perturbations. In particular, FGSM is recognized as an effective attack method because it is simple, fast, and achieves a considerable attack success rate. Meanwhile, as one of the efforts to visualize deep learning models, Grad-CAM enables visual explanations of convolutional neural networks. In this paper, I propose a method for generating adversarial examples with a high attack success rate by applying Grad-CAM to FGSM. The method selects pixels that are closely related to the label using Grad-CAM and adds perturbations to those pixels intensively. The proposed method achieves a higher success rate than plain FGSM under the same perturbation for both targeted and untargeted examples. In addition, unlike FGSM, the noise distribution is not uniform, and when the success rate is raised by repeatedly applying noise, the attack succeeds in fewer iterations.
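
The core of the method can be sketched briefly. Below is a minimal, hypothetical PyTorch rendering of the abstract's description, not the author's code: a Grad-CAM heatmap is computed from a chosen convolutional layer, the most label-relevant pixels are kept as a mask, and the FGSM perturbation is applied only there. The layer choice, `keep` ratio, and `eps` are assumptions.

```python
import torch
import torch.nn.functional as F

def gradcam_masked_fgsm(model, target_layer, x, y, eps=0.03, keep=0.2):
    """Sketch: concentrate FGSM noise on Grad-CAM-selected pixels.
    Assumes a single-image batch x of shape (1, C, H, W)."""
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))

    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # untargeted: raise the true-label loss
    loss.backward()
    h1.remove(); h2.remove()

    # Grad-CAM: weight each feature map by its channel-averaged gradient
    w = grads["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

    # keep only the top `keep` fraction of label-relevant pixels as a mask
    k = max(1, int(keep * cam.numel()))
    mask = (cam >= cam.flatten().topk(k).values.min()).float()

    x_adv = x + eps * x.grad.sign() * mask  # FGSM step restricted to the mask
    return x_adv.clamp(0, 1).detach()
```

A targeted variant would instead step down the gradient of a loss computed against the target label.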

Enhanced Production of Galactooligosaccharides Enriched Skim Milk and Applied to Potentially Synbiotic Fermented Milk with Lactobacillus rhamnosus 4B15

  • Oh, Nam Su;Kim, Kyeongmu;Oh, Sangnam;Kim, Younghoon
    • Food Science of Animal Resources / v.39 no.5 / pp.725-741 / 2019
  • In the current study, we first investigated a method for directly transforming lactose into galacto-oligosaccharides (GOS) to manufacture low-lactose, GOS-enriched skim milk (GSM), and then evaluated its prebiotic potential by inoculating five strains of Bifidobacterium spp. In addition, fermented GSM (FGSM) was prepared using a potentially probiotic Lactobacillus strain, and its fermentation characteristics and antioxidant capacities were determined. We found that the GOS in GSM were metabolized by all five Bifidobacterium strains after incubation and promoted their growth. The levels of antioxidant activities in GSM, including radical scavenging activities and the 3-hydroxy-3-methylglutaryl-CoA reductase inhibition rate, were significantly increased by fermentation with the probiotic Lactobacillus strain. Moreover, thirty-nine featured peptides were detected in FGSM. In particular, six peptides derived from β-casein and two peptides derived from αs1-casein and κ-casein, respectively, were newly identified. Our findings indicate that GSM can potentially be used as a prebiotic substrate and that FGSM can potentially prevent oxidative stress during the production of synbiotic fermented milk in the food industry.

Adversarial Training for Grammatical Error Correction (문법 오류 교정을 위한 적대적 학습 방법)

  • Kwon, Soonchoul;Lee, Gary Geunbae
    • Annual Conference on Human and Language Technology / 2020.10a / pp.446-449 / 2020
  • Recent successful studies on grammatical error correction use complex neural network models. However, the public data available for training such models is insufficient for the need and causes overfitting. This paper explores ways to apply adversarial training to solve the overfitting problem in grammatical error correction. We experimented with the fast gradient sign method (FGSM), which uses the gradient that increases the model's loss, and the learned perturbation method (LPM), which uses a neural network to learn perturbations that increase the model's loss. The experiments showed that LPM was not effective for training the model, whereas FGSM achieved a higher F0.5 score than a model trained without adversarial training.
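
For reference, here is a minimal, hypothetical PyTorch sketch of one FGSM adversarial training step as the abstract describes it. In the paper's GEC setting the perturbation would be applied to token embeddings rather than raw inputs; `eps` and the single-step schedule are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_training_step(model, optimizer, x, y, eps=0.01):
    """One training step on FGSM-perturbed inputs."""
    # perturb the input in the gradient direction that increases the loss
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).detach()

    # train on the perturbed batch
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```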

A Study on generating adversarial examples (적대적 사례 생성 기법 동향)

  • Oh, Yu-Jin;Kim, Hyun-Ji;Lim, Se-Jin;Seo, Hwa-Jeong
    • Annual Conference of KIPS / 2021.11a / pp.580-583 / 2021
  • As artificial intelligence advances, the importance of its security grows accordingly. An adversarial attack, one of the ways to attack deep learning, is an attack that uses adversarial examples. Four representative techniques for generating adversarial examples are FGSM, which exploits the gradient of the loss function; DeepFool, which attacks by repeatedly querying the network; JSMA, which builds a saliency map of inputs and outputs; and CW, an attack based on the correlation between the noise and the original data. Beyond these, a variety of studies on generating adversarial examples are in progress. Among them, this paper reviews recent adversarial example generation research, including ABI-FGM based on FGSM, TJSMA based on JSMA, CIM, which reduces overfitting, and One Pixel, which is based on the DE algorithm.
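
Several of the surveyed methods, including ABI-FGM, build on the iterative FGSM family. As a point of reference, here is a minimal, hypothetical PyTorch sketch of the basic iterative FGSM loop (BIM/I-FGSM); the step size, budget, and iteration count are assumptions.

```python
import torch
import torch.nn.functional as F

def iterative_fgsm(model, x, y, eps=0.03, alpha=0.005, steps=10):
    """Basic iterative FGSM: repeated small FGSM steps with projection."""
    x_orig = x.clone().detach()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # project back into the eps-ball around the original image
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```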

Study on the White Noise effect Against Adversarial Attack for Deep Learning Model for Image Recognition (영상 인식을 위한 딥러닝 모델의 적대적 공격에 대한 백색 잡음 효과에 관한 연구)

  • Lee, Youngseok;Kim, Jongweon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.1 / pp.27-35 / 2022
  • In this paper, we propose a method of adding white noise to prevent misclassification of deep learning systems under adversarial attacks. The proposed method adds white noise to the input image, whether it is benign or an adversarial example. The experimental results show that the proposed method is robust to three adversarial attacks: the FGSM, BIM, and CW attacks. The recognition accuracies of ResNet models with 18, 34, 50, and 101 layers are enhanced when white noise is added to the test dataset, while the classification of the benign test dataset is not affected. The proposed method is applicable as a defense against adversarial attacks and can replace time-consuming and expensive defenses such as adversarial training and deep learning model replacement.
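
The defense itself is essentially a one-line transform. A minimal, hypothetical PyTorch sketch follows; the noise level `sigma` is an assumption, not the paper's tuned value.

```python
import torch

def add_white_noise(x, sigma=0.05):
    """Add Gaussian white noise to every input, benign or adversarial."""
    return (x + sigma * torch.randn_like(x)).clamp(0, 1)

# usage: logits = resnet18(add_white_noise(batch))
```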

A Study on the Efficacy of Edge-Based Adversarial Example Detection Model: Across Various Adversarial Algorithms

  • Jaesung Shim;Kyuri Jo
    • Journal of the Korea Society of Computer and Information / v.29 no.2 / pp.31-41 / 2024
  • Deep learning models show excellent performance in computer vision tasks such as image classification and object detection and are widely used in real industrial settings. Recently, it has been pointed out that these deep learning models are vulnerable to adversarial examples, and research on improving their robustness has been actively conducted. An adversarial example is an image to which small noise is added to induce misclassification, and it can pose a significant threat when a deep learning model is deployed in a real environment. In this paper, we examined the robustness of edge-learning classification models, and the performance of an adversarial example detection model built on them, against adversarial examples generated by various algorithms. In the robustness experiments, the basic classification model showed about 17% accuracy against the FGSM algorithm, while the edge-learning models maintained accuracy in the 60-70% range; against the PGD, DeepFool, and CW algorithms, the basic classification model showed accuracy in the 0-1% range, while the edge-learning models maintained accuracy in the 80-90% range. In the adversarial example detection experiment, a high detection rate of 91-95% was confirmed for all of the FGSM, PGD, DeepFool, and CW algorithms. By demonstrating the possibility of defending against various adversarial algorithms, this study is expected to improve the safety and reliability of deep learning models in industries that use computer vision.
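
The abstract does not specify the edge extractor, but the general recipe can be sketched: train and evaluate the classifier on edge maps rather than raw pixels. A minimal, hypothetical OpenCV version follows; the use of Canny and its thresholds are assumptions, not the paper's pipeline.

```python
import cv2
import numpy as np

def to_edge_map(image_bgr):
    """Convert an image to a normalized edge map for an edge-learning model."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)
    return edges.astype(np.float32)[None, ...] / 255.0  # 1xHxW, values in [0, 1]
```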

Security Vulnerability Verification for Open Deep Learning Libraries (공개 딥러닝 라이브러리에 대한 보안 취약성 검증)

  • Jeong, JaeHan;Shon, Taeshik
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.1 / pp.117-125 / 2019
  • Deep learning, which has recently been used in various fields, is threatened by adversarial attacks. In this paper, we experimentally verify that adversarial samples generated by malicious attackers lower the classification accuracy of image classification models. Using the MNIST dataset, we measured the detection accuracy by injecting adversarial samples into an autoencoder classification model and a CNN (convolutional neural network) classification model built with the TensorFlow and PyTorch libraries. The adversarial samples were generated by transforming the MNIST test dataset with JSMA (Jacobian-based Saliency Map Attack) and FGSM (Fast Gradient Sign Method). When they were injected into the classification models, the detection accuracy decreased by at least 21.82% and by up to 39.08%.
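
The measurement itself reduces to comparing accuracy on clean and perturbed test batches. A minimal, hypothetical PyTorch evaluation loop is sketched below; `attack` stands for any generator such as FGSM or JSMA, and its interface is an assumption.

```python
import torch

@torch.no_grad()
def accuracy(model, loader, attack=None):
    """Classification accuracy, optionally under an adversarial attack."""
    correct = total = 0
    for x, y in loader:
        if attack is not None:
            with torch.enable_grad():  # attacks need input gradients
                x = attack(model, x, y)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# accuracy drop = accuracy(model, test_loader) - accuracy(model, test_loader, fgsm)
```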

Comparison of Adversarial Example Restoration Performance of VQ-VAE Model with or without Image Segmentation (이미지 분할 여부에 따른 VQ-VAE 모델의 적대적 예제 복원 성능 비교)

  • Tae-Wook Kim;Seung-Min Hyun;Ellen J. Hong
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.194-199 / 2022
  • Preprocessing for high-quality data is required for high accuracy and usability in various complex image-data-based industries. However, when a contaminated adversarial example that combines noise with existing image or video data is introduced, it can pose a great risk to a company, so restoring the damage is necessary to ensure the company's reliability, security, and complete results. As a countermeasure, restoration has previously been performed using Defense-GAN, but it has disadvantages such as long training times and low restoration quality. To improve on this, this paper proposes a method that uses the VQ-VAE model on adversarial examples created through FGSM, with and without image segmentation. First, the generated examples are classified by a general classifier. Next, the unsegmented data is put into a pre-trained VQ-VAE model, restored, and then classified. Finally, the data divided into quadrants is put into a 4-split-VQ-VAE model, the reconstructed fragments are combined, and the result is put into the classifier. After comparing the restored results and the accuracy, the performance of the two models is analyzed according to whether or not the images are split.
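
The 4-split pipeline can be sketched compactly. Below is a minimal, hypothetical PyTorch version, assuming a trained model whose forward pass `vqvae(x)` returns the reconstruction; the actual model interface and training belong to the paper. The sketch also assumes even image height and width.

```python
import torch

@torch.no_grad()
def restore_in_quadrants(vqvae, x_adv):
    """Split into quadrants, restore each with the 4-split VQ-VAE, reassemble."""
    n, c, h, w = x_adv.shape
    h2, w2 = h // 2, w // 2
    out = torch.empty_like(x_adv)
    for i in (0, 1):
        for j in (0, 1):
            patch = x_adv[:, :, i*h2:(i+1)*h2, j*w2:(j+1)*w2]
            out[:, :, i*h2:(i+1)*h2, j*w2:(j+1)*w2] = vqvae(patch)
    return out

# label = classifier(restore_in_quadrants(vqvae4, x_adv))
```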

Improving Adversarial Robustness via Attention (Attention 기법에 기반한 적대적 공격의 강건성 향상 연구)

  • Jaeuk Kim;Myung Gyo Oh;Leo Hyun Park;Taekyoung Kwon
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.4 / pp.621-631 / 2023
  • Adversarial training improves the robustness of deep neural networks to adversarial examples. However, previous adversarial training methods focus only on the adversarial loss function, ignoring that even a small perturbation of the input layer causes a significant change in the hidden-layer features. Consequently, the accuracy of a defended model is reduced in various untrained situations, such as clean samples or other attack techniques. An architectural perspective on improving feature representation power is therefore necessary to solve this problem. In this paper, we apply an attention module that generates an attention map of the input image to a general model and perform PGD adversarial training on the augmented model. In our experiments on the CIFAR-10 dataset, the attention-augmented model showed higher accuracy than the general model regardless of the network structure. In particular, the robust accuracy of our approach was consistently higher for various attacks, such as PGD, FGSM, and BIM, and against more powerful adversaries. By visualizing the attention map, we further confirmed that the attention module extracts features of the correct class even for adversarial examples.
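
The abstract leaves the module design open; one common shape for such an attention block is a small convolutional head that produces a per-pixel map and reweights the backbone features. A minimal, hypothetical PyTorch sketch follows; the 1x1-conv design and reduction ratio are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Produce a per-pixel attention map and reweight the features."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, max(channels // 8, 1), kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(max(channels // 8, 1), 1, kernel_size=1),
            nn.Sigmoid(),              # attention map in [0, 1]
        )

    def forward(self, feats):
        return feats * self.attn(feats)  # reweight features, shape unchanged
```

In the paper's setting, the model augmented with such a module is then trained with PGD adversarial training.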

Secure Self-Driving Car System Resistant to the Adversarial Evasion Attacks (적대적 회피 공격에 대응하는 안전한 자율주행 자동차 시스템)

  • Seungyeol Lee;Hyunro Lee;Jaecheol Ha
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.907-917 / 2023
  • Recently, self-driving cars have applied deep learning technology to advanced driver assistance systems, which can provide convenience to drivers, but deep learning technology has been shown to be vulnerable to adversarial evasion attacks. In this paper, we performed five adversarial evasion attacks, including MI-FGSM (Momentum Iterative Fast Gradient Sign Method), targeting the object detection algorithm YOLOv5 (You Only Look Once), and measured the object detection performance in terms of mAP (mean Average Precision). In particular, we present a method that applies morphology operations, removing noise and extracting boundaries, so that YOLO can detect objects normally. Analyzing its performance through experiments, we found that when an adversarial attack was performed, YOLO's mAP dropped by at least 7.9%, whereas YOLO with our proposed method could detect objects with up to 87.3% mAP.
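
The proposed preprocessing can be sketched with standard morphology operators. A minimal, hypothetical OpenCV version follows; the opening-then-closing order and the kernel size are assumptions, not the paper's exact configuration.

```python
import cv2
import numpy as np

def morphological_denoise(image_bgr, ksize=3):
    """Opening removes small noise specks; closing restores object boundaries."""
    kernel = np.ones((ksize, ksize), np.uint8)
    opened = cv2.morphologyEx(image_bgr, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

# detections = yolov5_model(morphological_denoise(frame))
```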