• Title/Summary/Keyword: 적대적 공격 (adversarial attack)


Study on Neuron Activities for Adversarial Examples in Convolutional Neural Network Model by Population Sparseness Index (개체군 희소성 인덱스에 의한 컨벌루션 신경망 모델의 적대적 예제에 대한 뉴런 활동에 관한 연구)

  • Youngseok Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.1 / pp.1-7 / 2023
  • Convolutional neural networks have been applied to image-processing tasks in many fields, sometimes exceeding human visual capabilities. However, they are exposed to a severe risk of performance degradation from adversarial attacks, and defense techniques that counter one type of attack are often vulnerable to others. To respond to adversarial attacks, it is therefore necessary to analyze how an adversarial example degrades performance as it propagates through the layers of the network. In this study, adversarial attacks on the AlexNet and VGG11 models were analyzed using the population sparseness index, a measure of neuronal activity used in neurophysiology. In every layer, the population sparseness index computed for adversarial examples differed from that computed for benign examples. (A minimal sketch of the index computation follows.)
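
The population sparseness index mentioned in this abstract is, in neurophysiology, commonly computed as the Treves-Rolls ratio S = (Σᵢ rᵢ/N)² / (Σᵢ rᵢ²/N) over the non-negative responses rᵢ of N units. Below is a minimal Python/PyTorch sketch, not the authors' code, that computes this index per layer of a torchvision AlexNet; the choice of ReLU outputs as "neuron activities", the pretrained-weights name, and the random stand-ins for benign/adversarial batches are all assumptions.

```python
import torch
from torchvision.models import alexnet

def population_sparseness(acts: torch.Tensor) -> torch.Tensor:
    """Treves-Rolls population sparseness per sample; acts: (batch, ...) activations."""
    r = acts.clamp(min=0).flatten(1)          # treat ReLU outputs as firing rates
    n = r.shape[1]
    numerator = (r.sum(dim=1) / n) ** 2
    denominator = (r ** 2).sum(dim=1) / n
    return numerator / (denominator + 1e-12)

model = alexnet(weights="IMAGENET1K_V1").eval()   # weights name assumes a recent torchvision
activations = {}

def make_hook(name):
    def hook(_module, _inputs, output):
        activations[name] = output.detach()
    return hook

# Record the output of every ReLU in the convolutional feature extractor.
for idx, layer in enumerate(model.features):
    if isinstance(layer, torch.nn.ReLU):
        layer.register_forward_hook(make_hook(f"features.{idx}"))

# Stand-ins only: random "benign" images and a noisy copy standing in for adversarial ones.
x_benign = torch.rand(8, 3, 224, 224)
x_adv = (x_benign + 0.03 * torch.randn_like(x_benign)).clamp(0, 1)

for tag, batch in [("benign", x_benign), ("adversarial", x_adv)]:
    activations.clear()
    with torch.no_grad():
        model(batch)
    for layer_name, act in activations.items():
        psi = population_sparseness(act).mean().item()
        print(f"{tag:11s} {layer_name:12s} PSI = {psi:.3f}")
```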

Perceptual Ad-Blocker Design For Adversarial Attack (적대적 공격에 견고한 Perceptual Ad-Blocker 기법)

  • Kim, Min-jae; Kim, Bo-min; Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.5 / pp.871-879 / 2020
  • Perceptual Ad-Blocking is a new advertisement-blocking technique that detects online advertising with an AI-based ad-image classification model. A recent study showed that such Perceptual Ad-Blocking models are vulnerable to adversarial attacks, which add small perturbations to images so that they are misclassified. In this paper, we show that the existing Perceptual Ad-Blocking technique is weak against several kinds of adversarial examples, and that Defense-GAN and MagNet, which perform well on the MNIST and CIFAR-10 datasets, also work well on an advertising dataset. Using the Defense-GAN and MagNet techniques, we then present a new ad-image classification model that is robust to adversarial attacks. Experiments with various existing adversarial attack techniques show that the proposed approach preserves accuracy through robust image classification and even defends, to a certain level, against white-box attacks by adversaries who know the details of the defense. (A MagNet-style purification sketch follows.)
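
MagNet, cited in this abstract, purifies inputs with a small autoencoder (a "reformer") and flags inputs whose reconstruction error is large. The following Python/PyTorch sketch illustrates that general idea only; the autoencoder architecture, the detection threshold, and the `classifier` interface are assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class Reformer(nn.Module):
    """MagNet-style denoising autoencoder used to 'purify' inputs (illustrative architecture)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def detect_adversarial(reformer: Reformer, x: torch.Tensor, threshold: float = 0.01):
    """MagNet-style detector: flag inputs whose reconstruction error exceeds a threshold."""
    with torch.no_grad():
        error = ((reformer(x) - x) ** 2).flatten(1).mean(dim=1)
    return error > threshold

def classify_with_defense(classifier: nn.Module, reformer: Reformer, x: torch.Tensor):
    """Reconstruct (purify) the input first, then classify the reconstruction."""
    with torch.no_grad():
        return classifier(reformer(x)).argmax(dim=1)
```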

Study on the White Noise effect Against Adversarial Attack for Deep Learning Model for Image Recognition (영상 인식을 위한 딥러닝 모델의 적대적 공격에 대한 백색 잡음 효과에 관한 연구)

  • Lee, Youngseok; Kim, Jongweon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.1 / pp.27-35 / 2022
  • In this paper, we propose adding white noise to inputs to prevent misclassification of deep learning systems under adversarial attack. The proposed method adds white noise to every input image, whether benign or adversarial. Experimental results show that the method is robust to three adversarial attacks: FGSM, BIM, and CW. The recognition accuracy of ResNet models with 18, 34, 50, and 101 layers improves on adversarial test data when white noise is added, while classification of the benign test set is not affected. The proposed method can serve as a defense against adversarial attacks and replace time-consuming and expensive defenses such as adversarial training or replacing the deep learning model. (A minimal sketch of the noise-addition defense follows.)
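
The defense described here is simple enough to sketch directly: Gaussian white noise is added to every test image before classification. In the Python sketch below, the noise standard deviation `sigma`, the use of ResNet-18, and the pretrained-weights name are assumptions.

```python
import torch
from torchvision.models import resnet18

def classify_with_white_noise(model, x, sigma=0.05):
    """Add zero-mean Gaussian (white) noise to the input, then classify."""
    noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
    with torch.no_grad():
        return model(noisy).argmax(dim=1)

model = resnet18(weights="IMAGENET1K_V1").eval()  # weights name assumes a recent torchvision
x = torch.rand(4, 3, 224, 224)                    # stand-in for a test batch
print(classify_with_white_noise(model, x))
```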

Query-Efficient Black-Box Adversarial Attack Methods on Face Recognition Model (얼굴 인식 모델에 대한 질의 효율적인 블랙박스 적대적 공격 방법)

  • Seo, Seong-gwan; Son, Baehoon; Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.6 / pp.1081-1090 / 2022
  • Face recognition models are used for identity verification on smartphones, providing convenience to many users. As a result, security review of DNN models is becoming important, and adversarial attacks are a well-known vulnerability of such models. Adversarial attacks have evolved into decision-based techniques that perform the attack using only the recognition results of the deep learning model. However, the existing decision-based attack technique [14] requires a large number of queries to generate adversarial examples, most of them spent approximating the gradient. In this paper, we therefore propose a method of generating adversarial examples using orthogonal-space sampling and dimensionality-reduction sampling, which avoids wasting the queries that the existing technique [14] consumes on gradient approximation. Experiments show that our method reduces the perturbation size of adversarial examples by a factor of about 2.4 and increases the attack success rate by 14% compared with the existing technique [14], demonstrating superior attack performance. (A sketch of query-efficient direction estimation follows.)
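
The abstract describes a decision-based (hard-label) attack in which random perturbations are sampled in a reduced space so that the gradient direction can be estimated with fewer queries. The Python sketch below illustrates only that general idea, a Monte-Carlo direction estimate with low-resolution samples upsampled to image size; it is not the authors' algorithm, and the query budget, step size `delta`, and low-resolution size are assumptions.

```python
import torch
import torch.nn.functional as F

def is_adversarial(model, x, true_label):
    """Hard-label oracle: one query, True if the model misclassifies x."""
    with torch.no_grad():
        return model(x).argmax(dim=1).item() != true_label

def estimate_direction(model, x_boundary, true_label,
                       n_queries=50, low_res=8, delta=0.01):
    """Monte-Carlo estimate of the boundary normal using n_queries hard-label queries."""
    _, c, h, w = x_boundary.shape
    direction = torch.zeros_like(x_boundary)
    for _ in range(n_queries):
        u_low = torch.randn(1, c, low_res, low_res)                # low-dimensional sample
        u = F.interpolate(u_low, size=(h, w), mode="bilinear",
                          align_corners=False)                     # upsample to image size
        u = u / u.norm()
        sign = 1.0 if is_adversarial(model, x_boundary + delta * u, true_label) else -1.0
        direction += sign * u
    return direction / n_queries
```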

Improving the Robustness of Deepfake Detection Models Against Adversarial Attacks (적대적 공격에 따른 딥페이크 탐지 모델 강화)

  • Lee, Sangyeong; Hou, Jong-Uk
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.724-726 / 2022
  • Digital crime based on deepfakes is growing more sophisticated by the day and is having a serious social impact. At the same time, the emergence of adversarial attacks that induce errors in deep-learning-based models is making deepfake detection models increasingly vulnerable, with potentially critical consequences. This study aims to build a robust model that is unaffected by adversarial attacks, using two approaches: adversarial training, a model-hardening technique, and image-processing-based defenses, namely resizing and JPEG compression, whose robustness against adversarial attacks is demonstrated. (A sketch of the image-processing defenses follows.)
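
The two image-processing defenses named above, resizing and JPEG compression, can be sketched as simple preprocessing steps applied to each frame before the deepfake detector sees it. In the Python sketch below, the down/up-scaling factor and the JPEG quality are assumed hyperparameters, not values from the paper.

```python
import io
import torch
from PIL import Image
from torchvision import transforms

to_pil = transforms.ToPILImage()
to_tensor = transforms.ToTensor()

def resize_defense(x: torch.Tensor, scale: float = 0.5) -> torch.Tensor:
    """Down- then up-scale one frame (C, H, W in [0, 1]) to blur fine perturbations."""
    img = to_pil(x)
    w, h = img.size
    small = img.resize((int(w * scale), int(h * scale)), Image.BILINEAR)
    return to_tensor(small.resize((w, h), Image.BILINEAR))

def jpeg_defense(x: torch.Tensor, quality: int = 75) -> torch.Tensor:
    """Round-trip one frame through lossy JPEG compression."""
    buffer = io.BytesIO()
    to_pil(x).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return to_tensor(Image.open(buffer).convert("RGB"))
```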

A Study on Adversarial AI Attack and Defense Techniques (적대적 AI 공격 및 방어 기법 연구)

  • Mun, Hyun-Jeong; Oh, Gyu-Tae; Yu, Eun-Seong; Im, Jeong-yoon; Shin, Jin-Young; Lee, Gyu-Young
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.1022-1024 / 2022
  • As artificial intelligence technology has recently advanced rapidly and spread widely, a variety of attacks targeting machine learning systems have begun to appear. Although AI has many strengths, it can be vulnerable to deliberate manipulation and therefore carries new risks that did not exist before. In this paper, we crafted adversarial attack samples for each data type and implemented effective defenses against them. Adversarial training was applied to defend against adversarial sample attacks on image and text data, and the results confirmed that the models acquired immunity to the attacks. (An FGSM-style sample-generation sketch follows.)
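
One common way such adversarial samples are crafted for image data is FGSM, sketched below in Python/PyTorch; the papers in this listing do not specify their exact attack settings, so the single-step method and the epsilon value here are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    """Single-step FGSM: perturb x along the sign of the input gradient of the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```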

A Study on AI Vaccine for the Defense against Adversarial Attack (적대적 공격의 방어를 위한 AI 백신 연구)

  • Song, Chae-Won; Oh, Seung-A; Jeong, Da-Yae; Lim, Yuri; Rho, Eun-Ji; Lee, Gyu-Young
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.1132-1135 / 2021
  • In this paper, we crafted adversarial samples that can cause serious errors in machine learning systems and developed an adversarial-training-based "AI vaccine" that effectively prevents and defends against adversarial attacks using such samples. Experiments demonstrate that the proposed AI vaccine correctly recognizes adversarial samples and significantly lowers the success rate of AI attacks, thereby securing robustness. In addition, we implemented a smartphone application that displays the results, so that the severity of adversarial AI attacks and the corresponding defense process can be clearly understood through education and demonstrations. (An adversarial-training sketch follows.)
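
Adversarial training, the basis of the "AI vaccine" described above, mixes adversarial examples generated on the fly into each training batch. The Python/PyTorch sketch below crafts FGSM examples inline, as in the previous sketch; the 50/50 clean/adversarial loss weighting and the attack strength are assumptions.

```python
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=8 / 255):
    """One epoch of adversarial training: train on clean and FGSM-perturbed batches."""
    model.train()
    for x, y in loader:
        # Craft FGSM adversarial examples on the fly (same single-step attack as sketched above).
        x_req = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_req), y).backward()
        x_adv = (x_req + epsilon * x_req.grad.sign()).clamp(0.0, 1.0).detach()

        optimizer.zero_grad()          # discard gradients accumulated while crafting x_adv
        loss = 0.5 * F.cross_entropy(model(x), y) + \
               0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```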

Research Trends of Adversarial Attacks in Image Segmentation (Segmentation 기반 적대적 공격 동향 조사)

  • Hong, Yoon-Young; Shin, Yeong-Jae; Choi, Chang-Woo; Kim, Ho-Won
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.631-634 / 2022
  • Image segmentation using deep learning is one of the core areas of computer vision. As segmentation techniques are applied in more domains, defenses against, and robustness to, adversarial attacks that cause deep learning networks to malfunction are increasingly required; adversarial attacks draw particular attention in areas such as autonomous driving and disease analysis, where model security vulnerabilities can lead to serious accidents. This paper explains how segmentation techniques are distinguished, describes the direction of recently studied adversarial attacks, and outlines the research topics currently under close review to improve the efficiency of future computer vision research.

A Substitute Model Learning Method Using Data Augmentation with a Decay Factor and Adversarial Data Generation Using Substitute Model (감쇠 요소가 적용된 데이터 어그멘테이션을 이용한 대체 모델 학습과 적대적 데이터 생성 방법)

  • Min, Jungki; Moon, Jong-sub
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.6 / pp.1383-1392 / 2019
  • An adversarial attack, which generates adversarial data to make a target model misclassify its input, can confuse real-world applications of classification models and cause severe damage to classification systems. A black-box adversarial attack first learns a substitute model whose decision boundary is similar to that of the target model, and then generates adversarial data with the substitute. Jacobian-based data augmentation is used to synthesize the training data for the substitute, but it has the drawback that the synthesized data become more and more distorted as the training loop proceeds. We propose data augmentation with a "decay factor" to alleviate this problem. The results show that the attack success rate of our method is about 8.5% higher than that of the existing method. (A sketch of decay-factor augmentation follows.)
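
A hedged sketch of Jacobian-based data augmentation with a decay factor, as the abstract describes it: at substitute-training round t, the synthetic points are moved along the sign of the substitute's input gradient with a step that shrinks by a factor `decay` each round. The step size, decay value, and function interface below are assumptions, not the paper's exact procedure.

```python
import torch

def jacobian_augment(substitute, x, oracle_labels, round_idx, step=0.1, decay=0.7):
    """Synthesize new points x + lam * sign(dF_y/dx) with lam = step * decay**round_idx."""
    lam = step * (decay ** round_idx)                 # decay factor shrinks the step each round
    x = x.clone().detach().requires_grad_(True)
    logits = substitute(x)
    # Gradient of the logit for the oracle-assigned (target-model) label, w.r.t. the input.
    logits.gather(1, oracle_labels.view(-1, 1)).sum().backward()
    x_new = x + lam * x.grad.sign()
    return x_new.clamp(0.0, 1.0).detach()
```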

An Adversarial Attack Type Classification Method Using Linear Discriminant Analysis and k-means Algorithm (선형 판별 분석 및 k-means 알고리즘을 이용한 적대적 공격 유형 분류 방안)

  • Choi, Seok-Hwan; Kim, Hyeong-Geon; Choi, Yoon-Ho
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.6 / pp.1215-1225 / 2021
  • Although artificial intelligence (AI) techniques have shown impressive performance in various fields, they are vulnerable to adversarial examples, which induce misclassification by adding human-imperceptible perturbations to the input. Previous studies on defending against adversarial examples fall into three categories: (1) model retraining methods, (2) input transformation methods, and (3) adversarial example detection methods. However, even though defenses against adversarial examples are constantly being proposed, there has been no research on classifying the type of adversarial attack. In this paper, we propose an adversarial attack family classification method based on dimensionality reduction and clustering. Specifically, after extracting the adversarial perturbation from an adversarial example, we apply Linear Discriminant Analysis (LDA) to reduce the dimensionality of the perturbation and then apply the k-means algorithm to classify the attack family. Experimental results on the MNIST and CIFAR-10 datasets show that the proposed method can efficiently classify five types of adversarial attack (FGSM, BIM, PGD, DeepFool, C&W) and provides good classification performance even when the legitimate input corresponding to the adversarial example is unknown. (A sketch of the LDA + k-means pipeline follows.)
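
A minimal scikit-learn sketch of the pipeline the abstract describes: extract the perturbation (adversarial input minus clean input), reduce its dimensionality with LDA, and cluster the reduced vectors into five attack families with k-means. The array shapes, the use of labeled perturbations to fit LDA, and the k-means settings are assumptions.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

def classify_attack_families(x_adv, x_clean, attack_labels, n_families=5):
    """x_adv, x_clean: (n_samples, n_features) arrays; attack_labels: labels used to fit LDA."""
    perturbation = x_adv - x_clean                                  # extract the adversarial perturbation
    lda = LinearDiscriminantAnalysis(n_components=n_families - 1)   # supervised dimensionality reduction
    reduced = lda.fit_transform(perturbation, attack_labels)
    kmeans = KMeans(n_clusters=n_families, n_init=10, random_state=0)
    return kmeans.fit_predict(reduced)                              # cluster index per sample
```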