• Title/Abstract/Keyword: Adversarial Machine Learning


Adversarial Machine Learning: A Survey on the Influence Axis

  • Alzahrani, Shahad; Almalki, Taghreed; Alsuwat, Hatim; Alsuwat, Emad
    • International Journal of Computer Science & Network Security / Vol. 22, No. 5 / pp.193-203 / 2022
  • Artificial intelligence systems and applications are now in everyday use, and machine learning technologies have come to be characterized by exceptional capabilities and distinguished performance in many areas. However, these applications and systems are vulnerable to adversaries, who can induce wrong classifications by introducing distorted samples. In particular, it has been observed that adversarial examples crafted during the training and test phases can severely degrade the performance of machine learning models. This paper provides a comprehensive review of recent research on adversarial machine learning, examining only techniques published between 2018 and 2021. Diverse system models are investigated and discussed with respect to the type of attack, along with possible security suggestions against these attacks, to highlight the risks of adversarial machine learning.
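For readers unfamiliar with how such distorted samples are produced, the sketch below shows the fast gradient sign method (FGSM), one of the standard attack techniques covered in this line of survey work. It is a generic illustration rather than the method of any paper listed here; the classifier and data loading are assumed to exist elsewhere.

```python
import tensorflow as tf

def fgsm_example(model, x, y_true, epsilon=0.03):
    """Craft adversarial examples with the fast gradient sign method.

    model: a tf.keras classifier returning class probabilities.
    x: batch of input images, shape (N, H, W, C), values in [0, 1].
    y_true: one-hot labels for x.
    epsilon: maximum per-pixel perturbation.
    """
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    loss_fn = tf.keras.losses.CategoricalCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y_true, model(x))
    # Step in the direction that increases the loss, then clip to a valid image.
    grad = tape.gradient(loss, x)
    x_adv = x + epsilon * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)
```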

Intrusion Detection System을 회피하고 Physical Attack을 하기 위한 GAN 기반 적대적 CAN 프레임 생성방법 (GAN Based Adversarial CAN Frame Generation Method for Physical Attack Evading Intrusion Detection System)

  • 김도완;최대선
    • 정보보호학회논문지 / Vol. 31, No. 6 / pp.1279-1290 / 2021
  • As vehicle technology has advanced to autonomous driving that requires no driver intervention, the security of the in-vehicle network CAN has also become important. CAN is vulnerable to hacking attacks, and machine-learning-based IDSs are introduced to detect them. Despite their high accuracy, however, machine learning models have been shown to be vulnerable to adversarial examples. In this paper, we propose a method for generating adversarial CAN frames that both evade the IDS and can be used to attack a real vehicle: noise is added to the features so that the IDS is evaded, and feature selection and packetization are performed so that a physical attack on an actual vehicle is possible. We run evasion experiments ranging from perturbing all features, to perturbing only selected features, to preprocessing after packetization, and measure how well the generated adversarial CAN frames evade the IDS.
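The core idea of perturbing only selected CAN payload features and keeping the frame physically valid can be sketched roughly as below. This is only an illustration under stated assumptions: the mutable byte positions, the perturbation budget, and the IDS scoring callable are hypothetical, and the paper's GAN generator is replaced here by a simple bounded random search.

```python
import numpy as np

MUTABLE_BYTES = [4, 5, 6, 7]   # assumption: payload bytes that can be perturbed
                               # without breaking the frame's physical meaning

def perturb_can_frame(payload, ids_score, budget=8, trials=200, rng=None):
    """Search for a perturbed 8-byte CAN payload that lowers an IDS's attack score.

    payload: np.uint8 array of length 8 (original attack-frame payload).
    ids_score: callable mapping a payload to the IDS's 'attack' probability.
    budget: maximum absolute change per mutable byte.
    """
    rng = rng or np.random.default_rng()
    best, best_score = payload.copy(), ids_score(payload)
    for _ in range(trials):
        candidate = payload.astype(int).copy()
        noise = rng.integers(-budget, budget + 1, size=len(MUTABLE_BYTES))
        candidate[MUTABLE_BYTES] = np.clip(candidate[MUTABLE_BYTES] + noise, 0, 255)
        candidate = candidate.astype(np.uint8)
        score = ids_score(candidate)
        if score < best_score:          # keep the most evasive candidate so far
            best, best_score = candidate, score
    return best, best_score
```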

BM3D and Deep Image Prior based Denoising for the Defense against Adversarial Attacks on Malware Detection Networks

  • Sandra, Kumi; Lee, Suk-Ho
    • International Journal of Advanced Smart Convergence / Vol. 10, No. 3 / pp.163-171 / 2021
  • Recently, visualization-based machine learning approaches have been proposed for malware detection. Unfortunately, these techniques are exposed to adversarial examples: perturbations that can deceive a deep-learning-based malware detection network so that the malware becomes unrecognizable. To address this shortcoming, we present a denoising defense based on the Block-Matching and 3D filtering (BM3D) algorithm and the deep image prior to protect visualization-based malware detection systems against adversarial examples. The BM3D-based denoising step eliminates most of the adversarial noise, after which the deep-image-prior-based denoising removes the remaining subtle noise. Experimental results on the MS BIG malware dataset and benign samples show that the proposed denoising-based defense recovers, to some extent, the performance of a CNN malware detection model under adversarial attack.
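A rough prototype of this two-stage denoising pipeline is sketched below, assuming the third-party bm3d package on PyPI for the first stage; the deep-image-prior stage is only outlined, with a small Keras CNN standing in for the architecture used in the paper, and the noise level and iteration count are assumptions.

```python
import tensorflow as tf
import bm3d  # assumption: the PyPI 'bm3d' package is installed

def two_stage_denoise(gray_image, sigma=0.1, dip_steps=300):
    """Stage 1: BM3D removes most of the adversarial noise.
       Stage 2: a deep-image-prior style fit removes residual noise.
       gray_image: float32 array in [0, 1], shape (H, W)."""
    coarse = bm3d.bm3d(gray_image, sigma_psd=sigma)

    # Minimal deep-image-prior stand-in: a small CNN is fitted from a fixed
    # random input toward the coarse image and stopped early, so it captures
    # image structure before it can reproduce the remaining noise.
    h, w = coarse.shape
    z = tf.random.normal((1, h, w, 8))
    net = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
    ])
    target = tf.constant(coarse[None, ..., None], dtype=tf.float32)
    opt = tf.keras.optimizers.Adam(1e-3)
    for _ in range(dip_steps):
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean((net(z) - target) ** 2)
        opt.apply_gradients(zip(tape.gradient(loss, net.trainable_variables),
                                net.trainable_variables))
    return net(z)[0, ..., 0].numpy()
```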

국방용 합성이미지 데이터셋 생성을 위한 대립훈련신경망 기술 적용 연구 (Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks)

  • 양훈민
    • 한국군사과학기술학회지 / Vol. 22, No. 1 / pp.49-59 / 2019
  • Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to model high-dimensional and complex data distributions implicitly and to generate new data samples from the model distribution. This paper investigates the training methodology, architectures, and various applications of generative adversarial networks. An experimental evaluation is also conducted on generating synthetic image datasets for defense using two types of GANs: a deep convolutional generative adversarial network (DCGAN) for military image generation, and a cycle-consistent generative adversarial network (CycleGAN) for visible-to-infrared image translation. Each model can yield a great diversity of high-fidelity synthetic images compared to the training images. This result opens up the possibility of using inexpensive synthetic images to train neural networks while avoiding the enormous expense of collecting large amounts of hand-annotated real data.
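As a concrete reference for the DCGAN part, a minimal Keras generator of the kind typically used for 64x64 synthetic images is sketched below; the layer sizes and latent dimension are illustrative assumptions, not the configuration used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_dcgan_generator(latent_dim=100):
    """Minimal DCGAN-style generator: a latent vector is upsampled by
    transposed convolutions into a 64x64 RGB image in [-1, 1]."""
    return tf.keras.Sequential([
        layers.Dense(8 * 8 * 256, use_bias=False, input_shape=(latent_dim,)),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Reshape((8, 8, 256)),
        layers.Conv2DTranspose(128, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="tanh"),
    ])

# Usage: sample a batch of synthetic images from random latent vectors.
generator = build_dcgan_generator()
fake_images = generator(tf.random.normal((16, 100)))   # shape (16, 64, 64, 3)
```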

위상 최적화를 위한 생산적 적대 신경망 기반 데이터 증강 기법 (GAN-based Data Augmentation methods for Topology Optimization)

  • 이승혜;이유진;이기학;이재홍
    • 한국공간구조학회논문집 / Vol. 21, No. 4 / pp.39-48 / 2021
  • In this paper, a GAN-based data augmentation method is proposed for topology optimization. In machine learning, the total amount of data determines the accuracy and robustness of the trained neural network architectures, especially for supervised learning networks. Because insufficient data tends to lead to overfitting or underfitting of the architectures, a data augmentation method is needed to increase the amount of data and reduce overfitting when training a machine learning model. In this study, a Generative Adversarial Network (GAN) is used to augment the topology optimization dataset, and the produced dataset is compared with the original dataset.
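Once a generator has been trained on the topology-optimization samples, the augmentation step itself amounts to appending generated samples to the original training set, roughly as below. The generator, the labeling function, the sample shapes, and the mixing ratio are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def augment_with_gan(x_real, y_real, generator, label_fn, ratio=0.5, latent_dim=100):
    """Append GAN-generated topology samples to a training set.

    x_real, y_real: original samples and labels, x_real of shape (N, H, W, 1).
    generator: trained generator mapping latent vectors to samples.
    label_fn: maps a generated sample to its label (e.g. via re-analysis).
    ratio: number of synthetic samples as a fraction of the real set size.
    """
    n_fake = int(len(x_real) * ratio)
    z = np.random.normal(size=(n_fake, latent_dim)).astype("float32")
    x_fake = np.asarray(generator(z))
    y_fake = np.array([label_fn(sample) for sample in x_fake])
    x_aug = np.concatenate([x_real, x_fake], axis=0)
    y_aug = np.concatenate([y_real, y_fake], axis=0)
    return x_aug, y_aug
```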

딥뉴럴네트워크 상에 신속한 오인식 샘플 생성 공격 (Rapid Misclassification Sample Generation Attack on Deep Neural Network)

  • 권현;박상준;김용철
    • 융합보안논문지 / Vol. 20, No. 2 / pp.111-121 / 2020
  • Deep neural networks show good performance in machine learning tasks such as image recognition and object recognition. However, deep neural networks are vulnerable to adversarial examples, which are samples crafted by adding minimal noise to an original sample so that the deep neural network misclassifies it. A drawback of such adversarial examples is that it takes a long time to generate a sample that simultaneously keeps the noise relative to the original minimal and causes misclassification. In some situations, an attack that quickly causes misclassification may be needed even if the noise is not minimal. In this paper, we propose a rapid misclassification sample generation attack that prioritizes attacking the deep neural network quickly. The proposed method adds noise focused on causing misclassification, without considering the distortion relative to the original sample. Because it does not separately account for that distortion, it generates samples faster than existing methods. We used MNIST and CIFAR-10 as experimental data and TensorFlow as the machine learning library. In the experimental results, the proposed misclassification samples achieve a 100% attack success rate with 50% and 80% fewer iterations than the existing method on MNIST and CIFAR-10, respectively.
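The core idea, adding noise aimed only at misclassification while ignoring distortion of the original, can be sketched as an iterative signed-gradient update that stops as soon as the prediction flips. This is a hedged reconstruction of that idea rather than the authors' exact procedure, and the classifier is assumed to be a Keras model.

```python
import tensorflow as tf

def rapid_misclassify(model, x, y_true, step=0.05, max_iters=50):
    """Add untargeted signed-gradient noise until the prediction flips.

    model: tf.keras classifier returning class probabilities.
    x: one input image, shape (1, H, W, C), values in [0, 1].
    y_true: integer class label of x.
    No effort is made to keep the perturbation small, so the loop typically
    terminates in far fewer iterations than distortion-minimizing attacks.
    """
    x_adv = tf.convert_to_tensor(x, dtype=tf.float32)
    label = tf.constant([y_true])
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    for i in range(max_iters):
        if int(tf.argmax(model(x_adv), axis=-1)[0]) != y_true:
            return x_adv, i            # already misclassified: stop immediately
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            loss = loss_fn(label, model(x_adv))
        grad = tape.gradient(loss, x_adv)
        x_adv = tf.clip_by_value(x_adv + step * tf.sign(grad), 0.0, 1.0)
    return x_adv, max_iters
```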

Defending and Detecting Audio Adversarial Example using Frame Offsets

  • Gong, Yongkang; Yan, Diqun; Mao, Terui; Wang, Donghua; Wang, Rangding
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 4 / pp.1538-1552 / 2021
  • Machine learning models are vulnerable to adversarial examples generated by adding a deliberately designed perturbation to a benign sample. In particular, for an automatic speech recognition (ASR) system, a benign audio clip that sounds normal could be decoded as a harmful command due to a potential adversarial attack. In this paper, we focus on countermeasures against audio adversarial examples. By analyzing the characteristics of ASR systems, we find that frame offsets, obtained by appending a silence clip at the beginning of an audio clip, can degenerate adversarial perturbations into normal noise. For various scenarios, we exploit frame offsets through different strategies: defending, detecting, and a hybrid of the two. Compared with previous methods, the proposed approach can defend against audio adversarial examples in a simpler, more generic, and more efficient way. Evaluated on three state-of-the-art adversarial attacks against different ASR systems, the experimental results demonstrate that the proposed method can effectively improve the robustness of ASR systems.
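The defense amounts to prepending a short silence clip so that the frame alignment the adversarial perturbation was optimized for no longer holds; a transcription change between the original and offset audio can also serve as a detector. The sketch below illustrates both uses with a placeholder transcribe callable, since the ASR systems evaluated in the paper are not reproduced here, and the offset length is an assumption.

```python
import numpy as np

def add_frame_offset(audio, sample_rate=16000, offset_ms=20):
    """Prepend a short silence clip, shifting the ASR's frame boundaries so
    that perturbations aligned to the original frames degrade to noise."""
    silence = np.zeros(int(sample_rate * offset_ms / 1000), dtype=audio.dtype)
    return np.concatenate([silence, audio])

def detect_adversarial(audio, transcribe, sample_rate=16000, offset_ms=20):
    """Flag the audio as adversarial if the transcription changes after the
    offset; `transcribe` is an assumed callable wrapping some ASR system."""
    original = transcribe(audio)
    shifted = transcribe(add_frame_offset(audio, sample_rate, offset_ms))
    return original != shifted
```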

Study of oversampling algorithms for soil classifications by field velocity resistivity probe

  • Lee, Jong-Sub; Park, Junghee; Kim, Jongchan; Yoon, Hyung-Koo
    • Geomechanics and Engineering / Vol. 30, No. 3 / pp.247-258 / 2022
  • A field velocity resistivity probe (FVRP) can measure compressional waves, shear waves, and electrical resistivity in boreholes. The objective of this study is to perform soil classification with machine learning techniques using the elastic wave velocities and electrical resistivity measured by the FVRP. Field and laboratory tests are performed, and the measured values are used as input variables to classify silt sand, sand, silty clay, and clay-sand mixture layers. The accuracies of k-nearest neighbors (KNN), naive Bayes (NB), random forest (RF), and support vector machine (SVM), selected to perform the classification with optimized hyperparameters, are calculated as 0.76, 0.91, 0.94, and 0.88, respectively. To increase the amount of data in each soil layer, the synthetic minority oversampling technique (SMOTE) and a conditional tabular generative adversarial network (CTGAN) are applied to overcome the imbalance in the dataset. CTGAN improves the accuracy of the KNN, NB, RF, and SVM algorithms. The results demonstrate that the values measured by the FVRP can be used to classify soil layers from three kinds of data with machine learning algorithms.
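A rough sketch of the oversampling step is given below. The SMOTE call uses the imbalanced-learn package; the CTGAN call assumes the 'ctgan' PyPI package, whose exact API may differ by version; and the file name and column names are illustrative stand-ins for the study's measured features (P-wave velocity, S-wave velocity, electrical resistivity).

```python
import pandas as pd
from imblearn.over_sampling import SMOTE
from ctgan import CTGAN  # assumption: the 'ctgan' PyPI package

# Illustrative layout: one row per measurement depth, labeled by soil class.
df = pd.read_csv("fvrp_measurements.csv")              # assumed file
X, y = df[["vp", "vs", "resistivity"]], df["soil_class"]

# Option 1: SMOTE interpolates new minority-class rows between neighbors.
X_smote, y_smote = SMOTE(random_state=0).fit_resample(X, y)

# Option 2: CTGAN learns the joint tabular distribution and samples new rows.
ctgan = CTGAN(epochs=300)
ctgan.fit(df, discrete_columns=["soil_class"])
synthetic_rows = ctgan.sample(1000)
```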

Generative Adversarial Networks를 이용한 Face Morphing 기법 연구 (Face Morphing Using Generative Adversarial Networks)

  • 한윤;김형중
    • 디지털콘텐츠학회 논문지 / Vol. 19, No. 3 / pp.435-443 / 2018
  • With the recent explosive growth of computing power, the barrier of limited computation has disappeared, and under the name of deep learning, various models such as recurrent neural networks (RNN) and convolutional neural networks (CNN) have been proposed to solve many difficult problems in computer vision. The Generative Adversarial Network (GAN), introduced in 2014, showed that computer vision problems can be addressed even with unsupervised learning, and made it possible to use the trained generator for generation tasks as well. GANs have been combined with various models and developed into diverse forms. Machine learning also suffers from the difficulty of data collection: if the dataset is too large, it is hard to refine it effectively by removing noise, and if it is too small, even small differences become large noise and learning is difficult. In this paper, we apply a deep CNN model as a preprocessing filter that extracts the face region from each video frame before it is fed to the GAN, and present a method that trains stably on a limited amount of data collected from two people and can generate composite images with various facial expressions.
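The preprocessing filter described above, cropping the face region from each frame before GAN training, can be approximated as in the sketch below. OpenCV's bundled Haar-cascade detector stands in for the deep CNN face detector used in the paper, and the output size is an assumption.

```python
import cv2

# OpenCV's Haar cascade is used here as a stand-in for a deep CNN face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(frame, out_size=64):
    """Return the largest detected face in a video frame, resized for GAN
    training, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])   # largest box
    return cv2.resize(frame[y:y + h, x:x + w], (out_size, out_size))
```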