• Title/Summary/Keyword: adversarial attacks


Resilience against Adversarial Examples: Data-Augmentation Exploiting Generative Adversarial Networks

  • Kang, Mingu;Kim, HyeungKyeom;Lee, Suchul;Han, Seokmin
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.11, pp.4105-4121, 2021
  • Recently, malware classification based on Deep Neural Networks (DNN) has gained significant attention due to the rise in popularity of artificial intelligence (AI). DNN-based malware classifiers are a novel solution for combating never-before-seen malware families because they classify malware by structural characteristics rather than requiring particular signatures, as traditional classifiers do. However, these DNN-based classifiers have been found to lack robustness against malware that is carefully crafted to evade detection. Such specially crafted pieces of malware are referred to as adversarial examples (AEs). We consider a clever adversary who has thorough knowledge of DNN-based malware classifiers and exploits it to craft malware that fools them. In this paper, we propose a DNN-based malware classifier that is made resilient to these attacks by exploiting Generative Adversarial Network (GAN) based data augmentation. The experimental results show that the proposed scheme classifies malware, including AEs, with a false positive rate (FPR) of 3.0% and a balanced accuracy of 70.16%, improvements of 26.1% and 18.5%, respectively, over a traditional DNN-based classifier that does not exploit a GAN.
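
Although the paper's exact architecture is not reproduced here, the following minimal PyTorch sketch illustrates the general idea of GAN-based augmentation for a feature-vector malware classifier; the feature dimension, network sizes, and training schedule are assumptions for illustration only.

```python
# Minimal sketch of GAN-based data augmentation for a DNN malware classifier.
# Hypothetical feature dimensions and architectures; the paper's setup may differ.
import torch
import torch.nn as nn

FEAT_DIM, LATENT_DIM = 256, 64   # assumed sizes of malware feature vectors / noise

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, FEAT_DIM), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

def train_gan(real_feats, epochs=100):
    """Train a GAN on real malware feature vectors and return the generator."""
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()
    ones = torch.ones(real_feats.size(0), 1)
    zeros = torch.zeros(real_feats.size(0), 1)
    for _ in range(epochs):
        fake = G(torch.randn(real_feats.size(0), LATENT_DIM))
        # Discriminator step: real -> 1, fake -> 0
        d_loss = bce(D(real_feats), ones) + bce(D(fake.detach()), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: try to fool the discriminator
        g_loss = bce(D(fake), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return G

def augment(real_feats, labels, G, n_synthetic=1000):
    """Append GAN-generated samples (labeled as malware) to the training set."""
    with torch.no_grad():
        synthetic = G(torch.randn(n_synthetic, LATENT_DIM))
    feats = torch.cat([real_feats, synthetic])
    labels = torch.cat([labels, torch.ones(n_synthetic, dtype=labels.dtype)])
    return feats, labels
```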

Membership Inference Attack against Text-to-Image Model Based on Generating Adversarial Prompt Using Textual Inversion (Textual Inversion을 활용한 Adversarial Prompt 생성 기반 Text-to-Image 모델에 대한 멤버십 추론 공격)

  • Yoonju Oh;Sohee Park;Daeseon Choi
    • Journal of the Korea Institute of Information Security & Cryptology, v.33 no.6, pp.1111-1123, 2023
  • In recent years, as generative models have advanced, research on attacks that threaten them has also been actively conducted. We propose a new membership inference attack against text-to-image models. Existing membership inference attacks on text-to-image models generate a single image from the caption of each query image. In contrast, this paper obtains personalized embeddings of query images through Textual Inversion and proposes a membership inference attack that effectively generates multiple images by producing adversarial prompts. In addition, the membership inference attack is evaluated for the first time on the Stable Diffusion model, which is attracting attention among text-to-image models, and achieves an accuracy of up to 1.00.
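
The abstract does not specify the scoring rule, so the sketch below only illustrates one plausible similarity-based membership decision; `generate_images` and `embed` are hypothetical helpers standing in for a text-to-image pipeline (e.g., Stable Diffusion conditioned on the Textual-Inversion prompt) and an image feature extractor.

```python
# Conceptual sketch of a similarity-based membership decision. The helpers
# generate_images() and embed() are hypothetical; the paper's scoring may differ.
import numpy as np

def membership_score(query_image, learned_prompt, generate_images, embed, n=8):
    """Generate several images from the prompt learned for the query image via
    Textual Inversion, then score membership by the best similarity to the query."""
    candidates = generate_images(learned_prompt, n)   # n generated images
    q = embed(query_image)
    sims = []
    for c in candidates:
        e = embed(c)
        sims.append(np.dot(q, e) / (np.linalg.norm(q) * np.linalg.norm(e) + 1e-8))
    return max(sims)   # high similarity suggests the query was in the training data

def infer_membership(score, threshold=0.9):
    # Threshold would be chosen on a validation split; 0.9 is only illustrative.
    return score >= threshold
```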

Presentation Attacks in Palmprint Recognition Systems

  • Sun, Yue;Wang, Changkun
    • Journal of Multimedia Information System, v.9 no.2, pp.103-112, 2022
  • Background: A presentation attack places a printed image or displayed video in front of the sensor to deceive a biometric recognition system. Usually, presentation attackers steal a genuine user's biometric image and use it for the attack. In recent years, reconstruction attacks and adversarial attacks have been able to generate high-quality fake images with high attack success rates. However, their success rates degrade remarkably once the images are recaptured by a camera. Methods: In order to comprehensively analyze the threat of presentation attacks to palmprint recognition systems, this paper constructs six palmprint presentation attack datasets. The datasets were tested against texture-coding-based and deep-learning-based recognition methods. Results and conclusion: The experimental results show that presentation attacks caused by leakage of the original image have a high success rate and pose a great threat, while the success rates of reconstruction attacks and adversarial attacks decrease significantly.

Improving the Robustness of Deepfake Detection Models Against Adversarial Attacks (적대적 공격에 따른 딥페이크 탐지 모델 강화)

  • Lee, Sangyeong;Hou, Jong-Uk
    • Annual Conference of KIPS, 2022.11a, pp.724-726, 2022
  • Digital crimes involving deepfakes are becoming increasingly sophisticated and are causing major social repercussions. With the emergence of adversarial attacks that induce errors in deep-learning-based models, deepfake detection models are becoming more vulnerable, which can lead to very serious consequences. In this work, we aim to build a robust model that remains unaffected by adversarial attacks through two methods. We demonstrate robustness against adversarial attacks using adversarial training, a model-hardening technique, together with image-processing-based defenses, namely resizing and JPEG compression.
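
As a rough illustration of the two directions named in the abstract, the sketch below shows an FGSM-style adversarial-training step and a resize-plus-JPEG input transformation in PyTorch/PIL; the attack strength, target size, and JPEG quality are assumed values, not the paper's settings.

```python
# Illustrative sketch of the two defense directions mentioned above, assuming a
# PyTorch deepfake classifier `model`; eps, size, and quality are assumptions.
import io
import torch
import torch.nn.functional as F
from PIL import Image

def fgsm_adversarial_training_step(model, optimizer, x, y, eps=4/255):
    """One adversarial-training step: perturb inputs with FGSM, then train on them."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()

def jpeg_resize_defense(pil_img, size=(224, 224), quality=75):
    """Input-transformation defense: resize and JPEG-compress to wash out perturbations."""
    img = pil_img.resize(size)
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```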

A Beacon-Based Trust Management System for Enhancing User Centric Location Privacy in VANETs

  • Chen, Yi-Ming;Wei, Yu-Chih
    • Journal of Communications and Networks, v.15 no.2, pp.153-163, 2013
  • In recent years, more and more research has focused on trust management in vehicular ad-hoc networks (VANETs) to improve vehicle safety. However, little attention has been paid to location privacy, owing to the natural conflict between trust and anonymity, which is the basic protection of privacy. Although traffic safety remains the most crucial issue in VANETs, location privacy can be just as important for drivers, and neither can be ignored. In this paper, we propose a beacon-based trust management system, called BTM, that aims to thwart internal attackers from sending false messages in privacy-enhanced VANETs. To evaluate the reliability and performance of the proposed system, we conducted a set of simulations under alteration attacks, bogus message attacks, and message suppression attacks. The simulation results show that the proposed system is highly resilient to adversarial attacks, whether under a fixed or a random silent-period location-privacy-enhancement scheme.
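
BTM's actual trust computation is not detailed in the abstract; the fragment below is only a toy illustration of the general beacon-based idea, checking an event message's claimed position against the sender's most recent beacon and nudging a trust score accordingly. All thresholds and update rates are invented for the example.

```python
# Toy illustration (not the BTM scheme itself) of beacon-based message checking.
import math

def plausible(beacon_pos, claimed_pos, max_dist=50.0):
    """A claim is plausible if it lies near the sender's last beaconed position."""
    dx, dy = beacon_pos[0] - claimed_pos[0], beacon_pos[1] - claimed_pos[1]
    return math.hypot(dx, dy) <= max_dist

def update_trust(trust, consistent, reward=0.05, penalty=0.2):
    """Increase trust for consistent messages, decrease it sharply for inconsistent ones."""
    trust = trust + reward if consistent else trust - penalty
    return min(1.0, max(0.0, trust))
```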

Generating Audio Adversarial Examples Using a Query-Efficient Decision-Based Attack (질의 효율적인 의사 결정 공격을 통한 오디오 적대적 예제 생성 연구)

  • Seo, Seong-gwan;Mun, Hyunjun;Son, Baehoon;Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.1, pp.89-98, 2022
  • As deep learning technology has been applied to various fields, research on adversarial attack techniques, a security problem of deep learning models, has been actively conducted. Adversarial attacks have mainly been studied in the image domain, where fully decision-based attack techniques that require only the model's classification result have recently been developed. In the audio domain, however, research has progressed relatively slowly. In this paper, we apply several decision-based attack techniques to the audio domain and improve upon state-of-the-art attacks. State-of-the-art decision-based attacks have the disadvantage of requiring many queries for gradient approximation. We improve query efficiency by proposing a method that reduces the vector search space required for gradient approximation. Experimental results show that the attack success rate increased by 50% and the difference between the original audio and the adversarial examples was reduced by 75%, demonstrating that our method can generate adversarial examples with less noise.
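
The paper's concrete construction is not given in the abstract, but the following NumPy sketch shows the general flavor of decision-based gradient approximation with a reduced search space: perturbations are drawn in a low-dimensional space and upsampled to the audio length, so each estimate needs fewer decision queries. The function names, dimensions, and step size are assumptions for illustration.

```python
# Sketch of decision-based gradient approximation with a reduced search space.
import numpy as np

def estimate_gradient(is_adversarial, x, n_queries=50, low_dim=256, delta=0.01):
    """Monte Carlo estimate of the decision boundary's normal direction at x.
    is_adversarial(audio) -> bool queries only the model's top-1 decision."""
    full_dim = x.shape[0]
    grad = np.zeros(full_dim)
    for _ in range(n_queries):
        u_low = np.random.randn(low_dim)                      # sample in reduced space
        u = np.interp(np.linspace(0, low_dim - 1, full_dim),  # upsample to full length
                      np.arange(low_dim), u_low)
        u /= np.linalg.norm(u) + 1e-12
        sign = 1.0 if is_adversarial(x + delta * u) else -1.0  # one decision query
        grad += sign * u
    return grad / n_queries
```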

Empirical Study on Correlation between Performance and PSI According to Adversarial Attacks for Convolutional Neural Networks (컨벌루션 신경망 모델의 적대적 공격에 따른 성능과 개체군 희소 지표의 상관성에 관한 경험적 연구)

  • Youngseok Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.17 no.2, pp.113-120, 2024
  • The population sparseness index (PSI) is used to describe the functioning of the internal layers of artificial neural networks from the perspective of individual neurons, shedding light on the black-box nature of the network's internal operations. Prior research indicates a positive correlation between the PSI and performance in each layer of convolutional neural network models for image classification. In this study, we observed the internal operations of a convolutional neural network when adversarial examples were applied. The experimental results revealed a similar pattern of positive correlation for adversarial examples, which were crafted so that the model maintained only 5% accuracy compared with benign data. Thus, although individual adversarial attacks differ, the PSI observed for adversarial examples showed positive correlations across layers consistent with those observed for benign data.
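
The abstract does not restate the PSI formula; the sketch below uses the Treves-Rolls population sparseness, a common formulation, which may differ in detail from the index actually used in the paper.

```python
# Sketch of a per-layer population sparseness computation (Treves-Rolls form);
# the paper's exact PSI definition may differ from this common formulation.
import numpy as np

def population_sparseness(activations):
    """activations: non-negative responses of N units to one stimulus (1-D array).
    Returns a value in [0, 1]; higher means the response is carried by fewer units."""
    r = np.asarray(activations, dtype=float)
    n = r.size
    a = (r.mean() ** 2) / (np.mean(r ** 2) + 1e-12)   # Treves-Rolls activity ratio
    return (1.0 - a) / (1.0 - 1.0 / n)

def layer_psi(layer_outputs):
    """Average sparseness over a batch of stimuli; layer_outputs shape: (batch, units)."""
    acts = np.maximum(np.asarray(layer_outputs, dtype=float), 0.0)  # e.g., post-ReLU
    return float(np.mean([population_sparseness(a) for a in acts]))
```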

Class Specific Autoencoders Enhance Sample Diversity

  • Kumar, Teerath;Park, Jinbae;Ali, Muhammad Salman;Uddin, AFM Shahab;Bae, Sung-Ho
    • Journal of Broadcast Engineering, v.26 no.7, pp.844-854, 2021
  • Semi-supervised learning (SSL) and few-shot learning (FSL) have shown impressive performance even when the volume of labeled data is very limited. However, SSL and FSL can suffer significant performance degradation when the diversity gap between the labeled and unlabeled data is high. To reduce this diversity gap, we propose a novel scheme that relies on an autoencoder for generating pseudo examples. Specifically, an autoencoder is trained on a specific class using the available labeled data, and the decoder of the trained autoencoder is then used to generate N samples of that class from N random noise vectors sampled from a standard normal distribution. This process is repeated for all classes. Consequently, the generated data reduce the diversity gap and enhance model performance. Extensive experiments on the MNIST and FashionMNIST datasets for SSL and FSL verify the effectiveness of the proposed approach in terms of classification accuracy and robustness against adversarial attacks.
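
A minimal PyTorch sketch of the described scheme follows: one autoencoder is trained per class on its labeled samples, and the trained decoder maps N standard-normal noise vectors to N pseudo examples for that class. The layer sizes and training loop are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of class-specific autoencoders for pseudo-example generation.
import torch
import torch.nn as nn

class ClassAutoencoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim), nn.Sigmoid())
    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_on_class(x_class, epochs=50, lr=1e-3):
    """Fit one autoencoder to the labeled samples of a single class."""
    ae = ClassAutoencoder(in_dim=x_class.size(1))
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(ae(x_class), x_class)
        opt.zero_grad(); loss.backward(); opt.step()
    return ae

def generate_pseudo_examples(ae, n, latent_dim=32):
    """Decode N standard-normal noise vectors into N pseudo examples for that class."""
    with torch.no_grad():
        return ae.decoder(torch.randn(n, latent_dim))
```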

Survey Adversarial Attacks and Neural Rendering (적대적 공격과 뉴럴 렌더링 연구 동향 조사)

  • Lee, Ye Jin;Shim, Bo Seok;Hou, Jong-Uk
    • Annual Conference of KIPS, 2022.11a, pp.243-245, 2022
  • Deep-neural-network-based models are used in a variety of fields and show excellent performance. However, adversarial attacks, which induce malfunctions in machine-learning models, have exposed the vulnerability of deep neural network models. In the security field, such vulnerabilities are addressed by intentionally attacking models to verify their robustness. Adversarial attacks on 2D images are being actively studied, whereas research on adversarial attacks against 3D data is not. In this paper, we survey studies on neural rendering, adversarial attacks, and the application of adversarial attacks to 3D representations, and we expect this survey to support future research on adversarial attacks in neural rendering.

A Study on Countermeasures Against Adversarial Attacks on AI Models (AI 모델의 적대적 공격 대응 방안에 대한 연구)

  • Jae-Gyung Park;Jun-Seo Chang
    • Proceedings of the Korean Society of Computer Information Conference, 2023.07a, pp.619-620, 2023
  • This paper studies the adversarial attacks to which AI models may be exposed. As AI chatbots are exposed to adversarial attacks, many security breaches have recently occurred. In response, this paper investigates what adversarial attacks are and studies ways to respond to, or defend in advance against, such attacks. It surveys four types of adversarial attacks and their countermeasures, and emphasizes the importance of security for AI models. It also concludes that additional countermeasures should be investigated so that such adversarial attacks can be defended against.
