• Title/Summary/Keyword: Adversarial example

GAN Based Adversarial CAN Frame Generation Method for Physical Attack Evading Intrusion Detection System (Intrusion Detection System을 회피하고 Physical Attack을 하기 위한 GAN 기반 적대적 CAN 프레임 생성방법)

  • Kim, Dowan; Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.6 / pp.1279-1290 / 2021
  • As vehicle technology has advanced, autonomous driving that does not require driver intervention has developed. Accordingly, the security of CAN, the in-vehicle network, has also become important. CAN is vulnerable to hacking attacks, and machine learning-based IDSs have been introduced to detect them. However, despite its high accuracy, machine learning has proven vulnerable to adversarial examples. In this paper, we propose an adversarial CAN frame generation method that evades the IDS by adding noise to features, performing feature selection, and re-packeting the frame so that it can still carry out a physical attack on the vehicle. Through experiments, we examine how well the adversarial CAN frames evade the IDS in three cases: frames generated by modulating all features, by modulation after feature selection, and by preprocessing after re-packeting.
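
As a rough illustration of the kind of frame-level manipulation the abstract describes, the Python sketch below adds bounded noise to a few selected payload bytes of a CAN frame and re-packs the result into an 8-byte data field. The function name, the per-byte perturbation budget, and the chosen feature indices are hypothetical; the paper's GAN-based generator and feature-selection procedure are not reproduced here.

    import numpy as np

    def perturb_can_frame(payload_bytes, feature_idx, epsilon=8, rng=None):
        """Add bounded integer noise to selected payload bytes of a CAN frame.

        payload_bytes: sequence of up to 8 ints (0-255), the CAN data field.
        feature_idx:   indices of the bytes chosen by feature selection.
        epsilon:       maximum absolute change per byte (hypothetical budget).
        """
        rng = rng or np.random.default_rng()
        payload = np.array(payload_bytes, dtype=np.int16)
        noise = rng.integers(-epsilon, epsilon + 1, size=len(feature_idx))
        payload[feature_idx] = np.clip(payload[feature_idx] + noise, 0, 255)
        return payload.astype(np.uint8).tobytes()  # re-packed 8-byte data field

    # Example: perturb bytes 2 and 5 of a frame while keeping the rest intact.
    frame = perturb_can_frame([0x10, 0x22, 0x3A, 0x00, 0xFF, 0x01, 0x00, 0x7E],
                              feature_idx=[2, 5])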

Study on the White Noise effect Against Adversarial Attack for Deep Learning Model for Image Recognition (영상 인식을 위한 딥러닝 모델의 적대적 공격에 대한 백색 잡음 효과에 관한 연구)

  • Lee, Youngseok; Kim, Jongweon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.1 / pp.27-35 / 2022
  • In this paper we propose a white-noise addition method to prevent misclassification of deep learning systems under adversarial attacks. The proposed method adds white noise to the input image, whether it is a benign image or an adversarial example. The experimental results show that the proposed method is robust to three adversarial attacks: the FGSM, BIM, and CW attacks. The recognition accuracies of ResNet models with 18, 34, 50, and 101 layers improve when white noise is added to the adversarial test set, while the classification of the benign test set is not affected. The proposed method is applicable as a defense against adversarial attacks and can replace time-consuming and expensive defenses such as adversarial training and replacing the deep learning model.
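
A minimal sketch of the noise-addition idea, assuming a PyTorch classifier and pixel values in [0, 1]; the noise level sigma and the clamping range are hypothetical choices, not values reported in the paper.

    import torch

    def classify_with_white_noise(model, images, sigma=0.05):
        """Add zero-mean white (Gaussian) noise to inputs before classification.

        sigma is a hypothetical noise level; in the paper it would be tuned so
        benign accuracy is unaffected while adversarial perturbations are disrupted.
        """
        noisy = images + sigma * torch.randn_like(images)
        noisy = noisy.clamp(0.0, 1.0)           # keep pixels in a valid range
        with torch.no_grad():
            return model(noisy).argmax(dim=1)   # predicted class indices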

Query-Efficient Black-Box Adversarial Attack Methods on Face Recognition Model (얼굴 인식 모델에 대한 질의 효율적인 블랙박스 적대적 공격 방법)

  • Seo, Seong-gwan; Son, Baehoon; Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.6 / pp.1081-1090 / 2022
  • The face recognition model is used for identity verification on smartphones, providing convenience to many users. As a result, security review of DNN models is becoming important, and adversarial attacks are a well-known vulnerability of DNN models. Adversarial attacks have evolved into decision-based attack techniques that use only the recognition results of a deep learning model. However, the existing decision-based attack technique [14] requires a large number of queries when generating adversarial examples; in particular, approximating the gradient consumes many queries. Therefore, in this paper, we propose a method of generating adversarial examples using orthogonal-space sampling and dimensionality-reduction sampling to avoid wasting the queries that the existing decision-based attack [14] spends on gradient approximation. Experiments show that our method reduces the perturbation size of adversarial examples by a factor of about 2.4 and increases the attack success rate by 14% compared with the existing attack technique [14]. These results demonstrate that the proposed adversarial example generation method has superior attack performance.
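
The abstract's core idea, sampling perturbation directions in a reduced space to estimate a gradient from decision-only feedback, could look roughly like the sketch below. It assumes a single-channel image whose side lengths are divisible by the low-dimensional size, and the helper names, direction count, and step size delta are hypothetical; the paper's orthogonal-space sampling is not reproduced.

    import numpy as np

    def low_dim_directions(shape_hw, n_dirs, low=16, rng=None):
        """Sample perturbation directions in a low-dimensional space and
        upsample them to image resolution (dimensionality-reduction sampling;
        the reduced size `low` is a hypothetical choice)."""
        rng = rng or np.random.default_rng()
        h, w = shape_hw
        dirs = rng.standard_normal((n_dirs, low, low))
        # nearest-neighbour upsampling keeps the example dependency-free
        up = dirs.repeat(h // low, axis=1).repeat(w // low, axis=2)
        norms = np.linalg.norm(up.reshape(n_dirs, -1), axis=1)
        return up / norms[:, None, None]

    def estimate_gradient(decision_fn, x_adv, n_dirs=50, delta=0.1):
        """Monte-Carlo gradient estimate from decision-only feedback:
        decision_fn(x) returns 1 if x is still adversarial, else 0."""
        dirs = low_dim_directions(x_adv.shape, n_dirs)
        signs = np.array([2 * decision_fn(x_adv + delta * d) - 1 for d in dirs])
        return (signs[:, None, None] * dirs).mean(axis=0)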

Research of a Method of Generating an Adversarial Sample Using Grad-CAM (Grad-CAM을 이용한 적대적 예제 생성 기법 연구)

  • Kang, Sehyeok
    • Journal of Korea Multimedia Society / v.25 no.6 / pp.878-885 / 2022
  • Research in the field of computer vision based on deep learning is being actively conducted. However, deep learning-based models are vulnerable to adversarial attacks, which increase a model's misclassification rate by applying adversarial perturbations. In particular, FGSM is recognized as one of the most effective attack methods because it is simple, fast, and has a considerable attack success rate. Meanwhile, as one of the efforts to visualize deep learning models, Grad-CAM enables visual explanation of convolutional neural networks. In this paper, I propose a method to generate adversarial examples with a high attack success rate by applying Grad-CAM to FGSM. The method uses Grad-CAM to choose the pixels most closely related to the label and adds perturbations to those pixels intensively. The proposed method achieves a higher success rate than FGSM at the same perturbation level for both targeted and untargeted examples. In addition, unlike FGSM, the noise distribution is not uniform, and when the success rate is increased by repeatedly applying noise, the attack succeeds with fewer iterations.
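
One way to combine the two components the abstract names, restricting an FGSM step to the pixels a Grad-CAM heatmap marks as most label-relevant, is sketched below. The heatmap `cam` is assumed to be precomputed and normalized to [0, 1], and `keep_ratio` and `epsilon` are hypothetical parameters; this is not the paper's exact procedure.

    import torch

    def masked_fgsm(model, x, y, cam, epsilon=8/255, keep_ratio=0.3):
        """FGSM restricted to pixels that Grad-CAM marks as label-relevant.

        cam: precomputed Grad-CAM heatmap of shape (H, W) in [0, 1]
             (its computation is outside this sketch); keep_ratio is a
             hypothetical fraction of pixels to perturb.
        """
        x = x.clone().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        thresh = torch.quantile(cam.flatten(), 1.0 - keep_ratio)
        mask = (cam >= thresh).float()               # 1 on important pixels
        return (x + epsilon * x.grad.sign() * mask).clamp(0, 1).detach()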

Counterfactual image generation by disentangling data attributes with deep generative models

  • Jieon Lim; Weonyoung Joo
    • Communications for Statistical Applications and Methods / v.30 no.6 / pp.589-603 / 2023
  • Deep generative models aim to infer the underlying true data distribution, which has led to great success in generating fake-but-realistic data. From this perspective, data attributes can be a crucial factor in the data generation process, since non-existent counterfactual samples can be generated by altering certain factors. For example, we can generate new portrait images by flipping the gender attribute or altering the hair color attribute. This paper proposes counterfactual disentangled variational autoencoder generative adversarial networks (CDVAE-GAN), specialized for attribute-level counterfactual data generation. The structure of the proposed CDVAE-GAN consists of variational autoencoders and generative adversarial networks. Specifically, we adopt a Gaussian variational autoencoder to extract low-dimensional disentangled data features and auxiliary Bernoulli latent variables to model the data attributes separately. We also utilize a generative adversarial network to generate data with high fidelity. By combining the variational autoencoder with the additional Bernoulli latent variables and the generative adversarial network, the proposed CDVAE-GAN can control the data attributes and thus produce counterfactual data. Our experimental results on the CelebA dataset qualitatively show that the samples generated by CDVAE-GAN are realistic. The quantitative results further support that the proposed model can produce data that deceive other machine learning classifiers when the data attributes are altered.
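
A toy encoder-decoder sketch of the attribute-flipping idea follows. It is not the paper's CDVAE-GAN (which also includes an adversarial discriminator and a proper relaxation of the Bernoulli variables): here the content latent z is Gaussian, the attribute code a is a sigmoid vector, and flipping one component of a at decode time produces a counterfactual sample. Layer sizes and the flattened input are arbitrary choices.

    import torch
    import torch.nn as nn

    class CounterfactualEncoderDecoder(nn.Module):
        """Minimal attribute-flipping autoencoder (illustrative only)."""
        def __init__(self, x_dim=784, z_dim=32, a_dim=4):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
            self.mu = nn.Linear(256, z_dim)
            self.logvar = nn.Linear(256, z_dim)
            self.attr_logits = nn.Linear(256, a_dim)      # attribute code head
            self.dec = nn.Sequential(nn.Linear(z_dim + a_dim, 256), nn.ReLU(),
                                     nn.Linear(256, x_dim), nn.Sigmoid())

        def forward(self, x, flip_attr=None):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            a = torch.sigmoid(self.attr_logits(h))                # soft attribute code
            if flip_attr is not None:                             # counterfactual: flip one attribute
                a = a.clone()
                a[:, flip_attr] = 1.0 - a[:, flip_attr]
            return self.dec(torch.cat([z, a], dim=1))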

Comparison of Adversarial Example Restoration Performance of VQ-VAE Model with or without Image Segmentation (이미지 분할 여부에 따른 VQ-VAE 모델의 적대적 예제 복원 성능 비교)

  • Tae-Wook Kim; Seung-Min Hyun; Ellen J. Hong
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.194-199 / 2022
  • Preprocessing for high-quality data is required for high accuracy and usability in various complex image data-based industries. However, when an adversarial example, created by combining noise with existing image or video data, is introduced, it can pose a great risk, and the damage must be restored to ensure reliability, security, and sound results. As a countermeasure, restoration has previously been performed using Defense-GAN, but this has disadvantages such as long training time and low restoration quality. To improve on this, this paper proposes a method that applies the VQ-VAE model to adversarial examples created with FGSM, with and without image segmentation. First, the generated examples are classified with a general classifier. Next, the unsegmented data are put into the pre-trained VQ-VAE model, restored, and then classified with the classifier. Then, the data divided into quadrants are put into a 4-split VQ-VAE model, the reconstructed fragments are combined, and the result is put into the classifier. Finally, the restored results and accuracies are compared, and the performance is analyzed according to whether or not the images are split.
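
The quadrant-splitting step described above can be expressed, in a hedged form, as the small helper below: it cuts each image into four pieces, passes each piece through a reconstruction model supplied by the caller (for instance a trained VQ-VAE), and stitches the outputs back together. `restore_fn` is a placeholder, and the even height/width assumption is a simplification.

    import torch

    def restore_by_quadrants(restore_fn, images):
        """Split each image into four quadrants, restore each piece with a
        reconstruction model (restore_fn, e.g. a trained VQ-VAE), and
        stitch the restored pieces back together."""
        n, c, h, w = images.shape
        hh, hw = h // 2, w // 2
        quads = [images[:, :, :hh, :hw], images[:, :, :hh, hw:],
                 images[:, :, hh:, :hw], images[:, :, hh:, hw:]]
        r = [restore_fn(q) for q in quads]              # restore each quadrant
        top = torch.cat([r[0], r[1]], dim=3)
        bottom = torch.cat([r[2], r[3]], dim=3)
        return torch.cat([top, bottom], dim=2)          # reassembled images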

A Research on Adversarial Example-based Passive Air Defense Method against Object Detectable AI Drone (객체인식 AI적용 드론에 대응할 수 있는 적대적 예제 기반 소극방공 기법 연구)

  • Simun Yuk; Hweerang Park; Taisuk Suh; Youngho Cho
    • Journal of Internet Computing and Services / v.24 no.6 / pp.119-125 / 2023
  • Through the Ukraine-Russia war, the military importance of drones is being reassessed, and North Korea completed actual verification through a drone provocation against South Korea in 2022. Furthermore, North Korea is actively integrating artificial intelligence (AI) technology into drones, highlighting the increasing threat they pose. In response, the Republic of Korea military has established the Drone Operations Command (DOC) and implemented various drone defense systems. However, there is a concern that capability-enhancement efforts are disproportionately focused on strike systems, making it challenging to effectively counter swarm drone attacks. In particular, Air Force bases located adjacent to urban areas face significant limitations in the use of traditional air defense weapons due to concerns about civilian casualties. Therefore, this study proposes a new passive air defense method that disrupts the object detection capability of AI models to enhance the survivability of friendly aircraft against AI-based swarm drones. Using laser-based adversarial examples, the study seeks to degrade the recognition accuracy of the object recognition AI installed on enemy drones. Experiments using synthetic images and precision-reduced models confirmed that the proposed method decreased the recognition accuracy of the object recognition AI from approximately 95% to around 0-15%, thereby validating its effectiveness.
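
For synthetic-image experiments of this kind, a laser-like artifact has to be composited onto test images before they are fed to a detector. The toy helper below draws a semi-transparent colored stripe on an RGB array; it is only an illustration of how such synthetic inputs might be built, not the paper's attack method, and every parameter (color, width, alpha) is a made-up example value.

    import numpy as np

    def add_laser_stripe(image, start, end, color=(0, 255, 0), width=3, alpha=0.8):
        """Composite a simple synthetic laser stripe onto an RGB image (H, W, 3)."""
        h, w, _ = image.shape
        out = image.astype(np.float32).copy()
        (x0, y0), (x1, y1) = start, end
        for t in np.linspace(0.0, 1.0, num=max(h, w) * 2):
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            y_lo, y_hi = max(0, y - width), min(h, y + width + 1)
            x_lo, x_hi = max(0, x - width), min(w, x + width + 1)
            # alpha-blend the stripe color over the underlying pixels
            out[y_lo:y_hi, x_lo:x_hi] = ((1 - alpha) * out[y_lo:y_hi, x_lo:x_hi]
                                         + alpha * np.array(color))
        return out.astype(np.uint8)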

A method based on Multi-Convolution layers Joint and Generative Adversarial Networks for Vehicle Detection

  • Han, Guang; Su, Jinpeng; Zhang, Chengwei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.1795-1811 / 2019
  • In order to achieve rapid and accurate detection of vehicle objects in complex traffic conditions, we propose a novel vehicle detection method. Firstly, more contextual and small-object vehicle information is obtained by our Joint Feature Network (JFN). Secondly, our Evolved Region Proposal Network (EPRN) generates initial anchor boxes by adding an improved version of the region proposal network, and at the same time filters out a large number of false vehicle boxes by soft Non-Maximum Suppression (NMS). Then, our Mask Network (MaskN) generates examples that include vehicle occlusion, and its generator and discriminator learn from each other to further improve the vehicle detection capability. Finally, the candidate vehicle detection boxes are refined into the final detection boxes by the Fine-Tuning Network (FTN). In the evaluation experiment on the DETRAC benchmark dataset, our method exceeds Faster-RCNN by 11.15%, YOLO by 11.88%, and EB by 1.64% in terms of mAP. Moreover, our algorithm ranks in the top two for the vehicle category on the KITTI dataset compared with MS-CNN, YOLO-v3, RefineNet, RetinaNet, Faster-RCNN, DSSD, and YOLO-v2.
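
Of the components listed, soft-NMS is a standard, self-contained algorithm, so a hedged sketch of its Gaussian variant is given below; the decay parameter sigma and the score threshold are generic defaults, not values from the paper, and the JFN/EPRN/MaskN/FTN networks themselves are not reproduced.

    import numpy as np

    def iou_one_to_many(box, boxes):
        """IoU between one box and an array of boxes, all as [x1, y1, x2, y2]."""
        x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_a = (box[2] - box[0]) * (box[3] - box[1])
        area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        return inter / (area_a + area_b - inter)

    def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
        """Gaussian soft-NMS: instead of discarding boxes that overlap a
        higher-scoring box, decay their scores by exp(-IoU^2 / sigma)."""
        boxes = boxes.astype(float)
        scores = scores.astype(float).copy()
        keep, idx = [], np.arange(len(scores))
        while idx.size > 0:
            i = idx[np.argmax(scores[idx])]         # current highest-scoring box
            keep.append(int(i))
            idx = idx[idx != i]
            if idx.size == 0:
                break
            overlaps = iou_one_to_many(boxes[i], boxes[idx])
            scores[idx] *= np.exp(-(overlaps ** 2) / sigma)  # Gaussian decay
            idx = idx[scores[idx] > score_thresh]            # drop near-zero scores
        return keep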

Influence Maximization Scheme against Various Social Adversaries

  • Noh, Giseop; Oh, Hayoung; Lee, Jaehoon
    • Journal of information and communication convergence engineering / v.16 no.4 / pp.213-220 / 2018
  • With the exponential development of social networks, their fundamental role as a medium to spread information, ideas, and influence has gained importance. This role can be expressed by the relationships and interactions within a group of individuals. Therefore, models and studies from various domains have addressed the influence maximization problem, which concerns the "word of mouth" effect for new products. For example, in reality, two or more related social groups, such as commercial companies and service providers, exist within the same market. Under such a scenario, these so-called social adversaries competitively try to expand their market influence against each other. To address the influence maximization (IM) problem between them, we propose a novel IM problem for social adversarial players (IM-SA), which exploits social network attributes to infer the unknown adversary's network configuration. We define a closed mathematical form to demonstrate that the proposed scheme can achieve a near-optimal solution for a player.
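
For readers unfamiliar with the underlying problem, the standard greedy baseline for influence maximization under the independent cascade model is sketched below; it is not the IM-SA scheme proposed in the paper, and the graph format, propagation probability, and Monte-Carlo trial count are illustrative assumptions.

    import random

    def simulate_ic(graph, seeds, p=0.1):
        """One independent-cascade simulation; graph is {node: [neighbors]}."""
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        return len(active)

    def greedy_im(graph, k, trials=200, p=0.1):
        """Greedy baseline: repeatedly add the node with the largest estimated
        marginal gain in expected spread (Monte-Carlo estimate)."""
        seeds = set()
        for _ in range(k):
            best, best_gain = None, -1.0
            for v in graph:
                if v in seeds:
                    continue
                gain = sum(simulate_ic(graph, seeds | {v}, p)
                           for _ in range(trials)) / trials
                if gain > best_gain:
                    best, best_gain = v, gain
            seeds.add(best)
        return seeds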

Artificial Intelligence-Based Video Content Generation (인공지능 기반 영상 콘텐츠 생성 기술 동향)

  • Son, J.W.; Han, M.H.; Kim, S.J.
    • Electronics and Telecommunications Trends / v.34 no.3 / pp.34-42 / 2019
  • This study introduces artificial intelligence (AI) techniques for video generation. For an effective illustration, techniques for video generation are classified as either semi-automatic or automatic. First, we discuss some recent achievements in semi-automatic video generation, and explain which types of AI techniques can be applied to produce films and improve film quality. Additionally, we provide an example of video content that has been generated by using AI techniques. Then, two automatic video-generation techniques are introduced with technical details. As there is currently no feasible automatic video-generation technique that can generate commercial videos, in this study, we explain their technical details, and suggest the future direction for researchers. Finally, we discuss several considerations for more practical automatic video-generation techniques.