• Title/Summary/Keyword: Adversarial Attack (적대적 공격)

Detecting Adversarial Examples Using Edge-based Classification

  • Jaesung Shim;Kyuri Jo
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.67-76 / 2023
  • Although deep learning models are achieving innovative results in computer vision, their vulnerability to adversarial examples remains an open problem. Adversarial examples are attacks that inject fine noise into images to induce misclassification, and they can pose a serious threat to the deployment of deep learning models in the real world. In this paper, we propose a model that detects adversarial examples using the difference in predictions between an edge-trained classification model and the underlying classification model. The simple step of extracting the edges of objects and reflecting them in training increases the robustness of the classification model, and detecting adversarial examples through the difference in predictions between the two models is economical and efficient. In our experiments, the baseline model showed accuracies of {49.9%, 29.84%, 18.46%, 4.95%, 3.36%} on adversarial examples (eps = {0.02, 0.05, 0.1, 0.2, 0.3}), whereas the Canny edge model showed accuracies of {82.58%, 65.96%, 46.71%, 24.94%, 13.41%}, with other edge models reaching a similar level, indicating that the edge models were more robust against adversarial examples. In addition, detection based on the difference in predictions between models achieved detection rates of {85.47%, 84.64%, 91.44%, 95.47%, 87.61%} for the respective epsilon values. We expect this study to contribute to improving the reliability of deep learning models in related research and application industries such as medicine, autonomous driving, security, and national defense.
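
As an illustration of the detection idea this abstract describes, the following is a minimal sketch, assuming a PyTorch image classifier: an input is flagged as adversarial when the base model and an edge-trained model disagree. The Canny thresholds, helper names, and decision rule are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch of edge-based adversarial-example detection:
# flag an input as adversarial when the base model and an
# edge-trained model disagree on the predicted class.
import cv2
import numpy as np
import torch

def canny_edges(image_uint8: np.ndarray) -> np.ndarray:
    """Extract a Canny edge map and stack it to 3 channels."""
    gray = cv2.cvtColor(image_uint8, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200)  # thresholds are illustrative
    return np.repeat(edges[..., None], 3, axis=-1)

def is_adversarial(base_model, edge_model, image_uint8, device="cpu"):
    """Detect by disagreement between the base and edge-trained models."""
    def to_tensor(a):
        return torch.from_numpy(a).permute(2, 0, 1).float().div(255).unsqueeze(0).to(device)
    with torch.no_grad():
        base_pred = base_model(to_tensor(image_uint8)).argmax(dim=1)
        edge_pred = edge_model(to_tensor(canny_edges(image_uint8))).argmax(dim=1)
    return bool(base_pred.item() != edge_pred.item())
```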

Secure Key Pre-distribution Scheme for Wireless Sensor Network (무선 센서 네트워크 환경에서의 안전한 키 분배 기법)

  • Jang Ji-Yong;Kim Hyoung-Jin;Kwon Tae-Kyoung;Song Joo-Seok
    • Proceedings of the Korea Institutes of Information Security and Cryptology Conference / 2006.06a / pp.447-450 / 2006
  • Sensor networks are expected to be used in many areas in the future. However, sensor nodes may be placed in hostile, insecure environments where a malicious attacker can collect their communications, attack them, or tamper with them. Moreover, because sensor networks consist of nodes that are severely constrained in memory, power, and computational capability, existing key management schemes are difficult to apply, and a dedicated key management scheme is required. In this paper, we survey research on key management in sensor networks and propose a more secure key distribution scheme.

Trusted Networking Technology (고신뢰 네트워킹 기술)

  • Yae, B.H.;Park, J.D.
    • Electronics and Telecommunications Trends / v.30 no.1 / pp.77-86 / 2015
  • With the recent surge in cyber terrorism, security appliances alone have proven insufficient, and the role of the network itself in cyberspace has become more important. Accordingly, countries around the world, seeking to preempt national-level cyber security threats such as cyber attacks and information leakage, have become reluctant to purchase network equipment produced by hostile or competing states. To actively counter these national-level cyber security threats, development of trusted networking technology is under way. This article reviews domestic and international technology trends in the core technologies of trusted networking: terminal management and security technology, WiFi and AP technology, network technology, and network control and management technology.

Random Noise Addition for Detecting Adversarially Generated Image Dataset (임의의 잡음 신호 추가를 활용한 적대적으로 생성된 이미지 데이터셋 탐지 방안에 대한 연구)

  • Hwang, Jeonghwan;Yoon, Ji Won
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.6 / pp.629-635 / 2019
  • In deep learning models, differentiation is implemented by error back-propagation, which enables the model to learn from errors and update its parameters. Taking advantage of enormous improvements in computing power, it can find the global (or local) optima of the parameters even in complex models. However, deliberately generated data points can 'fool' models and degrade performance measures such as prediction accuracy. Not only do these adversarial examples reduce performance, they are also not easily detectable by the human eye. In this work, we propose a method to detect adversarial datasets by adding random noise. We exploit the fact that when random noise is added, the prediction accuracy of a non-adversarial dataset remains almost unchanged, whereas that of an adversarial dataset changes. In a simulation experiment, we set the attack method (FGSM, Saliency Map) and the noise level (0-19, with a maximum pixel value of 255) as independent variables, and the difference in prediction accuracy after noise was added as the dependent variable. We succeeded in extracting a threshold that separates non-adversarial and adversarial datasets, and we detected the adversarial dataset using this threshold.
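
A minimal sketch of the detection rule this abstract describes, assuming a PyTorch classifier and images scaled to [0, 1]: compare accuracy before and after adding bounded random noise, and flag the dataset when the shift exceeds a threshold. The threshold value and helper names are assumptions; the paper derives its threshold empirically.

```python
# Hypothetical sketch: flag a dataset as adversarial when adding
# small random noise shifts the model's accuracy beyond a threshold.
import torch

def accuracy(model, images, labels):
    with torch.no_grad():
        return (model(images).argmax(dim=1) == labels).float().mean().item()

def dataset_is_adversarial(model, images, labels, noise_level=10, threshold=0.05):
    """images: float tensor in [0, 1]; noise_level on a 0-255 pixel scale."""
    clean_acc = accuracy(model, images, labels)
    noise = torch.randint(-noise_level, noise_level + 1, images.shape) / 255.0
    noisy_acc = accuracy(model, (images + noise).clamp(0.0, 1.0), labels)
    # Non-adversarial data: accuracy barely moves. Adversarial data: it shifts.
    return abs(clean_acc - noisy_acc) > threshold
```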

Efficient Poisoning Attack Defense Techniques Based on Data Augmentation (데이터 증강 기반의 효율적인 포이즈닝 공격 방어 기법)

  • So-Eun Jeon;Ji-Won Ock;Min-Jeong Kim;Sa-Ra Hong;Sae-Rom Park;Il-Gu Lee
    • Convergence Security Journal / v.22 no.3 / pp.25-32 / 2022
  • Recently, the image processing industry has been invigorated as deep-learning-based technology has been introduced in image recognition and detection. With the development of deep learning technology, vulnerabilities of learning models to adversarial attacks continue to be reported. However, studies on countermeasures against poisoning attacks, which inject malicious data during training, are insufficient. Conventional countermeasures against poisoning attacks are limited in that the training data must be inspected each time in a separate detection and removal step. Therefore, in this paper, we propose a technique that reduces the attack success rate by applying modifications to the training data and inference data, without a separate detection and removal process for the poisoned data. The one-shot kill poison attack, a clean-label poisoning attack proposed in previous studies, was used as the attack model. Attack performance was evaluated for both a general attacker and an intelligent attacker, according to the attacker's strategy. Experimental results show that, when the proposed defense mechanism is applied, the attack success rate can be reduced by up to 65% compared to the conventional method.
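
A minimal sketch of the defense idea, assuming a PyTorch training loop: random augmentation is applied to both training and inference inputs so that a clean-label poison's carefully placed features no longer align. The specific transforms and magnitudes are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch: blunt clean-label poisoning by randomly
# transforming both training and inference inputs.
import torch
from torchvision import transforms

# The specific transforms and magnitudes are illustrative assumptions.
augment = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

def train_step(model, optimizer, loss_fn, images, labels):
    optimizer.zero_grad()
    loss = loss_fn(model(augment(images)), labels)  # modified training data
    loss.backward()
    optimizer.step()
    return loss.item()

def predict(model, images):
    with torch.no_grad():
        return model(augment(images)).argmax(dim=1)  # modified inference data
```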

A Study on Improving Data Poisoning Attack Detection against Network Data Analytics Function in 5G Mobile Edge Computing (5G 모바일 에지 컴퓨팅에서 빅데이터 분석 기능에 대한 데이터 오염 공격 탐지 성능 향상을 위한 연구)

  • Ji-won Ock;Hyeon No;Yeon-sup Lim;Seong-min Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.3 / pp.549-559 / 2023
  • As mobile edge computing (MEC) gains attention as a core technology of 5G networks, edge AI based on mobile user data is being used in various fields of the 5G network environment. However, as in traditional AI security, there is a possibility of adversarial interference with the standard 5G network functions inside the core network that carry out the core functions of edge AI. In addition, research on data poisoning attacks that can occur in the MEC environment of the standalone mode defined in the 3GPP 5G standards is currently insufficient compared to existing LTE networks. In this study, we explore a threat model for the MEC environment using NWDAF, the network function responsible for the core function of edge AI in 5G, and, as a proof of concept, propose a feature selection method that improves the performance of detecting data poisoning attacks against a leaf NWDAF. With the proposed methodology, we achieved a maximum detection rate of 94.9% for Slowloris-based data poisoning attacks on the NWDAF.
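
A minimal sketch of the detection step described here, using scikit-learn: select the most informative traffic features, then train a classifier to flag poisoned records. The mutual-information criterion, the value of k, and the random forest are assumptions standing in for the paper's feature selection method and detector.

```python
# Hypothetical sketch: feature selection + classifier for detecting
# poisoned traffic records fed to an analytics function.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline

def build_poisoning_detector(k_features: int = 10):
    """k and the estimators are illustrative, not the paper's choices."""
    return make_pipeline(
        SelectKBest(score_func=mutual_info_classif, k=k_features),
        RandomForestClassifier(n_estimators=100, random_state=0),
    )

# X: per-record traffic features; y: 1 = poisoned, 0 = benign.
# detector = build_poisoning_detector().fit(X_train, y_train)
# flags = detector.predict(X_test)
```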

A Research on Adversarial Example-based Passive Air Defense Method against Object Detectable AI Drone (객체인식 AI적용 드론에 대응할 수 있는 적대적 예제 기반 소극방공 기법 연구)

  • Simun Yuk;Hweerang Park;Taisuk Suh;Youngho Cho
    • Journal of Internet Computing and Services / v.24 no.6 / pp.119-125 / 2023
  • The Ukraine-Russia war has led to a reassessment of the military importance of drones, and North Korea completed a real-world demonstration with its drone provocation against South Korea in 2022. Furthermore, North Korea is actively integrating artificial intelligence (AI) technology into drones, which heightens the threat they pose. In response, the Republic of Korea military has established the Drone Operations Command (DOC) and deployed various drone defense systems. However, capability-building efforts are disproportionately focused on strike systems, making it difficult to effectively counter swarm drone attacks. In particular, air force bases adjacent to urban areas face significant limitations on the use of traditional air defense weapons because of the risk of civilian casualties. Therefore, this study proposes a new passive air defense method that disrupts the object detection capability of AI models in order to enhance the survivability of friendly aircraft against AI-based swarm drones. Using laser-based adversarial examples, we seek to degrade the recognition accuracy of the object recognition AI mounted on enemy drones. Experiments using synthetic images and precision-reduced models confirmed that the proposed method reduced the recognition accuracy of the object recognition AI from approximately 95% to around 0-15%, validating its effectiveness.
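
A minimal sketch of the synthetic-image side of this experiment: blend a laser-like stripe into an image and compare detector confidence before and after. The stripe geometry, color, and blending factor are illustrative assumptions, not the authors' laser model.

```python
# Hypothetical sketch: paint a laser-like stripe over an image and
# compare detector confidence before and after the overlay.
import numpy as np

def add_laser_stripe(image: np.ndarray, x0: int, width: int = 8,
                     color=(0, 255, 0), alpha: float = 0.7) -> np.ndarray:
    """Blend a vertical stripe (a crude laser model) into an RGB uint8 image."""
    attacked = image.astype(np.float32).copy()
    stripe = np.array(color, dtype=np.float32)
    attacked[:, x0:x0 + width] = (1 - alpha) * attacked[:, x0:x0 + width] + alpha * stripe
    return attacked.clip(0, 255).astype(np.uint8)

# confidence_clean = detector(image)                         # ~0.95 in the paper
# confidence_attacked = detector(add_laser_stripe(image, x0=100))
```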

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1099-1110 / 2023
  • The recognition systems of autonomous driving and robot navigation fuse multiple sensors before performing vision tasks such as object recognition, tracking, and lane detection in order to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently active. However, deep learning models are vulnerable to adversarial attacks that modulate the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on suppressing obstacle detection by lowering the confidence score of the object recognition model, but they are limited in that the attack works only on the targeted model. In attacks on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered. Moreover, an attack on the LiDAR point cloud, which is difficult to judge visually, makes it hard to even determine that an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input LiDAR points. In attack performance experiments across scaling sizes, the attack induced fusion errors of more than 77% on average.
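
A minimal sketch of the scaling attack as described, assuming a NumPy point cloud and an LCCNet-style model callable: the LiDAR coordinates are uniformly scaled before being fed to the calibration model. The scaling factor and the `lccnet` interface are illustrative assumptions.

```python
# Hypothetical sketch of the scaling attack: uniformly scale the
# LiDAR point coordinates before they reach the camera-LiDAR
# calibration model, inducing a misalignment it cannot explain.
import numpy as np

def scale_point_cloud(points: np.ndarray, factor: float) -> np.ndarray:
    """points: (N, 3) or (N, 4) array of x, y, z[, intensity]."""
    attacked = points.copy()
    attacked[:, :3] *= factor  # scale geometry only, keep intensity
    return attacked

# attacked_lidar = scale_point_cloud(lidar_points, factor=1.2)  # factor is illustrative
# extrinsics = lccnet(camera_image, attacked_lidar)  # calibration now misestimated
```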

Trends in Blockchain-based Privacy Preserving Technology for Artificial Neural Networks (인공신경망에서의 블록체인 기반 개인정보보호 기술 동향)

  • Kang, Yea-Jun;Kim, Hyun-Ji;Lim, Se-Jin;Kim, Won-Woong;Seo, Hwa-Jeong
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.564-567 / 2022
  • As deep learning has recently been applied in various fields, problems such as centralized servers, adversarial attacks, and data scarcity and monopolization have arisen. In federated learning, serious problems can also occur if a client supplies incorrect gradients to the server or if the server behaves maliciously. To address these security vulnerabilities, techniques that combine blockchain with deep learning are being studied: the centralized server is decentralized, and trustworthy data is collected by giving incentives to each participating node. This paper surveys how blockchain has been applied to solve these problems in deep learning.

A Study on Effective Adversarial Attack Creation for Robustness Improvement of AI Models (AI 모델의 Robustness 향상을 위한 효율적인 Adversarial Attack 생성 방안 연구)

  • Si-on Jeong;Tae-hyun Han;Seung-bum Lim;Tae-jin Lee
    • Journal of Internet Computing and Services / v.24 no.4 / pp.25-36 / 2023
  • Today, as artificial intelligence (AI) technology is introduced in various fields, including security, its development is accelerating. However, alongside the development of AI technology, attack techniques that cleverly evade malicious behavior detection are also evolving. Adversarial attacks have emerged that induce misclassification and reduce reliability through fine adjustments to the input values during the classification process of AI models. Future attacks are expected to be not brand-new attacks created by an attacker but slight modifications of existing attacks, such as adversarial attacks, that evade the detection system; developing a robust model that can respond to these malware variants is therefore necessary. In this paper, we propose two efficient techniques for generating adversarial attacks to improve the robustness of AI models: an XAI-based attack that uses an XAI technique, and a reference-based attack that searches the model's decision boundary. We then built a classification model on a malware dataset to compare performance with the PGD attack, one of the existing adversarial attacks. In terms of generation speed, the XAI-based attack and the reference-based attack take 0.35 and 0.47 seconds respectively, compared to 20 minutes for the existing PGD attack. The reference-based attack also achieves a generation success rate of 97.7%, higher than the 75.5% of the existing PGD attack. The proposed techniques therefore enable more efficient adversarial attack generation and are expected to contribute to future research on building robust AI models.
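
A minimal sketch of an XAI-guided perturbation in the spirit of this abstract, assuming a differentiable PyTorch classifier over malware feature vectors: rank features by a saliency score and perturb only the top-k. The gradient-times-input attribution and the k/eps budget are assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of an XAI-guided adversarial perturbation:
# rank input features by an attribution score and perturb only the
# top-k most influential ones.
import torch

def xai_guided_attack(model, x, label, k=10, eps=0.1):
    """x: (1, n_features) float tensor; label: (1,) long tensor."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), label)
    loss.backward()
    attribution = (x.grad * x.detach()).abs().squeeze(0)  # gradient-times-input saliency
    top_k = attribution.topk(k).indices                   # most influential features
    x_adv = x.detach().clone()
    x_adv[0, top_k] += eps * x.grad[0, top_k].sign()      # one FGSM-like step on top-k
    return x_adv
```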