• Title/Summary/Keyword: 적대적 공격 (adversarial attack)

Search results: 78

StarGAN-Based Detection and Purification Studies to Defend against Adversarial Attacks (적대적 공격을 방어하기 위한 StarGAN 기반의 탐지 및 정화 연구)

  • Sungjune Park;Gwonsang Ryu;Daeseon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.3
    • /
    • pp.449-458
    • /
    • 2023
  • Artificial Intelligence provides convenience in various fields using big data and deep learning technologies. However, deep learning technology is highly vulnerable to adversarial examples, which can cause misclassification by classification models. This study proposes a method to detect and purify various adversarial attacks using StarGAN. The proposed method trains a StarGAN model with an added Categorical Entropy loss on adversarial examples generated by various attack methods, enabling the Discriminator to detect adversarial examples and the Generator to purify them. Experimental results on the CIFAR-10 dataset showed an average detection performance of approximately 68.77%, an average purification performance of approximately 72.20%, and an average defense performance of approximately 93.11%, derived from the restoration and detection performance.
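The detect-then-purify pipeline the abstract describes can be illustrated with a minimal PyTorch sketch. The module definitions and the `defend` helper below are hypothetical stand-ins, not the paper's StarGAN architecture or its Categorical Entropy loss: a discriminator scores inputs as adversarial, and a generator restores flagged inputs before they reach the classifier.

```python
import torch
import torch.nn as nn

# Hypothetical pretrained components standing in for the paper's StarGAN:
# the Discriminator scores how likely an input is adversarial, and the
# Generator maps a (possibly adversarial) image back toward the clean domain.
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(128 * 8 * 8, 1))  # CIFAR-10: 32x32 -> 8x8

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # probability the input is adversarial

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):
        return self.net(x)  # "purified" image

def defend(x, disc, gen, classifier, threshold=0.5):
    """Flag inputs the discriminator deems adversarial, purify them with
    the generator before classification, and pass clean inputs through."""
    with torch.no_grad():
        is_adv = disc(x) > threshold
        purified = torch.where(is_adv.view(-1, 1, 1, 1), gen(x), x)
        return classifier(purified), is_adv
```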

The Relationship between Rejection Sensitivity and Reactive Aggression in University Students: Mediating Effects of Self-Concept Clarity and Hostile Attribution Bias (대학생의 거부민감성과 반응적 공격성 간의 관계: 자기개념 명확성과 적대적 귀인편향의 매개효과)

  • Geonhee Lee;Minkyu Rhee
    • Korean Journal of Culture and Social Issue
    • /
    • v.29 no.4
    • /
    • pp.477-496
    • /
    • 2023
  • The purpose of this study is to examine the relationship between rejection sensitivity and reactive aggression among college students and to determine the mediating effects of self-concept clarity and hostile attribution bias on that relationship. A self-report questionnaire was administered online to gather data from university students aged 18 years and older, and a total of 250 participants were included in the analysis. SPSS 27.0 was used to compute descriptive statistics and to conduct frequency, reliability, and correlation analyses. In addition, model fit was checked using Amos 21.0, and the significance of the indirect effects was verified with the bootstrapping method. The results of this study are as follows. First, rejection sensitivity positively affects reactive aggression through self-concept clarity. Second, rejection sensitivity increases hostile attribution bias, leading to an increase in reactive aggression. Third, rejection sensitivity positively influences reactive aggression indirectly by sequentially affecting self-concept clarity and hostile attribution bias. These findings identify psychological factors that affect reactive aggression in college students, suggesting the importance of psychological interventions for reactive aggression associated with social problems such as crime and providing a foundation for both treatment and prevention. Finally, implications for further research and limitations of this study are discussed.

Secure Self-Driving Car System Resistant to the Adversarial Evasion Attacks (적대적 회피 공격에 대응하는 안전한 자율주행 자동차 시스템)

  • Seungyeol Lee;Hyunro Lee;Jaecheol Ha
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.907-917
    • /
    • 2023
  • Recently, self-driving cars have applied deep learning technology to advanced driver assistance systems, providing convenience to drivers, but deep learning technology has been shown to be vulnerable to adversarial evasion attacks. In this paper, we performed five adversarial evasion attacks, including MI-FGSM (Momentum Iterative-Fast Gradient Sign Method), targeting the object detection algorithm YOLOv5 (You Only Look Once), and measured object detection performance in terms of mAP (mean Average Precision). In particular, we present a method that applies morphology operations so that YOLO can detect objects normally by removing noise and extracting boundaries. Experimental analysis showed that when an adversarial attack was performed, YOLO's mAP dropped by at least 7.9%. With our proposed method applied, YOLO can detect objects with up to 87.3% mAP.
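MI-FGSM, the attack named in the abstract, is the standard momentum-iterative variant of FGSM. The sketch below implements it against a generic image classifier with a cross-entropy loss; attacking YOLOv5 as in the paper would instead back-propagate through the detector's own loss, and the epsilon and step values here are illustrative assumptions.

```python
import torch

def mi_fgsm(model, x, y, eps=8/255, steps=10, mu=1.0):
    """Momentum Iterative FGSM: accumulate a momentum term on the
    L1-normalized gradient and step in its sign direction, keeping the
    perturbation inside an L-infinity ball of radius eps around x."""
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # normalize the gradient per sample, then accumulate momentum
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # project back into the eps-ball and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```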

Spoofing Attacks with Restricted Perturbation Regions against Deep Learning-Based Face Recognition Models (딥러닝 기반 얼굴인식 모델에 대한 변조 영역 제한 기만공격)

  • Ryu, Gwonsang;Park, Hosung;Choi, Daeseon
    • Review of KIISC
    • /
    • v.29 no.3
    • /
    • pp.44-50
    • /
    • 2019
  • Recently, deep learning technology has shown remarkable performance in various fields and is being applied to many services. Face recognition has also reached a high level of accuracy by incorporating deep learning. However, deep learning technology is vulnerable to adversarial examples, which minimally perturb an original image to cause misrecognition by a deep learning model. Accordingly, this paper performs spoofing attack experiments against a deep learning-based face recognition system using adversarial examples and analyzes spoofing performance according to perturbation regions chosen to match areas of a real face that can be disguised with makeup.
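The region-restricted idea can be sketched as a gradient-sign perturbation that is masked to areas a real face could plausibly be altered with makeup. The one-step FGSM formulation, the mask interface, and the step size below are illustrative assumptions, not the paper's exact attack.

```python
import torch
import torch.nn.functional as F

def masked_fgsm(model, x, y_true, mask, eps=0.05):
    """One-step FGSM where the perturbation is zeroed outside `mask`,
    a binary tensor marking image regions (e.g., cheeks or around the
    eyes) that could be altered physically. The mask and eps are
    illustrative, not the paper's settings."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_true)
    grad, = torch.autograd.grad(loss, x)
    x_adv = x + eps * grad.sign() * mask  # perturb only inside the allowed region
    return x_adv.clamp(0, 1).detach()
```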

Adversarial Example Detection Based on Symbolic Representation of Image (이미지의 Symbolic Representation 기반 적대적 예제 탐지 방법)

  • Park, Sohee;Kim, Seungjoo;Yoon, Hayeon;Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.5
    • /
    • pp.975-986
    • /
    • 2022
  • Deep learning is attracting great attention and shows excellent performance in image processing, but it is vulnerable to adversarial attacks that cause a model to misclassify through perturbations of the input data. Adversarial examples generated by adversarial attacks are minimally perturbed, making them difficult to identify, so the visual features of the images are generally unchanged. Unlike deep learning models, people are not fooled by adversarial examples, because they classify images based on such visual features. This paper proposes an adversarial attack detection method using Symbolic Representation, which captures visual and symbolic features such as the color and shape of an image. We detect adversarial examples by comparing the Symbolic Representation converted from the classification result for an input image with the Symbolic Representation extracted from the input image itself. Measuring performance on adversarial examples generated by various attack methods, detection rates differed depending on the attack target and method, reaching up to 99.02% for a specific targeted attack.
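The detection idea, comparing a symbolic description implied by the model's prediction with symbolic features extracted directly from the pixels, can be illustrated with a toy example. The color-only lookup table and the `dominant_color` feature below are hypothetical simplifications of the paper's richer Symbolic Representation.

```python
import numpy as np

# Hypothetical symbolic lookup: the dominant color we would expect for each
# predicted class. The paper's actual representation (colors, shapes, etc.)
# is richer; this only illustrates the comparison step.
EXPECTED_COLOR = {"stop_sign": "red", "grass": "green", "sky": "blue"}

def dominant_color(image):
    """Crude symbolic feature: the RGB channel with the largest mean,
    for an image given as an HxWx3 array in [0, 1]."""
    means = image.reshape(-1, 3).mean(axis=0)
    return ["red", "green", "blue"][int(np.argmax(means))]

def is_adversarial(image, predicted_label):
    """Flag the input when the symbolic feature extracted from the pixels
    disagrees with the symbolic representation implied by the prediction."""
    expected = EXPECTED_COLOR.get(predicted_label)
    return expected is not None and dominant_color(image) != expected
```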

Study of Adversarial Attack and Defense Deep Learning Model for Autonomous Driving (자율주행을 위한 적대적 공격 및 방어 딥러닝 모델 연구)

  • Kim, Chae-Hyeon;Lee, Jin-Kyu;Jung, Eun;Jung, Jae-Ho;Lee, Hyun-Jung;Lee, Gyu-Young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.803-805
    • /
    • 2022
  • As the era of autonomous driving arrives, the risk of adversarial attacks on deep learning models is increasing as well. If a camera-based autonomous vehicle is attacked, misclassification of pedestrians or traffic signs can lead to serious accidents, so research on defense and security techniques against adversarial attacks in autonomous driving systems is essential. In this paper, we develop and propose various attack and defense techniques using the GTSRB traffic sign dataset. By comparing performance in terms of time and accuracy, we explore the model best suited to autonomous driving and further suggest directions for developing these models toward fully autonomous driving.
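One common attack/defense pairing for a traffic-sign classifier of this kind is single-step FGSM combined with adversarial training. The sketch below is a generic PyTorch illustration under assumed hyperparameters, not the specific techniques developed in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step FGSM, used both as the attack and to generate
    adversarial training examples."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=4/255):
    """One adversarial-training step: mix clean and FGSM-perturbed sign
    images so the classifier stays accurate under attack. The 50/50 mix
    and eps are illustrative assumptions."""
    x_adv = fgsm(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```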

Research Trends of Adversarial Attack Techniques in Text (텍스트 분야 적대적 공격 기법 연구 동향)

  • Kim, Bo-Geum;Kang, Hyo-Eun;Kim, Yongsu;Kim, Ho-Won
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.420-422
    • /
    • 2022
  • As artificial intelligence technology is applied to various fields across everyday life, such as document classification, face recognition, and autonomous driving, techniques for identifying and preparing for vulnerabilities of AI models in advance are becoming increasingly important. In the image domain, research on adversarial attacks, which fool neural networks by adding small perturbations to the input data, has been active, but in the text domain the discrete nature of text data makes such research difficult. This paper analyzes adversarial attack techniques against AI technology in the text domain and examines the need for further research.
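The discreteness issue the abstract mentions is easiest to see in a toy word-substitution attack: instead of adding a continuous perturbation, the attacker swaps tokens and keeps swaps that lower the victim model's confidence. The synonym table and `score_fn` interface below are hypothetical; real text attacks add embedding-based synonym search and semantic constraints.

```python
# Illustrative synonym table; a real attack would derive candidates from
# word embeddings or a thesaurus and filter them for semantic similarity.
SYNONYMS = {"good": ["decent", "fine"], "bad": ["poor", "awful"]}

def greedy_substitution_attack(score_fn, words, true_label):
    """Greedily swap one word at a time, keeping a swap only if it lowers
    score_fn(words, label), the victim model's confidence in `true_label`."""
    best = list(words)
    for i, w in enumerate(words):
        for cand in SYNONYMS.get(w, []):
            trial = best[:i] + [cand] + best[i + 1:]
            if score_fn(trial, true_label) < score_fn(best, true_label):
                best = trial
    return best
```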

Performance Comparison of Neural Network Models for Adversarial Attacks by Autonomous Ships (자율주행 선박의 적대적 공격에 대한 신경망 모델의 성능 비교)

  • Tae-Hoon Her;Ju-Hyeong Kim;Na-Hyun Kim;So-Yeon Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.1106-1107
    • /
    • 2023
  • As the technology of autonomous ships advances, the risk of adversarial attacks is emerging. To address this, this study systematically compared and analyzed the performance of various neural network models in detecting adversarial attacks. Experiments were conducted with CNN, GRU, LSTM, and VGG16 models, and among these the VGG16 model showed the highest detection performance. Through these results, this study aims to provide a reliable direction for building security models applicable to autonomous ships.
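Comparing candidate architectures on the same detection task typically comes down to a shared evaluation loop like the sketch below, where each detector classifies inputs as clean (0) or adversarial (1). The model names in the commented usage are placeholders for the trained CNN, GRU, LSTM, and VGG16 detectors, not objects defined by the paper.

```python
import torch

def detection_accuracy(model, loader, device="cpu"):
    """Fraction of inputs (labeled 0 = clean, 1 = adversarial) that the
    detector classifies correctly; used to rank candidate architectures."""
    model.eval().to(device)
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / total

# Hypothetical usage with already-trained detectors and a test DataLoader:
# candidates = {"CNN": cnn, "GRU": gru, "LSTM": lstm, "VGG16": vgg16}
# scores = {name: detection_accuracy(m, test_loader) for name, m in candidates.items()}
```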