• Title/Summary/Keyword: Black-box attack

Model Type Inference Attack Using Output of Black-Box AI Model (블랙 박스 모델의 출력값을 이용한 AI 모델 종류 추론 공격)

  • An, Yoonsoo; Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.5, pp.817-826, 2022
  • AI technology has been successfully introduced in many fields, and models deployed as a service are typically served in a black-box environment that does not expose the model's internals, in order to protect intellectual property and training data. In a black-box environment, attackers attempt to steal the data or parameters used during training by exploiting the model's output. Noting that no prior attack infers the architecture type of a deep learning model, this paper proposes a method of inferring the type of the target model, and thus the composition of its layers, directly from its output. With ResNet, VGGNet, AlexNet, and a simple convolutional neural network trained on the MNIST dataset, we show that the model type can be inferred from the output values in both gray-box and black-box environments. Furthermore, by additionally training on the magnitude-relationship feature proposed in this paper, we inferred the model type with approximately 83% accuracy in the black-box environment; these results show that the model type can be inferred even when attackers are given only partial information rather than raw probability vectors.
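
As a rough illustration of how such an attack can be mounted, the sketch below trains a meta-classifier on output-derived features collected from locally trained surrogate models. The function names, the use of scikit-learn, and the one-surrogate-per-architecture setup are illustrative assumptions, not the paper's implementation; the rank-ordering feature stands in for the paper's magnitude-relationship feature.

```python
# A minimal sketch of the meta-classification idea, not the authors' code.
# Assumptions: `query_model(model, xs)` returns softmax vectors for a batch of
# probe inputs, and we hold local surrogate models of each candidate type.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_feature(probs):
    # Magnitude-relationship feature: the rank ordering of class
    # probabilities, which survives even when raw values are hidden.
    return np.argsort(np.argsort(probs, axis=1), axis=1)

def build_dataset(surrogates, probe_inputs, query_model):
    # In practice one would train many instances per architecture;
    # a single surrogate per type keeps the sketch short.
    X, y = [], []
    for label, model in enumerate(surrogates):      # one label per architecture
        probs = query_model(model, probe_inputs)    # (n_probes, n_classes)
        X.append(rank_feature(probs).ravel())
        y.append(label)
    return np.stack(X), np.array(y)

# meta = RandomForestClassifier().fit(X, y)
# meta.predict(rank_feature(query_model(target, probe_inputs)).ravel()[None])
```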

Query-Efficient Black-Box Adversarial Attack Methods on Face Recognition Model (얼굴 인식 모델에 대한 질의 효율적인 블랙박스 적대적 공격 방법)

  • Seo, Seong-gwan; Son, Baehoon; Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.6, pp.1081-1090, 2022
  • Face recognition models are used for identity recognition on smartphones, providing convenience to many users. As a result, security review of DNN models is becoming important, and adversarial attacks are a well-known vulnerability of DNN models. Adversarial attacks have evolved into decision-based techniques that perform the attack using only the recognition results of the deep learning model. However, the existing decision-based attack technique [14] has the problem of requiring a large number of queries when generating adversarial examples; in particular, approximating the gradient consumes many queries. This paper therefore proposes a method of generating adversarial examples using orthogonal-space sampling and dimensionality-reduction sampling to avoid the queries wasted on gradient approximation in the existing decision-based attack [14]. Experiments show that, compared to the existing technique [14], our method reduces the perturbation size of adversarial examples by a factor of about 2.4 and increases the attack success rate by 14%, demonstrating that the proposed adversarial example generation method has superior attack performance.
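
To make the query cost concrete, the sketch below shows a generic decision-based gradient estimate in the HopSkipJump style, where each random direction costs one black-box query; the optional low-dimensional sampling branch illustrates the flavor of dimensionality-reduction sampling, not the paper's exact sampler.

```python
# A generic decision-based gradient estimate, sketched to show where query
# cost arises; this is not the paper's exact method. Assumptions:
# `is_adversarial(x)` performs one black-box query and returns a boolean;
# images are flat numpy vectors.
import numpy as np

def estimate_gradient(x, is_adversarial, n_queries=100, delta=0.1, low_dim=None):
    d = x.size
    grad = np.zeros(d)
    for _ in range(n_queries):
        if low_dim:  # dimensionality-reduction sampling: perturb in a
            u = np.zeros(d)  # low-dimensional subspace, then embed in R^d
            idx = np.random.choice(d, low_dim, replace=False)
            u[idx] = np.random.randn(low_dim)
        else:
            u = np.random.randn(d)
        u /= np.linalg.norm(u)
        sign = 1.0 if is_adversarial(x + delta * u) else -1.0
        grad += sign * u              # one query per sampled direction
    return grad / n_queries
```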

Evaluating the web-application resiliency to business-layer DoS attacks

  • Alidoosti, Mitra; Nowroozi, Alireza; Nickabadi, Ahmad
    • ETRI Journal, v.42 no.3, pp.433-445, 2020
  • A denial-of-service (DoS) attack is a serious attack that targets web applications. According to Imperva, application-layer DoS attacks comprise 60% of all DoS attacks. Attacks have now grown into application- and business-layer attacks, and vulnerability-analysis tools are unable to detect business-layer (logic-related) vulnerabilities. This paper presents the business-layer dynamic application security tester (BLDAST), a dynamic, black-box vulnerability-analysis approach that identifies the business-logic vulnerabilities of a web application against DoS attacks. BLDAST evaluates the resiliency of web applications by detecting vulnerable business processes. An evaluation of six widely used web applications shows that BLDAST detects the vulnerabilities with 100% accuracy. BLDAST detected 30 vulnerabilities in the selected web applications; more than half of them were new and previously unknown. Furthermore, the precision of BLDAST in detecting business processes is shown to be 94%, while the generated user-navigation graph is improved by 62.8% thanks to the detection of similar web pages.
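
One way a black-box resiliency tester can flag DoS-prone business processes is to compare per-process latency under light concurrency, as in the sketch below; the `processes` structure, the threshold, and the timing heuristic are illustrative assumptions, and BLDAST itself is considerably more sophisticated.

```python
# A minimal sketch of a latency-based probe for resource-heavy business
# processes, not BLDAST's algorithm. Assumptions: `processes` maps a process
# name to a list of (method, url, data) steps recorded by crawling the target.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

def time_process(steps):
    s = requests.Session()
    start = time.perf_counter()
    for method, url, data in steps:
        s.request(method, url, data=data, timeout=10)
    return time.perf_counter() - start

def flag_costly_processes(processes, workers=5, threshold=3.0):
    flagged = {}
    for name, steps in processes.items():
        baseline = time_process(steps)                # single client
        with ThreadPoolExecutor(workers) as pool:     # light concurrent load
            loaded = max(pool.map(lambda _: time_process(steps), range(workers)))
        if loaded > threshold * baseline:             # latency blow-up hints
            flagged[name] = (baseline, loaded)        # at a DoS-prone process
    return flagged
```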

Recent Trends in Cryptanalysis Techniques for White-box Block Ciphers (화이트 박스 블록 암호에 대한 최신 암호분석 기술 동향 연구)

  • Chaerin Oh; Woosang Im; Hyunil Kim; Changho Seo
    • Smart Media Journal, v.12 no.9, pp.9-18, 2023
  • Black-box cryptography is a cryptographic model based on a hardware encryption device, operating under the assumption that the device and its user can be trusted. However, with the increasing use of cryptographic algorithms on untrusted open platforms, threats to black-box cryptographic systems have become even more significant. Consequently, white-box cryptography has been proposed to operate cryptographic algorithms securely on open platforms by hiding the encryption key within the encryption process itself, making it difficult for attackers to extract the key. However, unlike traditional cryptography, white-box encryption lacks established specifications, which makes it challenging to verify its structural security. To promote safer use of white-box cryptography, CHES periodically organizes the WhibOx Contest, which conducts security analyses of various white-box cryptographic techniques. Among these analyses, the Differential Computation Analysis (DCA) attack proposed by Bos et al. in 2016 is widely used and remains a powerful attack even against robust white-box block ciphers. This paper therefore analyzes research trends in white-box block ciphers and summarizes DCA attacks and the relevant countermeasures.
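
The core of DCA is DPA-style statistics applied to software execution traces rather than power traces: predict an intermediate bit under each key hypothesis and correlate it with the recorded computation. The sketch below illustrates this on an AES-like first-round S-box output; the stand-in S-box and the trace format are assumptions for the sake of a short, runnable example.

```python
# A minimal sketch of the DCA idea, not a full attack pipeline. Assumptions:
# `traces` is an (n_traces, n_samples) 0/1 array of recorded computation bits
# (e.g. from binary instrumentation), and `plaintexts[:, byte_idx]` holds the
# matching plaintext byte for each trace.
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)   # stand-in permutation; use the real AES S-box

def dca_recover_key_byte(traces, plaintexts, byte_idx, bit=0):
    """Rank key-byte hypotheses by correlating a predicted S-box output bit
    with every sample position of the computation traces."""
    t = traces - traces.mean(axis=0)
    best_key, best_corr = 0, 0.0
    for k in range(256):                                  # 256 hypotheses
        pred = (SBOX[plaintexts[:, byte_idx] ^ k] >> bit) & 1
        p = pred - pred.mean()
        denom = np.sqrt((p @ p) * (t * t).sum(axis=0)) + 1e-12
        corr = np.abs(p @ t / denom).max()                # best sample position
        if corr > best_corr:
            best_key, best_corr = k, corr
    return best_key, best_corr
```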

A Source Code Cross-site Scripting Vulnerability Detection Method

  • Mu Chen; Lu Chen; Zhipeng Shao; Zaojian Dai; Nige Li; Xingjie Huang; Qian Dang; Xinjian Zhao
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.6, pp.1689-1705, 2023
  • To deal with potential XSS vulnerabilities in the source code of the power communication network, an XSS vulnerability detection method combining static analysis with dynamic testing is proposed. The static analysis examines the structure and content of the source code: we construct a set of feature expressions to match malicious content and apply a "variable conversion" method to analyze the data flow of code that implements interactive functions, thereby exposing vulnerabilities in the source code's structure and content. The dynamic testing simulates network attacks to determine whether web pages are vulnerable: we construct many attack vectors and implement the tests with the Selenium tool. Because the two analysis methods are combined, XSS vulnerability discovery can be conducted from both the "white-box testing" and "black-box testing" perspectives. Tests show that this method can effectively detect XSS vulnerabilities in the source code of the power communication network.
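
The sketch below illustrates the two perspectives side by side: a static scan with feature expressions over source files, and a dynamic Selenium probe that injects an attack vector into a form field. The sink patterns, field name, and payload are illustrative assumptions, not the paper's actual feature expressions or vectors.

```python
# A minimal sketch of the combined static/dynamic approach, not the paper's tool.
import re
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoAlertPresentException

SINK_PATTERNS = [r"document\.write\s*\(", r"innerHTML\s*=", r"eval\s*\("]

def static_scan(source: str):
    # flag lines whose content matches a known XSS sink expression
    return [(i + 1, line) for i, line in enumerate(source.splitlines())
            if any(re.search(p, line) for p in SINK_PATTERNS)]

def dynamic_probe(url: str, field_name: str):
    payload = "<script>alert('xss')</script>"   # canonical test vector
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        box = driver.find_element(By.NAME, field_name)
        box.send_keys(payload)
        box.submit()
        try:
            driver.switch_to.alert.accept()     # alert fired: payload executed
            return True
        except NoAlertPresentException:
            return False
    finally:
        driver.quit()
```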

Attack Cause Analysis Algorithm using Cyber BlackBox (사이버 블랙박스에 기반한 공격 원인 분석 알고리즘)

  • Choi, Sunoh; Lee, Jooyoung; Choi, Yangseo; Kim, Jonghyun; Kim, Ikkyun
    • Proceedings of the Korea Information Processing Society Conference, 2014.11a, pp.392-394, 2014
  • Nowadays, many cyber attacks occur over the Internet. To respond to them, we develop a cyber black-box system that can store network packets, and we propose an efficient attack cause analysis algorithm that can analyze the large volume of network packets collected by the cyber black-box system. Through the attack cause analysis algorithm, when a cyber attack occurs, we can identify the entry point of the attack and the path through which the attack was carried out. Moreover, through a hidden victim discovery algorithm, we can find not only the known victims but also other, previously unknown victims.
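
A simple way to picture both algorithms is as graph traversals over the flows reconstructed from the stored packets: trace backwards from a known victim toward the entry point, and forwards from suspected attackers to uncover unreported victims. The sketch below assumes a `(src, dst, time)` flow record format; the paper's actual algorithms are more elaborate.

```python
# A minimal sketch of the two traversals, assuming `flows` is a list of
# (src, dst, time) records reconstructed from the cyber black-box's packets.
from collections import defaultdict, deque

def build_graph(flows):
    fwd, rev = defaultdict(set), defaultdict(set)
    for src, dst, _ in flows:
        fwd[src].add(dst)
        rev[dst].add(src)
    return fwd, rev

def trace_back(rev, victim):
    # walk flows backwards from the known victim toward candidate entry points
    seen, queue, path = {victim}, deque([victim]), []
    while queue:
        node = queue.popleft()
        path.append(node)
        for src in rev[node]:
            if src not in seen:
                seen.add(src)
                queue.append(src)
    return path  # nodes with no inbound edges are candidate entry points

def hidden_victims(fwd, suspects, known_victims):
    # anything reachable from a suspected attacker but not yet reported
    reached, queue = set(), deque(suspects)
    while queue:
        node = queue.popleft()
        for dst in fwd[node]:
            if dst not in reached:
                reached.add(dst)
                queue.append(dst)
    return reached - set(known_victims) - set(suspects)
```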

A Substitute Model Learning Method Using Data Augmentation with a Decay Factor and Adversarial Data Generation Using Substitute Model (감쇠 요소가 적용된 데이터 어그멘테이션을 이용한 대체 모델 학습과 적대적 데이터 생성 방법)

  • Min, Jungki; Moon, Jong-sub
    • Journal of the Korea Institute of Information Security & Cryptology, v.29 no.6, pp.1383-1392, 2019
  • An adversarial attack, which generates adversarial data to make a target model misclassify its input, can confuse real-life applications of classification models and cause severe damage to the classification system. A black-box adversarial attack learns a substitute model whose decision boundary is similar to the target model's, and then generates adversarial data with the substitute model. Jacobian-based data augmentation is used to synthesize the training data for learning substitutes, but it has the drawback that the synthesized data become more and more distorted as the training loop proceeds. We propose data augmentation with a "decay factor" to alleviate this problem. The results show that the attack success rate of our method is about 8.5% higher than that of the existing method.
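
The sketch below shows one Papernot-style Jacobian-based augmentation step with a multiplicative decay factor that shrinks the perturbation over rounds, limiting cumulative distortion. The specific schedule `lam * gamma**step` is an assumption; the paper's decay factor may be defined differently.

```python
# A minimal sketch of Jacobian-based data augmentation with a decay factor,
# not the paper's exact formulation.
import torch

def augment(substitute, xs, oracle_labels, lam=0.1, gamma=0.6, step=0):
    xs = xs.clone().requires_grad_(True)
    logits = substitute(xs)
    # Jacobian direction w.r.t. the oracle-assigned class of each sample
    sel = logits.gather(1, oracle_labels.view(-1, 1)).sum()
    grad = torch.autograd.grad(sel, xs)[0]
    # the decay factor (gamma ** step) shrinks the perturbation as the
    # augmentation rounds proceed, limiting cumulative distortion
    return (xs + lam * (gamma ** step) * grad.sign()).detach()

# loop sketch: at each round t, label new points with the black-box oracle,
# retrain the substitute, then xs = torch.cat([xs, augment(sub, xs, ys, step=t)])
```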

Empirical Study on Correlation between Performance and PSI According to Adversarial Attacks for Convolutional Neural Networks (컨벌루션 신경망 모델의 적대적 공격에 따른 성능과 개체군 희소 지표의 상관성에 관한 경험적 연구)

  • Youngseok Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.17 no.2, pp.113-120, 2024
  • The population sparseness index (PSI) is used to describe the functioning of the internal layers of artificial neural networks from the perspective of individual neurons, shedding light on the black-box nature of the network's internal operations. Prior research indicates a positive correlation between the PSI and performance in each layer of convolutional neural network models for image classification. In this study, we observed the internal operations of a convolutional neural network when adversarial examples were applied. The experiments revealed a similar pattern of positive correlation for adversarial examples, which were crafted so that the model retained only 5% accuracy relative to benign data. Thus, although individual adversarial attacks may differ, the PSI observed for adversarial examples showed consistent positive correlations with that of benign data across layers.
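
For readers unfamiliar with the index, the sketch below computes a population sparseness measure over one layer's activations and hints at the correlation analysis. The Treves-Rolls form used here is one common definition of population sparseness; the paper's exact PSI formula may differ.

```python
# A minimal sketch of a population sparseness measure, assuming the
# Treves-Rolls definition; not necessarily the paper's exact PSI.
import numpy as np

def population_sparseness(acts, eps=1e-12):
    """acts: (n_samples, n_units) non-negative activations of one layer."""
    r = np.maximum(acts, 0.0)                      # e.g. post-ReLU responses
    n = r.shape[1]
    a = (r.mean(axis=1) ** 2) / ((r ** 2).mean(axis=1) + eps)
    return (1.0 - a) / (1.0 - 1.0 / n)             # 0 = dense, 1 = sparse

# correlate per-layer sparseness with per-layer performance, e.g.:
# rho = np.corrcoef([population_sparseness(a).mean() for a in layer_acts],
#                   layer_accuracies)[0, 1]
```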