• Title/Summary/Keyword: Noise Attack

Search results: 136

PingPong 256 shuffling method with Image Encryption and Resistance to Various Noise (이미지 암호화 및 다양한 잡음에 내성을 갖춘 PingPong 256 Shuffling 방법)

  • Kim, Ki Hwan; Lee, Hoon Jae
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.11 / pp.1507-1518 / 2020
  • High-quality images carry a great deal of information, so sensitive image data is stored encrypted by private companies, the military, and similar organizations. An encrypted image can only be decrypted with the secret key, but the original data cannot be recovered after a shear attack or a noise-pollution attack, techniques that overwrite some pixel data with arbitrary values. The more important the data, the more necessary a recovery countermeasure against such attacks becomes. In this paper, we propose the random number generator PingPong256 and a shuffling method that rearranges pixels to resist shear and noise-pollution attacks, so that image and video encryption can be performed more quickly. The proposed PingPong256 was then examined with the NIST SP800-22 test suite, tested for immunity to various kinds of noise, and the shuffled images were verified to withstand shear and noise-pollution attacks.
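The core of the countermeasure, scattering pixels with a keyed random stream so that localized damage becomes recoverable, is easy to illustrate. Below is a minimal sketch, not the paper's implementation: a generic key-seeded PRNG (`np.random.default_rng`) stands in for PingPong256.

```python
# Minimal sketch of keystream-driven pixel shuffling (not the paper's exact
# PingPong256 design): a key-seeded permutation of pixel positions spreads
# localized shear/noise damage across the whole image, where it can later be
# repaired by interpolation after unshuffling.
import numpy as np

def shuffle_pixels(img: np.ndarray, key: int):
    """Permute all pixels with a key-seeded PRNG; returns (shuffled, perm)."""
    rng = np.random.default_rng(key)        # stand-in for the PingPong256 stream
    flat = img.reshape(-1, img.shape[-1])   # (H*W, channels)
    perm = rng.permutation(len(flat))
    return flat[perm].reshape(img.shape), perm

def unshuffle_pixels(shuffled: np.ndarray, perm: np.ndarray) -> np.ndarray:
    flat = shuffled.reshape(-1, shuffled.shape[-1])
    inv = np.argsort(perm)                  # inverse permutation
    return flat[inv].reshape(shuffled.shape)
```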

Self-Noise Prediction from Helicopter Rotor Blade (헬리콥터 로터 블레이드의 자려소음 예측)

  • Kim, Hyo-Young; Ryu, Ki-Wahn
    • Journal of Aerospace System Engineering / v.1 no.1 / pp.73-78 / 2007
  • Self-noise from the rotor blade of the UH-1H helicopter is obtained numerically using Brooks' empirical noise model. All five noise sources are compared with one another in the frequency domain. The calculated results show that bluntness noise is the dominant source at small angles of attack, whereas separation noise becomes the main term as the angle of attack gradually increases. Comparing results for two tip Mach numbers over the range of angles of attack, the OASPLs at M = 0.8 are about 15 dB larger than those at M = 0.4.
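For readers comparing the five source spectra, the standard way individual per-band levels combine into an OASPL is the incoherent (mean-square) sum; the snippet below is a generic illustration with made-up numbers, not the paper's code.

```python
# Standard incoherent level summation:
# OASPL = 10*log10( sum over sources i and bands k of 10^(SPL_ik / 10) ).
import numpy as np

def overall_spl(spl_db: np.ndarray) -> float:
    """OASPL from per-source, per-band SPLs; spl_db has shape (sources, bands)."""
    return 10.0 * np.log10(np.sum(10.0 ** (spl_db / 10.0)))

# two sources over three bands (illustrative numbers only)
print(overall_spl(np.array([[70.0, 68.0, 65.0],
                            [60.0, 72.0, 63.0]])))  # ~75.8 dB
```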

Study on the White Noise effect Against Adversarial Attack for Deep Learning Model for Image Recognition (영상 인식을 위한 딥러닝 모델의 적대적 공격에 대한 백색 잡음 효과에 관한 연구)

  • Lee, Youngseok; Kim, Jongweon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.1 / pp.27-35 / 2022
  • In this paper, we propose a method of adding white noise to prevent misclassification of deep learning systems caused by adversarial attacks. The proposed method adds white noise to the input image, whether it is benign or an adversarial example. The experimental results show that the method is robust against three adversarial attacks: FGSM, BIM, and CW. The recognition accuracies of ResNet models with 18, 34, 50, and 101 layers are improved when white noise is added to the test set, while classification of the benign test set is unaffected. The proposed method is applicable as a defense against adversarial attacks and can replace time-consuming and expensive defenses such as adversarial training and model replacement.
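The defense itself is a one-liner in spirit; the sketch below shows the idea under the assumption of inputs scaled to [0, 1], with the noise level `sigma` as a hypothetical tuning parameter, not a value from the paper.

```python
# Minimal sketch of the noise-injection defense: add zero-mean white noise to
# each input before classification, which tends to disrupt the finely tuned
# adversarial perturbation more than the clean image content.
import torch

def classify_with_noise(model: torch.nn.Module, x: torch.Tensor,
                        sigma: float = 0.05) -> torch.Tensor:
    """Add white Gaussian noise to inputs in [0, 1], then classify."""
    x_noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
    with torch.no_grad():
        return model(x_noisy).argmax(dim=1)
```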

Rapid Misclassification Sample Generation Attack on Deep Neural Network (딥뉴럴네트워크 상에 신속한 오인식 샘플 생성 공격)

  • Kwon, Hyun; Park, Sangjun; Kim, Yongchul
    • Convergence Security Journal / v.20 no.2 / pp.111-121 / 2020
  • Deep neural networks (DNNs) provide good performance on machine learning tasks such as image recognition and object recognition. However, DNNs are vulnerable to adversarial examples: attack samples that cause the network to misclassify by adding minimal noise to the original sample. The drawback of such attacks is that generating an adversarial example takes a long time, and in some cases an attack is needed that causes misclassification quickly. In this paper, we propose a fast misclassification sample that can rapidly attack neural networks. The proposed method does not consider the distortion of the original sample when adding noise. We used MNIST and CIFAR-10 as experimental data and TensorFlow as the machine learning library. Experimental results show that the fast misclassification samples generated by the proposed method require 50% and 80% fewer iterations on MNIST and CIFAR-10, respectively, than the conventional Carlini method, while achieving a 100% attack success rate.
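A plausible reading of the method, an iterative gradient-sign attack with no distortion penalty and an early stop at the first misclassification, can be sketched as follows; the step size and iteration count are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch, not the authors' code: iterate gradient-sign steps on the
# true-label loss with no distortion constraint, so far fewer iterations are
# needed than for distortion-minimizing attacks such as Carlini-Wagner.
import torch
import torch.nn.functional as F

def fast_misclassify(model, x, label, step=0.1, max_iters=20):
    """Single-example sketch: x is a (1,C,H,W) batch, label a (1,) tensor."""
    x_adv = x.clone().detach()
    for _ in range(max_iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad, = torch.autograd.grad(loss, x_adv)
        # large unconstrained step: distortion is deliberately ignored,
        # which is what makes the attack fast
        x_adv = (x_adv + step * grad.sign()).clamp(0, 1).detach()
        if model(x_adv).argmax(dim=1).item() != label.item():
            break                           # stop at first misclassification
    return x_adv
```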

Thin Film Effects on Side Channel Signals (부 채널 신호에 대한 박막의 영향)

  • Sun, Y.B.
    • Journal of the Semiconductor & Display Technology / v.12 no.2 / pp.51-56 / 2013
  • Even if transmissions over the normal channel between ubiquitous devices and terminal readers are encrypted, any extra source of information leaking from the encryption module can be exploited to figure out the key parameters; this is the so-called side-channel attack. Since side-channel attacks are based on statistical methods, making the side-channel signal weak or complex is the proper way to prevent them. Among many countermeasures, shielding the electromagnetic (EM) signal and adding noise to it were examined by applying different thicknesses of thin films of a ferroelectric (BTO) and of conductors (copper and gold). A chip antenna was used as the test vehicle to observe changes in the radiation characteristics: return loss and gain. As a result, the ferroelectric BTO showed no recognizable effect on either shielding or noise addition. The Cu thin film showed a shielding effect that increased with thickness. Nanometer-scale Au showed potential for adding noise by widening the bandwidth and red-shifting the resonant frequencies.
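The premise that added noise blunts a statistical attack can be illustrated numerically: as the noise level rises, the correlation an attacker can extract between a leakage model and the measured signal collapses. The simulation below is illustrative only, not from the paper.

```python
# Illustration of why added noise defeats statistical side-channel attacks:
# the attacker's correlation between a leakage model (e.g. Hamming weights)
# and the measured traces drops as the noise level rises.
import numpy as np

rng = np.random.default_rng(0)
leakage = rng.integers(0, 9, size=2000).astype(float)  # Hamming weights 0..8
for sigma in (0.5, 2.0, 8.0):
    traces = leakage + sigma * rng.normal(size=leakage.shape)
    r = np.corrcoef(leakage, traces)[0, 1]
    print(f"noise sigma={sigma}: attacker correlation r={r:.2f}")
```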

Study on Low noise, High Performance Automobile Cooling Fan Development Using Freewake and CFD Analysis (자유후류법과 CFD 해석을 통한 저소음 고효율 자동차용 냉각팬 개발에 관한 연구)

  • ;;Renjing Cao
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2004.05a / pp.847-847 / 2004
  • Automobile cooling fans operate together with a radiator module, so the radiator's resistance should be considered when designing a low-noise, high-performance cooling fan. The system (radiator) resistance reduces the axial velocity and increases the effective angle of attack; this mechanism causes blade stall, lower performance, and higher noise. In this paper, free-wake and 3D CFD calculations are used to analyze fan performance. To design a high-performance fan that accounts for the system resistance, an optimal-twist concept is applied through momentum and blade-element theory. Fan noise is predicted using empirical formulas and acoustic-analogy methods.
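The stall mechanism the abstract describes follows from blade-element geometry: with blade speed fixed, a lower axial inflow velocity steepens the effective angle of attack. A back-of-envelope sketch with illustrative numbers, not the paper's data:

```python
# Blade-element view of the mechanism: reducing axial inflow velocity raises
# the effective angle of attack seen by a fan blade section,
# alpha_eff = pitch - atan(Va / (omega * r)).
import math

def effective_aoa_deg(pitch_deg: float, v_axial: float,
                      omega: float, r: float) -> float:
    """Effective angle of attack of a blade section, in degrees."""
    return pitch_deg - math.degrees(math.atan2(v_axial, omega * r))

# same blade section, axial velocity halved by radiator resistance
print(effective_aoa_deg(30.0, v_axial=10.0, omega=200.0, r=0.15))  # ~11.6 deg
print(effective_aoa_deg(30.0, v_axial=5.0,  omega=200.0, r=0.15))  # ~20.5 deg
```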

Detecting a Relay Attack with a Background Noise (소리를 이용한 릴레이 공격의 탐지)

  • Kim, Jonguk; Kang, Sukin; Hong, Manpyo
    • Journal of the Korea Institute of Information Security & Cryptology / v.23 no.4 / pp.617-627 / 2013
  • Wireless communication technologies such as NFC and RFID make data transfer between devices much easier. Instead of the irksome typing of passwords, users can simply authenticate themselves with their smart cards or smartphones. Recently, however, relay attacks have come to threaten the security of such token-based, something-you-have authentication. A relay attack efficiently defeats the authentication system even when secure channels are used, and it is easy to deploy. Distance bounding and localization of the two devices have been proposed to detect relay attacks. We describe the disadvantages and weaknesses of the existing methods and propose a new way to detect relay attacks by recording the background noise.
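One plausible form of the proposed check (the abstract does not give the exact algorithm) is to compare ambient-noise recordings made simultaneously by the two devices; a low peak cross-correlation suggests different acoustic environments and hence a relayed session. Function names and the threshold below are illustrative.

```python
# Hedged sketch of sound-based relay detection: devices in the same place
# should record similar background noise at authentication time.
import numpy as np

def max_normalized_xcorr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak normalized cross-correlation between two noise recordings."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full") / len(a)
    return float(np.max(np.abs(corr)))

def same_environment(rec_card, rec_reader, threshold=0.3) -> bool:
    # assumption: recordings are roughly time-aligned; threshold is illustrative
    return max_normalized_xcorr(rec_card, rec_reader) >= threshold
```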

Research of a Method of Generating an Adversarial Sample Using Grad-CAM (Grad-CAM을 이용한 적대적 예제 생성 기법 연구)

  • Kang, Sehyeok
    • Journal of Korea Multimedia Society / v.25 no.6 / pp.878-885 / 2022
  • Research in the field of computer vision based on deep learning is being actively conducted. However, deep-learning models are vulnerable to adversarial attacks, which increase a model's misclassification rate by applying adversarial perturbations. In particular, FGSM is recognized as one of the most effective attack methods because it is simple and fast and has a considerable attack success rate. Meanwhile, as one effort to visualize deep learning models, Grad-CAM enables visual explanation of convolutional neural networks. In this paper, I propose a method that generates adversarial examples with a high attack success rate by applying Grad-CAM to FGSM. The method uses Grad-CAM to choose pixels that are closely related to the label and adds perturbations intensively to those pixels. The proposed method achieves a higher success rate than FGSM at the same perturbation level for both targeted and untargeted examples. In addition, unlike FGSM, the noise distribution is not uniform, and when the success rate is raised by applying noise repeatedly, the attack succeeds with fewer iterations.
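A hedged sketch of the approach: take a Grad-CAM saliency map for the label (from any Grad-CAM implementation), keep the top fraction of pixels, and confine the FGSM step to that mask. Parameter values are illustrative, not the paper's.

```python
# Sketch of Grad-CAM-masked FGSM: perturb only the pixels most relevant to
# the label according to a precomputed Grad-CAM saliency map.
import torch
import torch.nn.functional as F

def gradcam_fgsm(model, x, label, cam_map, eps=0.03, keep=0.2):
    """x: (1,C,H,W); cam_map: (H,W) Grad-CAM saliency in [0,1] for the label."""
    thresh = torch.quantile(cam_map.flatten(), 1.0 - keep)
    mask = (cam_map >= thresh).float()         # top `keep` fraction of pixels
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    grad, = torch.autograd.grad(loss, x)
    # standard FGSM step, zeroed outside the salient region
    return (x + eps * grad.sign() * mask).clamp(0, 1).detach()
```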

Performance Improvement of Power Analysis Attacks based on Wavelet De-noising (웨이블릿 잡음 제거 방법을 이용한 전력 분석 공격 성능 개선)

  • Kim, Wan-Jin; Song, Kyoung-Won; Lee, Yu-Ri; Kim, Ho-Won; Kim, Hyoung-Nam
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.9B / pp.1330-1341 / 2010
  • Power analysis (PA) is known as a powerful physical attack method in the field of information security. This method uses the statistical characteristics of leaked power-consumption signals measured from security devices to reveal the secret keys. However, the measured leakage power signal is easily distorted by noise because of its low magnitude, and so the PA attack performs differently depending on the noise level of the measured signal. To overcome this vulnerability of the PA attack, we propose a noise-reduction method based on wavelet de-noising. Experimental results show that the proposed de-noising method improves attack efficiency in terms of the number of signals required for a successful attack as well as the reliability of the guessed key.
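Wavelet de-noising of a trace follows the usual decompose/threshold/reconstruct pattern; the sketch below uses PyWavelets with a common universal-threshold rule, since the abstract does not specify the paper's exact wavelet or thresholding choice.

```python
# Generic wavelet de-noising of a power trace (illustrative choices of
# wavelet and threshold, not the paper's): decompose, soft-threshold the
# detail coefficients, reconstruct.
import numpy as np
import pywt

def wavelet_denoise(trace: np.ndarray, wavelet="db4", level=4) -> np.ndarray:
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    # universal threshold, with noise scale estimated from the finest band
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(trace)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(trace)]
```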

Attack Detection on Images Based on DCT-Based Features

  • Nirin Thanirat; Sudsanguan Ngamsuriyaroj
    • Asia Pacific Journal of Information Systems / v.31 no.3 / pp.335-357 / 2021
  • As images can be reproduced with ease, copy detection has become increasingly important. In the duplication process, image modifications are likely to occur, and some alterations are deliberate and can be viewed as attacks. A wide range of copy detection techniques has been proposed. In our study, content-based copy detection is employed, applying DCT-based features of images, namely pixel values, edges, texture information, and frequency-domain component distribution. Experiments are carried out to evaluate the robustness and sensitivity of DCT-based features to attacks. As different types of DCT-based features hold different pieces of information, the relation between features and attacks shows up in their robustness and sensitivity. Rather than searching for proper features, we propose using robustness and sensitivity to understand how the attacked features change when an image attack occurs. The experiments show that, out of ten attacks, the neural networks are able to detect seven, namely Gaussian noise, salt-and-pepper noise, gamma correction (high), blurring, resizing (big), compression, and rotation, mostly through their sensitive features.
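The frequency-domain part of such features can be sketched with a block-wise 2-D DCT, keeping each block's low-frequency corner; the paper's full feature set (pixel values, edges, texture) is broader than this illustration.

```python
# Sketch of block-wise DCT feature extraction for copy/attack detection:
# 2-D DCT per 8x8 block of a grayscale image, keeping the low-frequency
# coefficients of each block as the feature vector.
import numpy as np
from scipy.fft import dctn

def dct_features(gray: np.ndarray, block: int = 8, keep: int = 4) -> np.ndarray:
    """Returns one low-frequency DCT feature vector per 8x8 block."""
    h = (gray.shape[0] // block) * block
    w = (gray.shape[1] // block) * block
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(gray[i:i + block, j:j + block].astype(float), norm="ortho")
            feats.append(c[:keep, :keep].ravel())  # low-frequency corner
    return np.asarray(feats)
```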