Addressing Emerging Threats: An Analysis of AI Adversarial Attacks and Security Implications

  • HoonJae Lee (Dept. Information Security, Dongseo University)
  • ByungGook Lee (Dept. Computer Engineering, Dongseo University)
  • Received : 2024.04.16
  • Accepted : 2024.04.28
  • Published : 2024.06.30

Abstract

AI technology is a central focus of the 4th Industrial Revolution. Unlike existing non-AI technologies, however, AI systems are exposed to new classes of adversarial attacks targeting training data management, input data handling, and other areas. These attacks, which exploit weaknesses in AI-based cryptographic technology, are emerging as a social issue and are expected to have a significant negative impact on existing IT and convergence industries. This paper examines recently reported cases of AI adversarial attacks, categorizes them into five groups, and provides a foundational document for developing security guidelines to verify the safety of AI systems against them. The findings confirm that such adversarial attacks can be applied to the various types of cryptographic modules that incorporate AI technology (hardware, software, firmware, hybrid software, and hybrid firmware cryptographic modules). The aim is to offer a basis for developing standardized protocols, which are expected to play a crucial role in revitalizing the information security industry.
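
One of the best-known input-data attacks of the kind surveyed here is the fast gradient sign method (FGSM): a small, carefully chosen perturbation is added to a legitimate input so that a trained model misclassifies it while the change remains hard to notice. The sketch below is a minimal illustration of that idea in PyTorch; the model, the epsilon budget, and the input format are assumptions made for illustration and are not taken from the paper itself.

# Minimal FGSM sketch (illustrative only; model, epsilon, and data are assumed).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Perturb input x so that the model becomes more likely to misclassify it.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)      # loss w.r.t. the true labels y
    loss.backward()                          # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()      # step in the gradient-sign direction
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixel values in the valid range

Here epsilon controls how strong, and how visible, the perturbation is; the smaller it is, the harder the attack is to detect by inspecting the input.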

Acknowledgement

This work was supported by Dongseo University, "Dongseo Cluster Project" Research Fund of 2023 (DSU-20230004).
