
A Comprehensive Review of AI Security: Threats, Challenges, and Mitigation Strategies

  • Serdar Yazmyradov (Department of Computer Engineering, Dongseo University)
  • Hoon Jae Lee (Department of Information Security, Dongseo University)
  • Received : 2024.10.21
  • Accepted : 2024.10.31
  • Published : 2024.11.30

Abstract

As Artificial Intelligence (AI) continues to permeate various sectors such as healthcare, finance, and transportation, the importance of securing AI systems against emerging threats has become increasingly critical. The proliferation of AI across these industries not only introduces opportunities for innovation but also exposes vulnerabilities that could be exploited by malicious actors. This comprehensive review delves into the current landscape of AI security, providing an in-depth analysis of the threats, challenges, and mitigation strategies associated with AI technologies. The paper discusses key threats such as adversarial attacks, data poisoning, and model inversion, all of which can severely compromise the integrity, confidentiality, and availability of AI systems. Additionally, the paper explores the challenges posed by the inherent complexity and opacity of AI models, particularly deep learning networks. The review also evaluates various mitigation strategies, including adversarial training, differential privacy, and federated learning, that have been developed to safeguard AI systems. By synthesizing recent advancements and identifying gaps in existing research, this paper aims to guide future efforts in enhancing the security of AI applications, ultimately ensuring their safe and ethical deployment in both critical and everyday environments.
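As a concrete illustration of the adversarial attacks and adversarial-training defenses surveyed above, the sketch below generates an adversarial example with the Fast Gradient Sign Method of Goodfellow et al. [1]. This is a minimal, hypothetical sketch, not material from the paper under review: the toy linear classifier, random input, label, and epsilon budget are placeholder assumptions chosen only to keep the example self-contained.

```python
# Minimal FGSM sketch (Goodfellow et al. [1]); model, input, and epsilon are
# illustrative placeholders, not taken from the reviewed paper.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x under an L-infinity budget epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the loss, then clamp
    # back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    x = torch.rand(1, 1, 28, 28)  # stand-in for a normalized image
    y = torch.tensor([3])         # stand-in for its true label
    x_adv = fgsm_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```

Adversarial training in the sense of Madry et al. [2] would feed such perturbed inputs back into the training loop so the model learns to classify them correctly.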

Keywords

Acknowledgement

This work was supported by the 'Construction Project for Regional Base Information Security Cluster' grant, funded by the Ministry of Science and ICT and Busan Metropolitan City in 2024.

References

  1. Goodfellow, I., Shlens, J., & Szegedy, C. (2014). Explaining and Harnessing Adversarial Examples. https://doi.org/10.48550/arXiv.1412.6572
  2. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards Deep Learning Models Resistant to Adversarial Attacks. https://doi.org/10.48550/arXiv.1706.06083
  3. Carlini, N., & Wagner, D. (2017). Towards Evaluating the Robustness of Neural Networks. 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, pp. 39-57. https://doi.org/10.1109/SP.2017.49
  4. Eykholt, K., Evtimov, I., Fernandes, E., et al. (2018). Robust Physical-World Attacks on Deep Learning Models. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  5. Rudin, C. (2019). Stop explaining black-box machine learning models for high-stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9122117/
  6. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
  7. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), 1-42. https://doi.org/10.1145/3236009
  8. Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. https://doi.org/10.48550/arXiv.1706.06083
  9. Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency.
  10. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
  11. Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Anderson, H. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228. https://doi.org/10.48550/arXiv.1802.07228
  12. Heidarivincheh, F., Mirmehdi, M., & Damen, D. (2019). Weakly-Supervised Completion Moment Detection using Temporal Attention. https://doi.org/10.48550/arXiv.1910.09920
  13. Finlayson, S. G., Bowers, J. D., Ito, J., Zittrain, J. L., Beam, A. L., & Kohane, I. S. (2019). "Adversarial attacks on medical machine learning." Science, 363(6433), 1287-1289. https://doi.org/10.1126/science.aaw4399
  14. Sharif, M., Bhagavatula, S., Bauer, L., & Reiter, M. K. (2016). "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition." Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. https://doi.org/10.1145/2976749.2978392
  15. Moosavi-Dezfooli, S.M., et al. (2016). "DeepFool: A simple and accurate method to fool deep neural networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Described an iterative method for generating minimal adversarial perturbations.
  16. Moosavi-Dezfooli, S.M., et al. (2017). "Universal adversarial perturbations." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Proposed universal adversarial perturbations effective across different inputs.
  17. Papernot, N., et al. (2016). "The limitations of deep learning in adversarial settings." IEEE European Symposium on Security and Privacy (EuroS&P). Introduced the JSMA method and discussed its effectiveness and limitations.
  18. Solon, O. (2020). How AI's Interdisciplinary Research Approach Can Improve Security. The Guardian.
  19. European Union. (2018). General Data Protection Regulation (GDPR). Official Journal of the European Union.
  20. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?" Explaining the Predictions of Any Classifier. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). https://doi.org/10.1145/2939672.2939778
  21. Zhang, X., Zheng, Y., & Yang, H. (2020). Continuous Monitoring for Anomaly Detection in AI Systems. IEEE Transactions on Knowledge and Data Engineering.
  22. S. Vaudenay, "On the Security of Encryption Schemes," in Advances in Cryptology - CRYPTO 2001, Springer, 2001.
  23. Menezes, A.J., van Oorschot, P.C., & Vanstone, S.A. (1997). Handbook of Applied Cryptography (1st ed.). CRC Press. https://doi.org/10.1201/9780429466335
  24. W. Stallings, Computer Security: Principles and Practice, Pearson, 2020.
  25. D. Kahn, The Codebreakers: The Comprehensive History of Secret Communication from Ancient Times to the Internet, Scribner, 1996.
  26. M. G. Schwartz, Principles of Computer System Design: An Introduction, Morgan Kaufmann, 2004.
  27. G. Coulouris, J. Dollimore, and T. Kindberg, Distributed Systems: Concepts and Design, Addison-Wesley, 2011.
  28. R. Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems, Wiley, 2020.
  29. J. C. Brustoloni, "Role-Based Access Control in Large-Scale Distributed Systems," in Proceedings of the IEEE Symposium on Security and Privacy, 2005.
  30. P. Mell and T. Grance, The NIST Definition of Cloud Computing, National Institute of Standards and Technology, 2011.
  31. S. M. Bellovin and W. R. Cheswick, Firewalls and Internet Security: Repelling the Wily Hacker, Addison-Wesley, 2003.
  32. S. H. Kim and D. S. Kim, "Challenges and Strategies in Data Security and Privacy," International Journal of Information Security, vol. 12, no. 2, pp. 103-115, 2013.
  33. Goodfellow, I., Bengio, Y., & Courville, A. (2016). "Deep Learning." MIT Press.
  34. Szegedy, C., et al. (2014). "Intriguing Properties of Neural Networks." arXiv preprint.
  35. Lipton, Z. C. (2016). The Mythos of Model Interpretability. https://doi.org/10.48550/arXiv.1606.03490
  36. McMahan, B., et al. (2017). "Communication-Efficient Learning of Deep Networks from Decentralized Data." AISTATS.
  37. Wang, L., et al. (2020). "Supply Chain Risks in AI: Identification and Mitigation." ACM Computing Surveys.
  38. Fredrikson, M., et al. (2015). "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures." ACM CCS.
  39. Shokri, R., et al. (2017). "Membership Inference Attacks Against Machine Learning Models." IEEE S&P.