Adversarial Machine Learning: A Survey on the Influence Axis

  • Alzahrani, Shahad (Department of Computer Science, College of Computers and Information Technology, Taif University) ;
  • Almalki, Taghreed (Department of Computer Science, College of Computers and Information Technology, Taif University) ;
  • Alsuwat, Hatim (Department of Computer Science, College of Computer and Information Systems, Umm Al-Qura University) ;
  • Alsuwat, Emad (Department of Computer Science, College of Computers and Information Technology, Taif University)
  • Received : 2022.05.05
  • Published : 2022.05.30

Abstract

Artificial intelligence systems and applications have become part of everyday life, and machine learning technologies are now characterized by exceptional capabilities and distinguished performance in many areas. However, these applications and systems are vulnerable to adversaries, who can cause wrong classifications by introducing distorted samples. Specifically, it has been observed that adversarial examples crafted during the training and test phases can severely ruin the performance of machine learning models. This paper provides a comprehensive review of recent research on adversarial machine learning, examining only techniques released between 2018 and 2021. Diverse system models are investigated and discussed with respect to the type of attack, together with possible security suggestions against these attacks, to highlight the risks of adversarial machine learning.
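The test-phase attacks the abstract refers to can be illustrated with a minimal sketch of a fast-gradient-sign-style evasion attack on a linear classifier. The weights, input, and epsilon below are hypothetical values chosen for illustration, not taken from any of the surveyed papers:

```python
import numpy as np

# Hypothetical fixed linear model and a clean input it classifies as +1.
w = np.array([1.0, -2.0, 0.5])   # model weights
x = np.array([2.0, 0.5, 1.0])    # clean sample: w @ x = 1.5 > 0

def predict(v):
    """Sign classifier: +1 if the score w @ v is positive, else -1."""
    return 1 if w @ v > 0 else -1

# For a linear score, the gradient w.r.t. the input is simply w, so a
# fast-gradient-sign perturbation that lowers the score is -eps * sign(w).
eps = 0.6
x_adv = x - eps * np.sign(w)     # visually similar input, shifted score

print(predict(x))      # clean sample classified +1
print(predict(x_adv))  # adversarial sample now classified -1
```

Here w @ x_adv = 1.5 - eps * sum(|w|) = -0.6, so a small, bounded perturbation is enough to flip the decision, which is exactly the kind of test-time distortion the surveyed evasion attacks exploit against far larger models.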
