Adversarial Machine Learning: A Survey on the Influence Axis

Alzahrani, Shahad (Department of Computer Science, College of Computers and Information Technology, Taif University); Almalki, Taghreed (Department of Computer Science, College of Computers and Information Technology, Taif University); Alsuwat, Hatim (Department of Computer Science, College of Computer and Information Systems, Umm Al-Qura University); Alsuwat, Emad (Department of Computer Science, College of Computers and Information Technology, Taif University)
1. Sen, P. C., Hajra, M., & Ghosh, M. (2020). Supervised classification algorithms in machine learning: A survey and review. In Emerging Technology in Modeling and Graphics (pp. 99-111). Springer, Singapore.
2. Gu, R., Niu, C., Wu, F., Chen, G., Hu, C., Lyu, C., & Wu, Z. (2021). From server-based to client-based machine learning: A comprehensive survey. ACM Computing Surveys (CSUR), 54(1), 1-36.
3. Gartner Inc., "Anticipate Data Manipulation Security Risks to AI Pipelines." [Online]. Available: https://www.gartner.com/doc/3899783
4. Pitropakis, N., Panaousis, E., Giannetsos, T., Anastasiadis, E., & Loukas, G. (2019). A taxonomy and survey of attacks against machine learning. Computer Science Review, 34, 100199.
5. Xu, W., Evans, D., & Qi, Y. (2018). Feature squeezing: Detecting adversarial examples in deep neural networks. In Network and Distributed Systems Security Symposium (NDSS) 2018. https://doi.org/10.14722/ndss.2018.23198
6. Wang, X., Li, J., Kuang, X., Tan, Y. A., & Li, J. (2019). The security of machine learning in an adversarial setting: A survey. Journal of Parallel and Distributed Computing, 130, 12-23.
7. Qin, L., Guo, Y., Jie, W., & Wang, G. (2017). Effective query grouping strategy in clouds. Journal of Computer Science and Technology, 32(6), 1231-1249.
8. Wang, X., Li, J., Kuang, X., Tan, Y. A., & Li, J. (2019). The security of machine learning in an adversarial setting: A survey. Journal of Parallel and Distributed Computing, 130, 12-23.
9. Liu, Q., Li, P., Zhao, W., Cai, W., Yu, S., & Leung, V. C. (2018). A survey on security threats and defensive techniques of machine learning: A data-driven view. IEEE Access, 6, 12103-12117.
10. Xiao, H. (2017). Adversarial and Secure Machine Learning (Ph.D. dissertation). Universität München. Available: https://mediatum.ub.tum.de/1335448 (accessed February 4, 2019).
11. Dalvi, N., Domingos, P., Sanghai, S., Verma, D., et al. (2004). Adversarial classification. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 99-108). ACM.
12. Truong, L., Jones, C., Hutchinson, B., August, A., Praggastis, B., Jasper, R., ... & Tuor, A. (2020). Systematic evaluation of backdoor data poisoning attacks on image classifiers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 788-789).
13. Sethi, T. S., & Kantardzic, M. (2018). Data-driven exploratory attacks on black-box classifiers in adversarial domains. Neurocomputing, 289, 129-143.
14. Khasawneh, K. N., Abu-Ghazaleh, N., Ponomarev, D., & Yu, L. (2017, October). RHMD: Evasion-resilient hardware malware detectors. In Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture (pp. 315-327).
15. Kim, B., Sagduyu, Y. E., Erpek, T., Davaslioglu, K., & Ulukus, S. (2020). Adversarial attacks with multiple antennas against deep learning-based modulation classifiers. arXiv preprint arXiv:2007.16204.
16. Sayghe, A., Zhao, J., & Konstantinou, C. (2020, August). Evasion attacks with deep adversarial learning against power system state estimation. In 2020 IEEE Power & Energy Society General Meeting (PESGM) (pp. 1-5). IEEE.
17. Ayub, M. A., Johnson, W. A., Talbert, D. A., & Siraj, A. (2020, March). Model evasion attack on intrusion detection systems using adversarial machine learning. In 2020 54th Annual Conference on Information Sciences and Systems (CISS) (pp. 1-6). IEEE.
18. Kwon, H., Kim, Y., Park, K. W., Yoon, H., & Choi, D. (2018). Multi-targeted adversarial example in evasion attack on deep neural network. IEEE Access, 6, 46084-46096.
19. Herath, J. D., Yang, P., & Yan, G. (2021). Real-time evasion attacks against deep learning-based anomaly detection from distributed system logs.
20. Wang, Y., Han, Y., Bao, H., Shen, Y., Ma, F., Li, J., & Zhang, X. (2020, August). Attackability characterization of adversarial evasion attack on discrete data. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 1415-1425).
21. Zügner, D., Borchert, O., Akbarnejad, A., & Günnemann, S. (2020). Adversarial attacks on graph neural networks: Perturbations and their patterns. ACM Transactions on Knowledge Discovery from Data (TKDD), 14(5), 1-31.
22. Dineen, J., Haque, A. S. M., & Bielskas, M. (2021). Reinforcement learning for data poisoning on graph neural networks. arXiv preprint arXiv:2102.06800.
23. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. In Ethics of Data and Analytics (pp. 254-264). Auerbach Publications.
24. Arel, I., Liu, C., Urbanik, T., & Kohls, A. (2010). Reinforcement learning-based multi-agent system for network traffic signal control. IET Intelligent Transport Systems, 4(2), 128.
25. Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4, 237-285.
26. O'Keeffe, A., & McCarthy, M. (2010). The Routledge Handbook of Corpus Linguistics. Routledge.
27. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635.
28. Shi, Y., Erpek, T., Sagduyu, Y. E., & Li, J. H. (2018, October). Spectrum data poisoning with deep adversarial learning. In MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM) (pp. 407-412). IEEE.
29. Yang, Y., Zhang, G., Katabi, D., & Xu, Z. (2019). ME-Net: Towards effective adversarial robustness with matrix estimation. arXiv preprint arXiv:1905.11971.
30. Puiutta, E., & Veith, E. M. (2020, August). Explainable reinforcement learning: A survey. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction (pp. 77-95). Springer, Cham.
31. Pacheco, Y., & Sun, W. (2021). Adversarial machine learning: A comparative study on contemporary intrusion detection datasets.
32. Kober, J., Bagnell, J. A., & Peters, J. (2013). Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11), 1238-1274.
33. Gu, R., Niu, C., Wu, F., Chen, G., Hu, C., Lyu, C., & Wu, Z. (2021). From server-based to client-based machine learning: A comprehensive survey. ACM Computing Surveys (CSUR), 54(1), 1-36.
34. Google, "Responsible AI Practices." [Online]. Available: https://ai.google/responsibilities/responsible-ai-practices/?category=security
35. Martins, N., Cruz, J. M., Cruz, T., & Abreu, P. H. (2020). Adversarial machine learning applied to intrusion and malware scenarios: A systematic review. IEEE Access, 8, 35403-35419.
36. Gu, R., Niu, C., Wu, F., Chen, G., Hu, C., Lyu, C., & Wu, Z. (2021). From server-based to client-based machine learning: A comprehensive survey. ACM Computing Surveys (CSUR), 54(1), 1-36.
37. Lee, D., Kim, H., & Ryou, J. (2020, February). Evasion attack in Show and Tell model. In 2020 22nd International Conference on Advanced Communication Technology (ICACT) (pp. 181-184). IEEE.
38. Shirazi, S. H. A. A survey on adversarial machine learning.
39. Kumar, R. S. S., Nystrom, M., Lambert, J., Marshall, A., Goertzel, M., Comissoneru, A., ... & Xia, S. (2020, May). Adversarial machine learning-industry perspectives. In 2020 IEEE Security and Privacy Workshops (SPW) (pp. 69-75). IEEE.
40. Puiutta, E., & Veith, E. M. (2020, August). Explainable reinforcement learning: A survey. In International Cross-Domain Conference for Machine Learning and Knowledge Extraction (pp. 77-95). Springer, Cham.
41. Laskov, P., & Lippmann, R. (2010). Machine learning in adversarial environments. Machine Learning, 81(2), 115-119.
42. IBM, "Adversarial Machine Learning," Jul. 2016. [Online]. Available: https://ibm.co/36fhajg
43. Sequeira, P., & Gervasio, M. (2019). Interestingness elements for explainable reinforcement learning: Understanding agents' capabilities and limitations.
44. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354-359.
45. Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889-6892.
46. O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group, New York, NY.
47. Microsoft, "Securing the Future of AI and ML at Microsoft." [Online]. Available: https://docs.microsoft.com/en-us/security/securing-artificial-intelligence-machine-learning
48. Qin, L., Wang, G., Feng, L., Yang, S., & Jie, W. (2017). Preserving privacy with probabilistic indistinguishability in weighted social networks. IEEE Transactions on Parallel and Distributed Systems, 28(5), 1417-1429.
49. Duddu, V. (2018). A survey of adversarial machine learning in cyber warfare. Defence Science Journal, 68(4), 356.
50. Lin, X., Zhou, C., Yang, H., Wu, J., Wang, H., Cao, Y., & Wang, B. (2020, November). Exploratory adversarial attacks on graph neural networks. In 2020 IEEE International Conference on Data Mining (ICDM) (pp. 1136-1141). IEEE.
51. Flowers, B., Buehrer, R. M., & Headley, W. C. (2019). Evaluating adversarial evasion attacks in the context of wireless communications. IEEE Transactions on Information Forensics and Security, 15, 1102-1113.
52. Tolpegin, V., Truex, S., Gursoy, M. E., & Liu, L. (2020, September). Data poisoning attacks against federated learning systems. In European Symposium on Research in Computer Security (pp. 480-501). Springer, Cham.
53. Alsuwat, E., Alsuwat, H., Valtorta, M., & Farkas, C. (2020). Adversarial data poisoning attacks against the PC learning algorithm. International Journal of General Systems, 49(1), 3-31.
54. Lee, D., Kim, H., & Ryou, J. (2020, February). Poisoning attack on Show and Tell model and defense using autoencoder in electric factory. In 2020 IEEE International Conference on Big Data and Smart Computing (BigComp) (pp. 538-541). IEEE.
55. Alhajjar, E., Maxwell, P., & Bastian, N. D. (2020). Adversarial machine learning in network intrusion detection systems. arXiv preprint arXiv:2004.11898.
56. Shirazi, S. H. A. A survey on adversarial machine learning.
57. Huang, L., Joseph, A. D., Nelson, B., Rubinstein, B. I. P., & Tygar, J. D. (2011). Adversarial machine learning. In Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence (pp. 43-58).
58. Qian, Y., Ma, D., Wang, B., Pan, J., Wang, J., Gu, Z., ... & Lei, J. (2020). Spot evasion attacks: Adversarial examples for license plate recognition systems with convolutional neural networks. Computers & Security, 95, 101826.
59. Chan, P. P., He, Z., Hu, X., Tsang, E. C., Yeung, D. S., & Ng, W. W. (2021). Causative label flip attack detection with data complexity measures. International Journal of Machine Learning and Cybernetics, 12(1), 103-116.
60. Muñoz-González, L., Pfitzner, B., Russo, M., Carnerero-Cano, J., & Lupu, E. C. (2019). Poisoning attacks with generative adversarial nets. arXiv preprint arXiv:1906.07773.
61. Zhou, X., Xu, M., Wu, Y., & Zheng, N. (2021). Deep model poisoning attack on federated learning. Future Internet, 13(3), 73.
62. Yan, Z., Guo, Y., & Zhang, C. (2018). Deep Defense: Training DNNs with improved adversarial robustness. In Advances in Neural Information Processing Systems (pp. 419-428).
63. Sen, P. C., Hajra, M., & Ghosh, M. (2020). Supervised classification algorithms in machine learning: A survey and review. In Emerging Technology in Modeling and Graphics (pp. 99-111). Springer, Singapore.
64. Faria, J. M. (2018, February). Machine learning safety: An overview. In Proceedings of the 26th Safety-Critical Systems Symposium, York, UK (pp. 6-8).