Funding
This thesis was supported by 'The Construction Project for Regional Base Information Security Cluster', a grant funded by the Ministry of Science and ICT and Busan Metropolitan City in 2024.
References
- Goodfellow, I., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. https://doi.org/10.48550/arXiv.1412.6572
- Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. https://doi.org/10.48550/arXiv.1706.06083
- Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. 2017 IEEE Symposium on Security and Privacy (SP), 39-57. https://doi.org/10.1109/SP.2017.49
- Eykholt, K., Evtimov, I., Fernandes, E., et al. (2018). Robust physical-world attacks on deep learning models. arXiv preprint arXiv:1707.08945.
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215. https://doi.org/10.1038/s42256-019-0048-x
- Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
- Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1-42. https://doi.org/10.1145/3236009
- Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency.
- Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
- Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Anderson, H. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228. https://doi.org/10.48550/arXiv.1802.07228
- Heidarivincheh, F., Mirmehdi, M., & Damen, D. (2019). Weakly-supervised completion moment detection using temporal attention. arXiv preprint arXiv:1910.09920. https://doi.org/10.48550/arXiv.1910.09920
- Finlayson, S. G., Bowers, J. D., Ito, J., Zittrain, J. L., Beam, A. L., & Kohane, I. S. (2019). Adversarial attacks on medical machine learning. Science, 363(6433), 1287-1289. https://doi.org/10.1126/science.aaw4399
- Sharif, M., Bhagavatula, S., Bauer, L., & Reiter, M. K. (2016). Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. https://doi.org/10.1145/2976749.2978392
- Moosavi-Dezfooli, S.-M., et al. (2016). DeepFool: A simple and accurate method to fool deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Described an iterative method for generating minimal adversarial perturbations.
- Moosavi-Dezfooli, S.-M., et al. (2017). Universal adversarial perturbations. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Proposed universal adversarial perturbations effective across different inputs.
- Papernot, N., et al. (2016). The limitations of deep learning in adversarial settings. IEEE European Symposium on Security and Privacy (EuroS&P). Introduced the JSMA method and discussed its effectiveness and limitations.
- Solon, O. (2020). How AI's interdisciplinary research approach can improve security. The Guardian.
- European Union. (2018). General Data Protection Regulation (GDPR). Official Journal of the European Union.
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). https://doi.org/10.1145/2939672.2939778
- Zhang, X., Zheng, Y., & Yang, H. (2020). Continuous monitoring for anomaly detection in AI systems. IEEE Transactions on Knowledge and Data Engineering.
- Vaudenay, S. (2001). On the security of encryption schemes. In Advances in Cryptology - CRYPTO 2001. Springer.
- Menezes, A. J., van Oorschot, P. C., & Vanstone, S. A. (1997). Handbook of Applied Cryptography (1st ed.). CRC Press. https://doi.org/10.1201/9780429466335
- Stallings, W. (2020). Computer Security: Principles and Practice. Pearson.
- Kahn, D. (1996). The Codebreakers: The Comprehensive History of Secret Communication from Ancient Times to the Internet. Scribner.
- Schwartz, M. G. (2004). Principles of Computer System Design: An Introduction. Morgan Kaufmann.
- Coulouris, G., Dollimore, J., & Kindberg, T. (2011). Distributed Systems: Concepts and Design. Addison-Wesley.
- Anderson, R. (2020). Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley.
- Brustoloni, J. C. (2005). Role-based access control in large-scale distributed systems. Proceedings of the IEEE Symposium on Security and Privacy.
- Mell, P., & Grance, T. (2011). The NIST Definition of Cloud Computing (NIST Special Publication 800-145). National Institute of Standards and Technology.
- Bellovin, S. M., & Cheswick, W. R. (2003). Firewalls and Internet Security: Repelling the Wily Hacker. Addison-Wesley.
- Kim, S. H., & Kim, D. S. (2013). Challenges and strategies in data security and privacy. International Journal of Information Security, 12(2), 103-115.
- Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
- Szegedy, C., et al. (2014). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
- Lipton, Z. C. (2016). The mythos of model interpretability. arXiv preprint arXiv:1606.03490. https://doi.org/10.48550/arXiv.1606.03490
- McMahan, B., et al. (2017). Communication-efficient learning of deep networks from decentralized data. Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS).
- Wang, L., et al. (2020). Supply chain risks in AI: Identification and mitigation. ACM Computing Surveys.
- Fredrikson, M., et al. (2015). Model inversion attacks that exploit confidence information and basic countermeasures. Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS).
- Shokri, R., et al. (2017). Membership inference attacks against machine learning models. 2017 IEEE Symposium on Security and Privacy (S&P).