[1] Kwon, Hyun, et al. "Multi-targeted adversarial example in evasion attack on deep neural network." IEEE Access 6 (2018): 46084-46096.
[2] Carlini, Nicholas, and David Wagner. "Audio adversarial examples: Targeted attacks on speech-to-text." 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018.
[3] Jin, Di, et al. "Is BERT really robust? A strong baseline for natural language attack on text classification and entailment." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 05. 2020.
[4] Carlini, Nicholas, and David Wagner. "Audio adversarial examples: Targeted attacks on speech-to-text." 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018.
[5] Kwon, Hyun, Hyunsoo Yoon, and Ki-Woong Park. "Acoustic-decoy: Detection of adversarial examples through audio modification on speech recognition system." Neurocomputing 417 (2020): 357-370.
[6] Kwon, Hyun, and Jun Lee. "AdvGuard: Fortifying deep neural networks against optimized adversarial example attack." IEEE Access (2020).
[7] Kwon, Hyun, et al. "Classification score approach for detecting adversarial example in deep neural network." Multimedia Tools and Applications (2020): 1-22.
[8] Papernot, Nicolas, et al. "Practical black-box attacks against machine learning." Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. 2017.
[9] Wu, Aming, et al. "Untargeted adversarial attack via expanding the semantic gap." 2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019.
[10] Kwon, Hyun, et al. "Random untargeted adversarial example on deep neural network." Symmetry 10.12 (2018): 738.
[11] Moosavi-Dezfooli, Seyed-Mohsen, Alhussein Fawzi, and Pascal Frossard. "DeepFool: A simple and accurate method to fool deep neural networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
[12] Papernot, Nicolas, et al. "The limitations of deep learning in adversarial settings." 2016 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2016.
[13] Carlini, Nicholas, and David Wagner. "Towards evaluating the robustness of neural networks." 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.
[14] He, Warren, Bo Li, and Dawn Song. "Decision boundary analysis of adversarial examples." International Conference on Learning Representations. 2018.
[15] Samangouei, Pouya, Maya Kabkab, and Rama Chellappa. "Defense-GAN: Protecting classifiers against adversarial attacks using generative models." arXiv preprint arXiv:1805.06605 (2018).
[16] Athalye, Anish, Nicholas Carlini, and David Wagner. "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples." International Conference on Machine Learning. PMLR, 2018.
[17] Hang, Jie, et al. "Ensemble adversarial black-box attacks against deep learning systems." Pattern Recognition 101 (2020): 107184.
[18] Kwon, Hyun, et al. "Advanced ensemble adversarial example on unknown deep neural network classifiers." IEICE Transactions on Information and Systems 101.10 (2018): 2485-2500.
[19] Meng, Dongyu, and Hao Chen. "MagNet: A two-pronged defense against adversarial examples." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.
[20] Xu, Weilin, David Evans, and Yanjun Qi. "Feature squeezing: Detecting adversarial examples in deep neural networks." arXiv preprint arXiv:1704.01155 (2017).
[21] Tramer, Florian, et al. "Ensemble adversarial training: Attacks and defenses." arXiv preprint arXiv:1705.07204 (2017).
[22] Kwon, Hyun, and Jun Lee. "Diversity adversarial training against adversarial attack on deep neural networks." Symmetry 13.3 (2021): 428.
[23] Shumailov, Ilia, et al. "Towards certifiable adversarial sample detection." Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security. 2020.
[24] Carlini, Nicholas, and David Wagner. "Adversarial examples are not easily detected: Bypassing ten detection methods." Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. 2017.
[25] Kwon, Hyun, Hyunsoo Yoon, and Ki-Woong Park. "POSTER: Detecting audio adversarial example through audio modification." Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 2019.
[26] Kwon, Hyun, et al. "Selective audio adversarial example in evasion attack on speech recognition system." IEEE Transactions on Information Forensics and Security 15 (2019): 526-538.
[27] Kwon, Hyun, Hyunsoo Yoon, and Ki-Woong Park. "Robust CAPTCHA image generation enhanced with adversarial example methods." IEICE Transactions on Information and Systems 103.4 (2020): 879-882.
[28] Tramer, Florian, et al. "On adaptive attacks to adversarial example defenses." arXiv preprint arXiv:2002.08347 (2020).
[29] Zhang, Guoming, et al. "DolphinAttack: Inaudible voice commands." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.
[30] Carlini, Nicholas, et al. "Hidden voice commands." 25th USENIX Security Symposium (USENIX Security 16). 2016.
[31] Garg, Siddhant, and Goutham Ramakrishnan. "BAE: BERT-based adversarial examples for text classification." arXiv preprint arXiv:2004.01970 (2020).
[32] Finlayson, Samuel G., et al. "Adversarial attacks against medical deep learning systems." arXiv preprint arXiv:1804.05296 (2018).
[33] Carlini, Nicholas, et al. "Provably minimally-distorted adversarial examples." arXiv preprint arXiv:1709.10207 (2017).
[34] Athalye, Anish, and Nicholas Carlini. "On the robustness of the CVPR 2018 white-box adversarial example defenses." arXiv preprint arXiv:1804.03286 (2018).
[35] Jiang, Linxi, et al. "Black-box adversarial attacks on video recognition models." Proceedings of the 27th ACM International Conference on Multimedia. 2019.
[36] Zhao, Yuhang, et al. "An universal perturbation generator for black-box attacks against object detectors." International Conference on Smart Computing and Communication. Springer, Cham, 2019.
[37] Huang, Qian, et al. "Enhancing adversarial example transferability with an intermediate level attack." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.
[38] Liu, Weibo, et al. "A survey of deep neural network architectures and their applications." Neurocomputing 234 (2017): 11-26.
[39] Parvathy, Velmurugan Subbiah, Sivakumar Pothiraj, and Jenyfal Sampson. "Optimal deep neural network model based multimodality fused medical image classification." Physical Communication 41 (2020): 101119.
[40] Jahangir, Rashid, et al. "Text-independent speaker identification through feature fusion and deep neural network." IEEE Access 8 (2020): 32187-32202.
[41] Xue, Mingfu, et al. "Machine learning security: Threats, countermeasures, and evaluations." IEEE Access 8 (2020): 74720-74742.
[42] Akhtar, Naveed, and Ajmal Mian. "Threat of adversarial attacks on deep learning in computer vision: A survey." IEEE Access 6 (2018): 14410-14430.
[43] Szegedy, Christian, et al. "Intriguing properties of neural networks." arXiv preprint arXiv:1312.6199 (2013).
[44] Liang, Bin, et al. "Detecting adversarial image examples in deep neural networks with adaptive noise reduction." IEEE Transactions on Dependable and Secure Computing (2018).
[45] Papernot, Nicolas, et al. "Distillation as a defense to adversarial perturbations against deep neural networks." 2016 IEEE Symposium on Security and Privacy (SP). IEEE, 2016.
[46] Ozbulak, Utku, Arnout Van Messem, and Wesley De Neve. "Impact of adversarial examples on deep learning models for biomedical image segmentation." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2019.
[47] Kwon, Hyun, Hyunsoo Yoon, and Ki-Woong Park. "Multi-targeted backdoor: Indentifying backdoor attack for multiple deep neural networks." IEICE Transactions on Information and Systems 103.4 (2020): 883-887.
[48] Yang, Chaofei, et al. "Generative poisoning attack method against neural networks." arXiv preprint arXiv:1703.01340 (2017).
[49] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).
[50] Su, Jiawei, Danilo Vasconcellos Vargas, and Kouichi Sakurai. "One pixel attack for fooling deep neural networks." IEEE Transactions on Evolutionary Computation 23.5 (2019): 828-841.