[1] C. Szegedy, W. Zaremba, I. Sutskever et al., "Intriguing properties of neural networks," International Conference on Learning Representations (ICLR), 2014.
[2] I. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," International Conference on Learning Representations (ICLR), 2015.
[3] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "DeepFool: A simple and accurate method to fool deep neural networks," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574-2582, Jun. 2016.
[4] N. Carlini and D. Wagner, "Towards evaluating the robustness of neural networks," 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57, May 2017.
[5] A. Athalye, N. Carlini, and D. Wagner, "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples," International Conference on Machine Learning (ICML), pp. 274-283, Jul. 2018.
[6] J. Su, D. Vasconcellos Vargas, and K. Sakurai, "One pixel attack for fooling deep neural networks," IEEE Transactions on Evolutionary Computation, vol. 23, no. 5, pp. 828-841, Oct. 2019.
[7] N. Papernot, P. McDaniel, I. Goodfellow et al., "Practical Black-Box Attacks against Machine Learning," 2017 ACM Asia Conference on Computer and Communications Security (ASIA CCS '17), pp. 506-519, Apr. 2017.
[8] N. Carlini and D. Wagner, "Audio Adversarial Examples: Targeted Attacks on Speech-to-Text," 2018 IEEE Security and Privacy Workshops (SPW), pp. 1-7, May 2018.
[9] M. Alzantot, B. Balaji, and M. Srivastava, "Did you hear that? Adversarial Examples Against Automatic Speech Recognition," Machine Deception Workshop, Neural Information Processing Systems (NIPS), 2017.
[10] X. Yuan, Y. Chen, Y. Zhao et al., "CommanderSong: A systematic approach for practical adversarial voice recognition," 27th USENIX Security Symposium, pp. 49-64, Aug. 2018.
[11] C. Kereliuk, B. L. Sturm, and J. Larsen, "Deep Learning and Music Adversaries," IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 2059-2071, Nov. 2015.
[12] L. Schönherr, K. Kohls, S. Zeiler et al., "Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding," Network and Distributed System Security Symposium (NDSS), 2019.
[13] H. Yakura and J. Sakuma, "Robust Audio Adversarial Example for a Physical Attack," 28th International Joint Conference on Artificial Intelligence (IJCAI), pp. 5334-5341, 2019.
[14] S. Hu, X. Shang, Z. Qin et al., "Adversarial Examples for Automatic Speech Recognition: Attacks and Countermeasures," IEEE Communications Magazine, vol. 57, no. 10, pp. 120-126, Oct. 2019.
[15] Simple audio recognition: Recognizing keywords. Accessed: September 13, 2021. [Online]. Available: https://www.tensorflow.org/tutorials/audio/simple_audio
[16] P. Warden, "Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition," arXiv preprint arXiv:1804., 2018.
[17] W. Brendel, J. Rauber, and M. Bethge, "Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models," International Conference on Learning Representations (ICLR), May 2018.
[18] P. Chen, H. Zhang, Y. Sharma et al., "ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models," 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 15-26, Nov. 2017.
[19] N. Papernot, P. McDaniel, S. Jha et al., "The limitations of deep learning in adversarial settings," 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372-387, Mar. 2016.
[20] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, "Synthesizing Robust Adversarial Examples," International Conference on Machine Learning (ICML), pp. 284-293, Jul. 2018.
[21] J. Chen, M. Jordan, and M. Wainwright, "HopSkipJumpAttack: A Query-Efficient Decision-Based Attack," 2020 IEEE Symposium on Security and Privacy (SP), pp. 1277-1294, May 2020.
[22] H. Abdullah, W. Garcia, C. Peeters et al., "Practical Hidden Voice Attacks against Speech and Speaker Recognition Systems," Network and Distributed System Security Symposium (NDSS), pp. 1369-1378, 2019.
[23] R. Taori, A. Kamsetty, B. Chu et al., "Targeted Adversarial Examples for Black Box Audio Systems," 2019 IEEE Security and Privacy Workshops (SPW), pp. 15-20, May 2019.