http://dx.doi.org/10.14372/IEMEK.2021.16.1.1

Adversarial-Mixup: Increasing Robustness to Out-of-Distribution Data and Reliability of Inference  

Gwon, Kyungpil (Daegu University)
Yoo, Joonhyuk (Daegu University)
Abstract
Detecting Out-of-Distribution (OOD) data is a fundamental requirement when Deep Neural Networks (DNNs) are deployed in real-world AI applications such as autonomous driving. However, modern DNNs suffer from the over-confidence problem: they produce highly confident predictions even when the test data lie far from the training distribution. To address this problem, this paper proposes a novel Adversarial-Mixup training method that makes the DNN model more robust by detecting OOD data effectively. Experimental results show that the proposed Adversarial-Mixup method improves overall OOD-detection performance by 78% compared with state-of-the-art methods. Furthermore, we show that the proposed method alleviates the over-confidence problem by assigning lower confidence scores to OOD data than previous methods do, resulting in more reliable and robust DNNs.
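The abstract does not spell out the training procedure, but it names its two ingredients: mixup interpolation (reference 3) and adversarial examples (reference 1). A minimal sketch of how the two might be combined is shown below; the logistic model, the function names, and the hyperparameters `alpha` and `eps` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps=0.1):
    """FGSM adversarial example (Goodfellow et al., reference 1) for a
    toy logistic model. The binary-cross-entropy gradient w.r.t. the
    input is (sigmoid(x.w + b) - y) * w, so no autograd is needed."""
    grad_x = (sigmoid(x @ w + b) - y)[:, None] * w
    return x + eps * np.sign(grad_x)

def adversarial_mixup(x, y, w, b, alpha=1.0, eps=0.1):
    """Hypothetical Adversarial-Mixup batch: each clean sample is mixed
    with an adversarially perturbed partner from a shuffled copy of the
    batch, with mixing weight lam ~ Beta(alpha, alpha) (reference 3)."""
    idx = rng.permutation(len(x))
    x_adv = fgsm(x[idx], y[idx], w, b, eps)
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x + (1.0 - lam) * x_adv
    y_mix = lam * y + (1.0 - lam) * y[idx]   # soft labels in [0, 1]
    return x_mix, y_mix
```

Training on such mixed adversarial pairs encourages the model to output intermediate (soft) confidence between clean and perturbed inputs, which is consistent with the abstract's goal of lowering confidence on data off the training distribution.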
Keywords
Out-of-distribution detection; Adversarial-mixup training; Deep neural networks;
  • Reference
1 I.J. Goodfellow, J. Shlens, C. Szegedy, "Explaining and Harnessing Adversarial Examples," International Conference on Learning Representations(ICLR), 2015.
2 A. Nguyen, J. Yosinski, J. Clune, "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images," Computer Vision and Pattern Recognition(CVPR), pp. 427-436, 2015.
3 H. Zhang, M. Cisse, Y.N. Dauphin, D. Lopez-Paz, "Mixup: Beyond Empirical Risk Minimization," International Conference on Learning Representations(ICLR), 2018.
4 K. Lee, H. Lee, K. Lee, J. Shin, "Training Confidence-Calibrated Classifiers for Detecting Out-of-Distribution Samples," International Conference on Learning Representations(ICLR), 2018.
5 D. Hendrycks, M. Mazeika, T. Dietterich, "Deep Anomaly Detection with Outlier Exposure," International Conference on Learning Representations(ICLR), 2019.
6 S. Hawkins, H. He, G. Williams, R. Baxter, "Outlier Detection Using Replicator Neural Networks," International Conference on Data Warehousing and Knowledge Discovery, Springer, pp. 170-180, 2002.
7 L. Ruff, R.A. Vandermeulen, N. Gornitz, L. Deecke, S.A. Siddiqui, A. Binder, E. Müller, M. Kloft, "Deep One-Class Classification," Proceedings of the 35th International Conference on Machine Learning(ICML), pp. 4393-4402, 2018.
8 J. Ren, P.J. Liu, E. Fertig, J. Snoek, R. Poplin, M. Depristo, J. Dillon, B. Lakshminarayanan, "Likelihood Ratios for Out-of-Distribution Detection," 33rd Conference on Neural Information Processing Systems(NeurIPS), pp. 14707-14718, 2019.
9 D. Hendrycks, K. Gimpel, "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks," International Conference on Learning Representations(ICLR), 2017.
10 S. Liang, Y. Li, R. Srikant, "Enhancing The Reliability of Out-of-Distribution Image Detection in Neural Networks," International Conference on Learning Representations(ICLR), 2018.
11 C. Guo, G. Pleiss, Y. Sun, K.Q. Weinberger, "On Calibration of Modern Neural Networks," Proceedings of the 34th International Conference on Machine Learning(ICML), 2017.
12 K. Gwon, J. Yoo, "Out-of-Distribution Data Detection Using Mahalanobis Distance for Reliable Deep Neural Networks," Proceedings of 2020 IEMEK Symposium on Embedded Technology(ISET 2020), 2020 (in Korean).
13 K. Lee, K. Lee, H. Lee, J. Shin, "A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks," 32nd Conference on Neural Information Processing Systems(NeurIPS), pp. 7167-7177, 2018.
14 A. Laugros, A. Caplier, M. Ospici, "Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training," European Conference on Computer Vision(ECCV), 2020.
15 S. Thulasidasan, G. Chennupati, J. Bilmes, T. Bhattacharya, S. Michalak, "On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks," 33rd Conference on Neural Information Processing Systems(NeurIPS), 2019.
16 A. Kurakin, I.J. Goodfellow, S. Bengio, "Adversarial Examples in the Physical World," International Conference on Learning Representations(ICLR), 2017.
17 C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I.J. Goodfellow, R. Fergus, "Intriguing Properties of Neural Networks," International Conference on Learning Representations(ICLR), 2014.