http://dx.doi.org/10.30693/SMJ.2020.9.3.80

Removing Out-Of-Distribution Samples on Classification Task

Dang, Thanh-Vu (Department of ICT Convergence System Engineering, Chonnam National University)
Vo, Hoang-Trong (Department of ICT Convergence System Engineering, Chonnam National University)
Yu, Gwang-Hyun (Department of ICT Convergence System Engineering, Chonnam National University)
Lee, Ju-Hwan (Department of ICT Convergence System Engineering, Chonnam National University)
Nguyen, Huy-Toan (Department of ICT Convergence System Engineering, Chonnam National University)
Kim, Jin-Young (Department of ICT Convergence System Engineering, Chonnam National University)
Publication Information
Smart Media Journal / v.9, no.3, 2020, pp. 80-89
Abstract
Out-of-distribution (OOD) samples are frequently encountered when deploying a classification model in many real-world machine learning-based applications. Such samples are typically drawn from regions far from the training distribution, yet many classifiers still assign them high confidence of belonging to one of the training categories. In this study, we address the problem of removing OOD examples by estimating the marginal density with a variational autoencoder (VAE). We also investigate other suitable methods, such as temperature scaling, Gaussian discriminant analysis, and label smoothing. We use the Chonnam National University (CNU) weeds dataset as the in-distribution dataset and CIFAR-10 and Caltech as the OOD datasets. Quantitative results show that the proposed framework can reject OOD test samples with a suitable threshold.
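As a rough illustration of the main idea in the abstract, the sketch below scores each test sample by the negative ELBO of a VAE trained on in-distribution data and rejects samples whose score exceeds a threshold. This is a minimal sketch, not the authors' implementation: the fully connected layer sizes, the MSE reconstruction term, and the threshold choice are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """A small fully connected VAE; sizes are placeholders, not the paper's architecture."""
    def __init__(self, in_dim=3 * 32 * 32, hidden=512, latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def negative_elbo(model, x):
    # Per-sample negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).
    recon, mu, logvar = model(x)
    rec = F.mse_loss(recon, x, reduction="none").sum(dim=1)
    kl = -0.5 * (1.0 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
    return rec + kl

@torch.no_grad()
def is_ood(model, x, threshold):
    # Flag samples whose estimated density is too low (negative ELBO too high).
    return negative_elbo(model, x) > threshold

# Example usage: in practice the threshold would be chosen from held-out
# in-distribution scores (e.g. their 95th percentile) after training the VAE
# on the in-distribution images; the values here are stand-ins.
model = VAE().eval()
batch = torch.rand(8, 3 * 32 * 32)             # stand-in for a batch of flattened images
print(is_ood(model, batch, threshold=1000.0))  # threshold value is illustrative only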
Keywords
Out-of-distribution detection; Classification; Neural networks; Statistical modeling
Citations & Related Records
1 N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005
2 H. Bay, T. Tuytelaars and L. V. Gool, "Speeded up robust features," European Conference on Computer Vision, Berlin, 2006
3 T. Ahonen, A. Hadid and M. Pietikainen, "Face description with local binary patterns: Application to face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037-2041, 2006
4 A. Krizhevsky, I. Sutskever and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, pp. 1097-1105, 2012
5 O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy and L. Fei-Fei, "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015
6 N. Hoang, G. Lee, S. Kim and H. Yang, "Effective Hand Gesture Recognition by Key Frame Selection and 3D Neural Network," Smart Media Journal, vol. 9, no. 1, pp. 23-29, 2020
7 A. Nguyen, J. Yosinski and J. Clune, "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images," Computer Vision and Pattern Recognition, 2015
8 C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, "Rethinking the inception architecture for computer vision," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016
9 K. He, X. Zhang, S. Ren and J. Sun, "Deep residual learning for image recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016
10 A. Nguyen, J. Yosinski and J. Clune, "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images," Computer Vision and Pattern Recognition, 2015
11 D. Hendrycks and K. Gimpel, "A baseline for detecting misclassified and out-of-distribution examples in neural networks," International Conference on Learning Representations, 2017
12 K. Lee, K. Lee, H. Lee and J. Shin, "A simple unified framework for detecting out-of-distribution samples and adversarial attacks," Advances in Neural Information Processing Systems, pp. 7167-7177, 2018
13 J. Ren, P. J. Liu, E. Fertig, J. Snoek, R. Poplin, M. A. DePristo, J. V. Dillon and B. Lakshminarayanan, "Likelihood ratios for out-of-distribution detection," Advances in Neural Information Processing Systems, pp. 14680-14691, 2019
14 M. Germain, K. Gregor, I. Murray and H. Larochelle, "MADE: Masked Autoencoder for Distribution Estimation," International Conference on Machine Learning, 2015
15 K. Lee, H. Lee, K. Lee and J. Shin, "Training confidence-calibrated classifiers for detecting out-of-distribution samples," International Conference on Learning Representations, 2018
16 C. Guo, G. Pleiss, Y. Sun and K. Q. Weinberger, "On calibration of modern neural networks," Proceedings of the 34th International Conference on Machine Learning, 2017
17 D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint arXiv:1312.6114, 2013
18 T. H. Vo, G. H. Yu, V. T. Dang and J. Y. Kim, "Late fusion of multimodal deep neural networks for weeds classification," Computers and Electronics in Agriculture, vol. 175, p. 105506, 2020
19 T. H. Vo, G. H. Yu, H. Nguyen, J. H. Lee, T. V. Dang and J. Kim, "Analyze weeds classification with visual explanation based on Convolutional Neural Networks," Smart Media Journal, vol. 8, no. 3, pp. 31-40, 2019