An Algorithm for Support Vector Machines with a Reject Option Using Bundle Method

  • Choi, Ho-Sik (Department of Informational Statistics and Institute of Basic Science, Hoseo University) ;
  • Kim, Yong-Dai (Department of Statistics, Seoul National University) ;
  • Han, Sang-Tae (Department of Informational Statistics and Institute of Basic Science, Hoseo University) ;
  • Kang, Hyun-Cheol (Department of Informational Statistics and Institute of Basic Science, Hoseo University)
  • Received : 20090900
  • Accepted : 20090900
  • Published : 2009.11.30

Abstract

A standard approach in classification is to assign a label to every future observation. In some cases, however, it is desirable to defer the decision, particularly for observations that are hard to classify: it may be better to run more advanced tests than to make a decision right away. This motivates a classifier with a reject option, which reports a warning for observations that are hard to classify. In this paper, we present a computationally efficient algorithm for support vector machines with a reject option, based on the bundle method. Numerical results show the strong potential of the proposed method.
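To illustrate the reject-option idea described above, the following is a minimal sketch (not the paper's algorithm): given real-valued decision scores f(x) from a trained classifier such as an SVM, a Chow-style rule predicts sign(f(x)) when the score is confidently away from the decision boundary and rejects otherwise. The threshold name `delta` and the use of 0 as the "reject" label are illustrative assumptions.

```python
import numpy as np

def classify_with_reject(scores, delta=0.5):
    """Chow-style reject rule: return +1/-1 when |f(x)| > delta,
    and 0 (reject, i.e. defer the decision) when |f(x)| <= delta."""
    scores = np.asarray(scores, dtype=float)
    labels = np.sign(scores).astype(int)
    labels[np.abs(scores) <= delta] = 0  # 0 marks "reject / hard to classify"
    return labels

# Observations near the boundary (|score| <= 0.5) are deferred.
print(classify_with_reject([2.1, -0.3, 0.05, -1.7], delta=0.5))  # [ 1  0  0 -1]
```

Larger values of `delta` widen the reject region, trading coverage for accuracy on the observations that are actually classified.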

Keywords

References

  1. Bartlett, P. and Wegkamp, M. H. (2008). Classification with a reject option using a hinge loss, Journal of Machine Learning Research, 9, 1823–1840
  2. Chow, C. K. (1970). On optimum recognition error and reject tradeoff, IEEE Transactions on Information Theory, 16, 41–46
  3. Cortes, C. and Vapnik, V. (1995). Support-vector networks, Machine Learning, 20, 273–297 https://doi.org/10.1007/BF00994018
  4. Hastie, T., Tibshirani, R. and Friedman, J. (2001). The Elements of Statistical Learning, First Edition, Springer-Verlag, New York
  5. Herbei, R. and Wegkamp, M. H. (2006). Classification with reject option, The Canadian Journal of Statistics, 34, 709–721 https://doi.org/10.1002/cjs.5550340410
  6. Landgrebe, T. C. W., Tax, D. M. J. and Duin, R. P. W. (2006). The interaction between classification and reject performance for distance-based reject-option classifiers, Pattern Recognition Letters, 27, 908–917
  7. Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461–464
  8. Teo, C. H., Le, Q., Smola, A. and Vishwanathan, S. V. N. (2007). A scalable modular convex solver for regularized risk minimization, Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 727–736 https://doi.org/10.1145/1281192.1281270
  9. Teo, C. H., Vishwanathan, S. V. N., Smola, A. and Le, Q. (2009). Bundle methods for regularized risk minimization, Journal of Machine Learning Research, To appear
  10. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso, Journal of the Royal Statistical Society: Series B, 58, 267–288
  11. Yukinawa, N., Oba, S., Kato, K., Taniguchi, K., Iwao-Koizumi, K., Tamaki, Y., Noguchi, S. and Ishii, S. (2006). A multi-class predictor based on a probabilistic model: Application to gene expression profiling-based diagnosis of thyroid tumors, BMC Bioinformatics, 7, 1471–2164
  12. Zhu, J., Rosset, S., Hastie, T. and Tibshirani, R. (2004). 1-norm support vector machines, In Thrun,S. et al. (eds). Advances in Neural Information Processing Systems, 16, MIT Press, Cambridge, MA