http://dx.doi.org/10.5351/CSAM.2013.20.3.225

A Note on Linear SVM in Gaussian Classes  

Jeon, Yongho (Department of Applied Statistics, Yonsei University)
Publication Information
Communications for Statistical Applications and Methods / v.20, no.3, 2013, pp. 225-233
Abstract
The linear support vector machine (SVM) is motivated by the maximal margin separating hyperplane and is a popular tool for binary classification tasks. Many studies exist on the consistency properties of the SVM; however, it is unknown whether the linear SVM is consistent for estimating the optimal classification boundary even in the simple case of two Gaussian classes with a common covariance, where the optimal classification boundary is linear. In this paper, we show that the linear SVM can be inconsistent in the univariate Gaussian classification problem with a common variance, even when the best tuning parameter is used.
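As a rough illustration of the setting described in the abstract (a minimal sketch, not the paper's analysis), the code below simulates two univariate Gaussian classes with a common variance, computes the Bayes-optimal cut point implied by the class means, variance, and priors, and reports the cut point fitted by a linear SVM over a grid of tuning parameters. It uses scikit-learn's SVC with a linear kernel; the means, variance, class sizes, and C grid are illustrative assumptions, and the sketch only shows how the two boundaries can be compared, not the paper's inconsistency result.

```python
# Illustrative sketch (not from the paper): two univariate Gaussian classes with a
# common variance; compare the linear SVM cut point with the Bayes-optimal cut point.
# Means, variance, class sizes, and the grid of C values are arbitrary choices.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
mu_pos, mu_neg, sigma = 2.0, 0.0, 1.0
n_pos, n_neg = 3000, 1000  # unequal class priors (illustrative)

# Bayes-optimal boundary for two Gaussians with common variance and priors
# proportional to the class sizes:
#   x* = (mu_pos + mu_neg)/2 + sigma^2/(mu_pos - mu_neg) * log(n_neg/n_pos)
bayes_cut = (mu_pos + mu_neg) / 2 + sigma**2 / (mu_pos - mu_neg) * np.log(n_neg / n_pos)

X = np.concatenate([rng.normal(mu_pos, sigma, n_pos),
                    rng.normal(mu_neg, sigma, n_neg)]).reshape(-1, 1)
y = np.concatenate([np.ones(n_pos), -np.ones(n_neg)])

# The linear SVM decision function is f(x) = w*x + b, so its estimated cut
# point is -b/w; report it for several values of the tuning parameter C.
for C in (0.01, 0.1, 1.0, 10.0):
    fit = SVC(kernel="linear", C=C).fit(X, y)
    w, b = fit.coef_[0, 0], fit.intercept_[0]
    print(f"C={C:>5}: SVM cut = {-b / w:.3f}, Bayes cut = {bayes_cut:.3f}")
```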
Keywords
Consistency for classification; Fisher consistency; Gaussian linear discriminant analysis; support vector machines