1. Krishnapuram, B., Carin, L., Figueiredo, M. A. T. and Hartemink, A. J. (2005). Sparse multinomial logistic regression: fast algorithms and generalization bounds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27, 957-968.
2. Rifkin, R. and Klautau, A. (2004). In defense of one-vs-all classification. Journal of Machine Learning Research, 5, 101-141.
3. Minka, T. (2003). A comparison of numerical optimizers for logistic regression. Technical Report, Department of Statistics, Carnegie Mellon University.
4. Lawrence, N. D., Seeger, M. and Herbrich, R. (2003). Fast sparse Gaussian process methods: the informative vector machine. Advances in Neural Information Processing Systems, 15, 609-616.
5. Mercer, J. (1909). Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London, 209, 415-446.
6. Csato, L. and Opper, M. (2002). Sparse online Gaussian processes. Neural Computation, 14, 641-668.
7. Kimeldorf, G. S. and Wahba, G. (1971). Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33, 82-95.
8. Bohning, D. (1992). Multinomial logistic regression algorithm. Annals of the Institute of Statistical Mathematics, 44, 197-200.
9. Tipping, M. (2001). Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1, 211-244.
10. Vapnik, V. N. (1995). The Nature of Statistical Learning Theory. Springer-Verlag, New York.
11. Cawley, G. C., Talbot, N. L. C. and Girolami, M. (2006). Sparse multinomial logistic regression via Bayesian L1 regularisation. Advances in Neural Information Processing Systems, 18, 609-616.