http://dx.doi.org/10.7465/jkdi.2012.23.2.385

Semiparametric kernel logistic regression with longitudinal data  

Shim, Joo-Yong (Department of Data Science, Inje University)
Seok, Kyung-Ha (Department of Data Science, Inje University)
Publication Information
Journal of the Korean Data and Information Science Society, v.23, no.2, 2012, pp. 385-392
Abstract
Logistic regression is a well-known binary classification method in statistical learning. Mixed-effects regression models are widely used for the analysis of correlated data such as those arising in longitudinal studies. We consider kernel extensions of logistic regression with semiparametric fixed effects and parametric random effects. Estimation is performed through the penalized likelihood method based on the kernel trick, with a focus on efficient computation and effective hyperparameter selection. Cross-validation techniques are employed to select the optimal hyperparameters. Numerical results are presented to illustrate the performance of the proposed procedure.
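The Python sketch below (not part of the published article) illustrates only the penalized-likelihood / kernel-trick idea summarized above, for a plain kernel logistic regression without the semiparametric fixed effects or random effects of the paper, and with hyperparameters chosen by ordinary K-fold cross-validated deviance rather than a generalized cross-validation criterion. The Gaussian kernel, the function names, and the toy data are all illustrative assumptions.

import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X1 and X2.
    sq_dist = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dist)

def fit_kernel_logistic(K, y, lam, n_iter=50, tol=1e-8):
    # Maximize the penalized log-likelihood
    #   sum_i [ y_i f_i - log(1 + exp(f_i)) ] - (lam / 2) a' K a,   f = K a,
    # by Newton-Raphson (IRLS) on the representer coefficients a.
    n = len(y)
    a = np.zeros(n)
    for _ in range(n_iter):
        f = K @ a
        p = 0.5 * (1.0 + np.tanh(0.5 * f))      # numerically stable sigmoid
        w = p * (1.0 - p)                       # IRLS weights
        # Newton system: (W K + lam I) a_new = W f + (y - p)
        A = w[:, None] * K + lam * np.eye(n)
        a_new = np.linalg.solve(A, w * f + (y - p))
        if np.max(np.abs(a_new - a)) < tol:
            return a_new
        a = a_new
    return a

def cv_deviance(X, y, lam, gamma, n_folds=5, seed=0):
    # K-fold cross-validated binomial deviance for one (lam, gamma) pair.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    dev = 0.0
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        K_tr = rbf_kernel(X[train], X[train], gamma)
        a = fit_kernel_logistic(K_tr, y[train], lam)
        f_te = rbf_kernel(X[test], X[train], gamma) @ a
        p_te = 0.5 * (1.0 + np.tanh(0.5 * f_te))
        eps = 1e-12
        dev += -2.0 * np.sum(y[test] * np.log(p_te + eps)
                             + (1.0 - y[test]) * np.log(1.0 - p_te + eps))
    return dev

if __name__ == "__main__":
    # Toy data with a nonlinear decision boundary (purely illustrative).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.5).astype(float)
    grid = [(lam, gam) for lam in (0.01, 0.1, 1.0) for gam in (0.5, 1.0, 2.0)]
    best = min(grid, key=lambda hp: cv_deviance(X, y, *hp))
    print("selected (lambda, gamma):", best)

Each Newton step solves the ridge-regularized system (W K + lam I) a = W f + (y - p), the kernel analogue of iteratively reweighted least squares, so the cost is one n-by-n linear solve per iteration.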
Keywords
Generalized cross-validation function; kernel trick; logistic regression; longitudinal data; mixed-effects model; penalized likelihood