http://dx.doi.org/10.29220/CSAM.2018.25.5.453

Non-convex penalized estimation for the AR process  

Na, Okyoung (Department of Applied Statistics, Kyonggi University)
Kwon, Sunghoon (Department of Applied Statistics, Konkuk University)
Publication Information
Communications for Statistical Applications and Methods, v.25, no.5, 2018, pp. 453-470
Abstract
We study how to distinguish the nonzero parameters of a sparse autoregressive (AR) process from zero using non-convex penalized estimation. A class of non-convex penalties is considered that includes the smoothly clipped absolute deviation (SCAD) and minimax concave penalties as special cases. We prove that the penalized estimators achieve standard theoretical properties, such as the weak and strong oracle properties, that have been established in the sparse linear regression framework. The results hold when the maximal order of the AR process increases to infinity and the minimal size of the true nonzero parameters decreases toward zero as the sample size increases. Further, we construct a practical method for selecting the tuning parameters using a generalized information criterion, whose minimizer asymptotically recovers the best theoretical non-penalized estimator of the sparse AR process. Simulation studies confirm the theoretical results.
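To make the setup concrete, the following is a minimal sketch in the standard notation for non-convex penalized AR estimation; the notation and the exact form of the criterion are assumptions here, not taken verbatim from the paper's full text. For a sparse AR(p) process $y_t = \sum_{j=1}^{p} \phi_j y_{t-j} + \epsilon_t$, the penalized estimator solves

$$ \hat{\boldsymbol{\phi}}_\lambda = \operatorname*{arg\,min}_{\boldsymbol{\phi} \in \mathbb{R}^p} \; \frac{1}{2n} \sum_{t=p+1}^{n} \Bigl( y_t - \sum_{j=1}^{p} \phi_j y_{t-j} \Bigr)^{2} + \sum_{j=1}^{p} J_\lambda(|\phi_j|), $$

where $J_\lambda$ is a non-convex penalty such as the SCAD penalty of Fan and Li (2001), defined through its derivative

$$ J_\lambda'(u) = \lambda \Bigl\{ I(u \le \lambda) + \frac{(a\lambda - u)_+}{(a-1)\lambda} \, I(u > \lambda) \Bigr\}, \qquad a > 2, $$

or the minimax concave penalty of Zhang (2010a), $J_\lambda(u) = \lambda \int_0^u \bigl(1 - x/(a\lambda)\bigr)_+ \, dx$ with $a > 1$. The tuning parameter $\lambda$ is then chosen by minimizing a generalized information criterion of the usual form, e.g. $\mathrm{GIC}_{a_n}(\lambda) = \log \hat{\sigma}_\lambda^2 + a_n |\hat{A}_\lambda| / n$, where $\hat{A}_\lambda = \{ j : \hat{\phi}_{\lambda j} \neq 0 \}$ is the selected set of nonzero coefficients and $a_n$ is a weight sequence.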
Keywords
autoregressive process; subset selection; non-convex penalty; oracle property; tuning parameter selection
References
1 Akaike H (1969). Fitting autoregressive models for prediction, Annals of the Institute of Statistical Mathematics, 21, 243-247.
2 Akaike H (1973). Information theory and an extension of the maximum likelihood principle. In Proceedings of the 2nd International Symposium on Information Theory, 267-281.
3 Akaike H (1979). A Bayesian extension of the minimum AIC procedure of autoregressive model fitting, Biometrika, 66, 237-242.
4 Bollerslev T (1986). Generalized autoregressive conditional heteroskedasticity, Journal of Econometrics, 31, 307-327.
5 Brockwell PJ and Davis RA (2006). Time Series: Theory and Methods (2nd ed), Springer, New York.
6 Chen C (1999). Subset selection of autoregressive time series models, Journal of Forecasting, 18, 505-516.
7 Claeskens G, Croux C, and Van Kerckhoven J (2007). Prediction focused model selection for autoregressive models, Australian & New Zealand Journal of Statistics, 49, 359-379.
8 Claeskens G and Hjort NL (2003). The focused information criterion, Journal of the American Statistical Association, 98, 900-916.
9 Fan J and Li R (2001). Variable selection via nonconcave penalized likelihood and its oracle properties, Journal of the American Statistical Association, 96, 1348-1360.
10 Fan J and Peng H (2004). Nonconcave penalized likelihood with a diverging number of parameters, The Annals of Statistics, 32, 928-961.
11 Friedman J, Hastie T, Höfling H, and Tibshirani R (2007). Pathwise coordinate optimization, The Annals of Applied Statistics, 1, 302-332.
12 Hannan EJ (1980). The estimation of the order of an ARMA process, The Annals of Statistics, 8, 1071-1081.
13 Hannan EJ and Quinn BG (1979). The determination of the order of an autoregression, Journal of the Royal Statistical Society, Series B, 41, 190-195.
14 Huang J, Horowitz JL, and Ma S (2008). Asymptotic properties of bridge estimators in sparse high-dimensional regression models, The Annals of Statistics, 36, 587-613.
15 Kim Y, Choi H, and Oh H (2008). Smoothly clipped absolute deviation on high dimensions, Journal of the American Statistical Association, 103, 1656-1673.
16 Kim Y, Jeon JJ, and Han S (2016). A necessary condition for the strong oracle property, Scandinavian Journal of Statistics, 43, 610-624.
17 Kim Y and Kwon S (2012). Global optimality of nonconvex penalized estimators, Biometrika, 99, 315-325.
18 Kim Y, Kwon S, and Choi H (2012). Consistent model selection criteria on high dimensions, Journal of Machine Learning Research, 13, 1037-1057.
19 Kwon S and Kim Y (2012). Large sample properties of the SCAD-penalized maximum likelihood estimation on high dimensions, Statistica Sinica, 22, 629-653.
20 Kwon S, Lee S, and Na O (2017). Tuning parameter selection for the adaptive lasso in the autoregressive model, Journal of the Korean Statistical Society, 46, 285-297.
21 Kwon S, Oh S, and Lee Y (2016). The use of random-effect models for high-dimensional variable selection problems, Computational Statistics & Data Analysis, 103, 401-412.
22 Lee S, Kwon S, and Kim Y (2016). A modified local quadratic approximation algorithm for penalized optimization problems, Computational Statistics & Data Analysis, 94, 275-286.
23 McClave J (1975). Subset autoregression, Technometrics, 17, 213-220.
24 McLeod AI and Zhang Y (2006). Partial autocorrelation parameterization for subset autoregression, Journal of Time Series Analysis, 27, 599-612.
25 Na O (2017). Generalized information criterion for the AR model, Journal of the Korean Statistical Society, 46, 146-160.
26 Nardi Y and Rinaldo A (2011). Autoregressive process modeling via the lasso procedure, Journal of Multivariate Analysis, 102, 528-549.
27 Sang H and Sun Y (2015). Simultaneous sparse model selection and coefficient estimation for heavy-tailed autoregressive processes, Statistics, 49, 187-208.
28 Sarkar A and Kanjilal PP (1995). On a method of identification of best subset model from full AR model, Communications in Statistics - Theory and Methods, 24, 1551-1567.
29 Schmidt DF and Makalic E (2013). Estimation of stationary autoregressive models with the Bayesian LASSO, Journal of Time Series Analysis, 34, 517-531.
30 Schwarz G (1978). Estimating the dimension of a model, The Annals of Statistics, 6, 461-464.
31 Shen X, Pan W, Zhu Y, and Zhou H (2013). On constrained and regularized high-dimensional regression, Annals of the Institute of Statistical Mathematics, 65, 807-832.
32 Shibata R (1976). Selection of the order of an autoregressive model by Akaike's information criterion, Biometrika, 63, 117-126.
33 Tibshirani RJ (1996). Regression shrinkage and selection via the LASSO, Journal of the Royal Statistical Society, Series B, 58, 267-288.
34 Tsay RS (1984). Order selection in nonstationary autoregressive models, The Annals of Statistics, 12, 1425-1433.
35 Wang H, Li B, and Leng C (2009). Shrinkage tuning parameter selection with a diverging number of parameters, Journal of the Royal Statistical Society, Series B, 71, 671-683.
36 Wang H, Li R, and Tsai C (2007). Tuning parameter selectors for the smoothly clipped absolute deviation method, Biometrika, 94, 553-568.
37 Wu WB (2005). Nonlinear system theory: another look at dependence, Proceedings of the National Academy of Sciences of the United States of America, 102, 14150-14154.
38 Wu WB (2011). Asymptotic theory for stationary processes, Statistics and Its Interface, 4, 207-226.
39 Ye F and Zhang CH (2010). Rate minimaxity of the Lasso and Dantzig selector for the ℓq loss in ℓr balls, Journal of Machine Learning Research, 11, 3519-3540.
40 Zhang CH (2010a). Nearly unbiased variable selection under minimax concave penalty, The Annals of Statistics, 38, 894-942.
41 Zhang CH and Zhang T (2012). A general theory of concave regularization for high-dimensional sparse estimation problems, Statistical Science, 27, 576-593.
42 Zhang T (2010b). Analysis of multi-stage convex relaxation for sparse regularization, Journal of Machine Learning Research, 11, 1081-1107.
43 Zou H (2006). The adaptive lasso and its oracle properties, Journal of the American Statistical Association, 101, 1418-1429.
44 Zou H and Li R (2008). One-step sparse estimates in nonconcave penalized likelihood models, The Annals of Statistics, 36, 1509-1533.