http://dx.doi.org/10.5351/KJAS.2021.34.2.279

Sufficient conditions for the oracle property in penalized linear regression  

Kwon, Sunghoon (Department of Applied Statistics, Konkuk University)
Moon, Hyeseong (Department of Applied Statistics, Konkuk University)
Chang, Jaeho (Department of Applied Statistics, Konkuk University)
Lee, Sangin (Department of Information and Statistics, Chungnam National University)
Publication Information
The Korean Journal of Applied Statistics, v.34, no.2, 2021, pp. 279-293
Abstract
In this paper, we introduce how to construct sufficient conditions for the oracle property in the penalized linear regression model. We give formal definitions of the oracle estimator, the penalized estimator, the oracle penalized estimator, and the oracle property of the penalized estimator. Based on these definitions, we present a unified way of constructing optimality conditions for the oracle property, together with sufficient conditions for those optimality conditions, that covers most existing penalties. In addition, we present an illustrative example and results from a numerical study.
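For readers unfamiliar with these objects, a minimal sketch of the standard definitions follows, written in the usual penalized-least-squares notation; the symbols y, X, beta*, A, and J_lambda are assumptions of this sketch rather than the paper's own notation, and the paper's formulation may differ in detail.

% Linear model y = X beta* + eps, with true support A = { j : beta*_j != 0 }.
% Oracle estimator: least squares fit using only the true support A,
% with all other coefficients set exactly to zero.
\[
  \hat{\beta}^{\mathrm{oracle}}_{A}
    = \bigl(X_{A}^{\top} X_{A}\bigr)^{-1} X_{A}^{\top} y,
  \qquad
  \hat{\beta}^{\mathrm{oracle}}_{j} = 0 \quad \text{for } j \notin A.
\]
% Penalized estimator: least squares loss plus a coordinatewise penalty
% J_lambda (e.g., lasso J_lambda(t) = lambda * t).
\[
  \hat{\beta}^{\mathrm{pen}}
    \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{p}}
      \Bigl\{ \tfrac{1}{2n}\, \lVert y - X\beta \rVert_{2}^{2}
              + \sum_{j=1}^{p} J_{\lambda}\bigl(\lvert \beta_{j} \rvert\bigr) \Bigr\}.
\]
% Oracle property: the penalized estimator coincides with the oracle
% estimator with probability tending to one.
\[
  \Pr\bigl( \hat{\beta}^{\mathrm{pen}} = \hat{\beta}^{\mathrm{oracle}} \bigr)
    \longrightarrow 1
  \quad \text{as } n \to \infty.
\]

In this reading, the oracle penalized estimator can be thought of as the oracle estimator regarded as a candidate minimizer of the penalized objective, and the paper's optimality conditions are conditions under which it is in fact a minimizer.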
Keywords
penalized estimator; oracle penalized estimator; oracle property