• Title/Summary/Keyword: Adaptive Lasso

Adaptive lasso in sparse vector autoregressive models (Adaptive lasso를 이용한 희박벡터자기회귀모형에서의 변수 선택)

  • Lee, Sl Gi;Baek, Changryong
    • The Korean Journal of Applied Statistics, v.29 no.1, pp.27-39, 2016
  • This paper considers variable selection in the sparse vector autoregressive (sVAR) model, where sparsity comes from setting small coefficients to exact zeros. From an estimation perspective, Davis et al. (2015) showed that lasso-type regularization is successful because it provides simultaneous variable selection and parameter estimation even for time series data. However, their simulation study reports that the regular lasso overestimates the number of non-zero coefficients, so its finite-sample performance needs improvement. In this article, we show that the adaptive lasso significantly improves performance, recovering sparsity patterns better than the regular lasso. Tuning parameter selection for the adaptive lasso is also discussed based on the simulation study.
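
The weighted penalty behind this improvement can be sketched in a few lines. The following is a minimal illustration of the adaptive lasso (not the authors' sVAR code), assuming a ridge pilot fit for the weights and using the standard column-rescaling reduction to an ordinary lasso; in an sVAR, each equation would regress one component of the series on the lagged values of all components.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def adaptive_lasso(X, y, lam=0.1, gamma=1.0):
    """Adaptive lasso via the standard column-rescaling trick.

    Weights w_j = 1 / |beta_init_j|**gamma come from a ridge pilot fit;
    columns are divided by w_j, an ordinary lasso is solved, and the
    coefficients are mapped back to the original scale.
    """
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)   # adaptive weights
    X_tilde = X / w                                  # column-wise rescaling
    fit = Lasso(alpha=lam, fit_intercept=False).fit(X_tilde, y)
    return fit.coef_ / w                             # back to the original scale
```

Because the weights grow as the pilot estimates shrink, coefficients that are truly zero receive heavier penalties than under the regular lasso, which is the mechanism behind the improved sparsity recovery reported above.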

Model selection for unstable AR process via the adaptive LASSO (비정상 자기회귀모형에서의 벌점화 추정 기법에 대한 연구)

  • Na, Okyoung
    • The Korean Journal of Applied Statistics, v.32 no.6, pp.909-922, 2019
  • In this paper, we study the adaptive least absolute shrinkage and selection operator (LASSO) for the unstable autoregressive (AR) model. To detect the existence of a unit root, we apply the adaptive LASSO to the augmented Dickey-Fuller (ADF) regression model rather than the original AR model. We illustrate our method with simulations and a real data analysis. Simulation results show that the adaptive LASSO obtained by minimizing the Bayesian information criterion selects the order of the autoregressive model as well as the degree of differencing with high accuracy.
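
Schematically, the penalized ADF regression described above takes the following form (a sketch based on the abstract; the exact deterministic terms, adaptive weights, and degrees-of-freedom definition follow the paper):

```latex
% Adaptive-lasso ADF regression (schematic)
\Delta y_t = \rho\, y_{t-1} + \sum_{j=1}^{p} \phi_j\, \Delta y_{t-j} + \varepsilon_t ,
\qquad
(\hat\rho, \hat{\boldsymbol\phi})
  = \arg\min_{\rho,\,\boldsymbol\phi}
  \sum_t \Bigl( \Delta y_t - \rho\, y_{t-1} - \sum_{j=1}^{p} \phi_j\, \Delta y_{t-j} \Bigr)^{2}
  + \lambda \Bigl( w_0 \lvert \rho \rvert + \sum_{j=1}^{p} w_j \lvert \phi_j \rvert \Bigr)
```

Here $\lambda$ is chosen by minimizing $\mathrm{BIC}(\lambda) = n \log \hat\sigma^2_\lambda + \log n \cdot \widehat{\mathrm{df}}(\lambda)$; a unit root is identified when $\hat\rho = 0$, and the selected AR order is read off from the non-zero $\hat\phi_j$.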

Determining the existence of unit roots based on detrended data (추세 제거된 시계열을 이용한 단위근 식별)

  • Na, Okyoung
    • The Korean Journal of Applied Statistics, v.34 no.2, pp.205-223, 2021
  • In this paper, we study a method to determine the existence of unit roots using the adaptive lasso. The previously proposed method, which applied the adaptive lasso to the original time series, has low power when there is an unknown trend. Therefore, we propose a modified version that applies the adaptive lasso to the detrended series instead of the original series, fitting the ADF regression model without a deterministic component. Our Monte Carlo simulation experiments show that the modified method improves power over the original method and works well in large samples.
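
A minimal sketch of the two-step idea (detrend first, then fit the ADF regression without deterministic terms), assuming OLS detrending on a linear trend and reusing the `adaptive_lasso` helper sketched earlier; the paper's exact lag choice and weight construction may differ:

```python
import numpy as np

def detrend(y):
    """Remove an OLS-fitted intercept and linear trend from the series."""
    t = np.arange(len(y), dtype=float)
    Z = np.column_stack([np.ones_like(t), t])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return y - Z @ coef

def adf_design(y, p):
    """ADF regression without deterministic terms: response dy_t,
    regressors [y_{t-1}, dy_{t-1}, ..., dy_{t-p}]."""
    dy = np.diff(y)
    X = [np.r_[y[t], dy[t - p:t][::-1]] for t in range(p, len(dy))]
    return np.array(X), dy[p:]

# Unit root check on the detrended series (sketch):
# X, dy = adf_design(detrend(y), p=4)
# rho_hat = adaptive_lasso(X, dy, lam=0.1)[0]   # zero => unit root
```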

Discrimination between trend and difference stationary processes based on adaptive lasso (Adaptive lasso를 이용하여 추세-정상시계열과 차분-정상시계열을 판별하는 방법에 대한 연구)

  • Na, Okyoung
    • The Korean Journal of Applied Statistics, v.33 no.6, pp.723-738, 2020
  • In this paper, we study a method to discriminate between trend-stationary and difference-stationary processes. Since a crucial ingredient of this discrimination is determining the existence of a unit root, a unit root testing strategy can be used. We therefore introduce a discrimination procedure based on unit root testing and propose a method using the adaptive lasso. Our Monte Carlo simulation experiments show that the adaptive lasso improves discrimination accuracy when the process is trend stationary, but has lower accuracy than the unit root test strategy when the process is difference stationary.

Comparison of Lasso Type Estimators for High-Dimensional Data

  • Kim, Jaehee
    • Communications for Statistical Applications and Methods, v.21 no.4, pp.349-361, 2014
  • This paper compares lasso-type estimators in various high-dimensional data situations with sparse parameters. Lasso, adaptive lasso, fused lasso, and elastic net as lasso-type estimators, along with the ridge estimator, are compared via simulation in linear models with correlated and uncorrelated covariates and in binary regression models with correlated and discrete covariates. Each method is shown to have advantages under different penalty conditions depending on the sparsity pattern of the regression parameters. We apply the lasso-type methods to Arabidopsis microarray gene expression data to find strongly significant genes that distinguish two groups.
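
As a toy illustration of such a comparison (not the paper's simulation design), the scikit-learn estimators can be run on a sparse linear model with mildly correlated covariates; the fused lasso is omitted because it is not in scikit-learn, and the adaptive lasso can be added with the helper sketched earlier.

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p)) + 0.5 * rng.standard_normal((n, 1))  # correlated covariates
beta = np.r_[np.ones(5), np.zeros(p - 5)]                             # sparse truth
y = X @ beta + rng.standard_normal(n)

fits = {
    "lasso": Lasso(alpha=0.1).fit(X, y).coef_,
    "elastic net": ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y).coef_,
    "ridge": Ridge(alpha=1.0).fit(X, y).coef_,
}
for name, coef in fits.items():
    print(f"{name:12s} nonzeros={np.sum(np.abs(coef) > 1e-8):2d} "
          f"error={np.linalg.norm(coef - beta):.3f}")
```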

Robust estimation of sparse vector autoregressive models (희박 벡터 자기 회귀 모형의 로버스트 추정)

  • Kim, Dongyeong;Baek, Changryong
    • The Korean Journal of Applied Statistics, v.35 no.5, pp.631-644, 2022
  • This paper considers robust estimation of the sparse vector autoregressive (sVAR) model, which is useful in high-dimensional time series analysis. First, we generalize the result of Xu et al. (2008) to show that the adaptive lasso indeed has some robustness in sVAR as well. However, the adaptive lasso in sVAR performs poorly as the number and size of outliers increase. Therefore, we propose new robust estimation methods for sVAR based on least absolute deviation (LAD) and Huber estimation. Our simulation results show that the proposed methods provide more accurate estimation and, in turn, better forecasting performance when outliers exist. In addition, we apply the proposed methods to power usage data and confirm that there are non-negligible outliers and that robust estimation taking such outliers into account improves forecasting.
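
The two robust losses mentioned can be combined with an $L_1$ penalty using standard library estimators. A hedged sketch (not the authors' estimator; in the sVAR setting each equation would regress one series on the lagged values of all series):

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor, SGDRegressor

def lad_lasso(X, y, lam=0.1):
    """L1-penalized least absolute deviation: median (pinball-0.5) loss + lasso penalty."""
    fit = QuantileRegressor(quantile=0.5, alpha=lam, fit_intercept=False)
    return fit.fit(X, y).coef_

def huber_lasso(X, y, lam=0.1, delta=1.35):
    """L1-penalized Huber regression fitted by stochastic gradient descent."""
    fit = SGDRegressor(loss="huber", epsilon=delta, penalty="l1", alpha=lam,
                       fit_intercept=False, max_iter=5000, tol=1e-6, random_state=0)
    return fit.fit(X, y).coef_
```

Both losses grow only linearly in large residuals, so a handful of outliers no longer dominates the fit the way it does under squared error.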

Sparse vector heterogeneous autoregressive model with nonconvex penalties

  • Shin, Andrew Jaeho;Park, Minsu;Baek, Changryong
    • Communications for Statistical Applications and Methods, v.29 no.1, pp.53-64, 2022
  • High-dimensional time series has gained considerable attention in recent years. The sparse vector heterogeneous autoregressive (VHAR) model proposed by Baek and Park (2020) uses the adaptive lasso and a debiasing procedure in estimation and has shown superb forecasting performance for realized volatilities. This paper extends the sparse VHAR model by considering non-convex penalties such as SCAD and MCP, whose penalty design allows possible bias reduction. The finite-sample performance of the three estimation methods is compared through Monte Carlo simulation. Our study shows, first, that taking cross-sectional correlations into account reduces bias. Second, non-convex penalties perform better when the sample size is small; on the other hand, the adaptive lasso with debiasing performs well as the sample size increases. An empirical analysis based on 20 multinational realized volatilities is also provided.
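
For reference, the two nonconvex penalties named above have the standard forms below (Fan and Li, 2001; Zhang, 2010), written for a scalar $\theta \ge 0$; unlike the lasso penalty $\lambda\theta$, both flatten out for large coefficients, which is the source of the bias reduction the abstract mentions.

```latex
% SCAD penalty (a > 2, typically a = 3.7)
p_{\lambda}^{\mathrm{SCAD}}(\theta) =
\begin{cases}
  \lambda\theta, & 0 \le \theta \le \lambda,\\[4pt]
  \dfrac{2a\lambda\theta - \theta^{2} - \lambda^{2}}{2(a-1)}, & \lambda < \theta \le a\lambda,\\[4pt]
  \dfrac{(a+1)\lambda^{2}}{2}, & \theta > a\lambda,
\end{cases}
\qquad
% MCP penalty (gamma > 1)
p_{\lambda}^{\mathrm{MCP}}(\theta) =
\begin{cases}
  \lambda\theta - \dfrac{\theta^{2}}{2\gamma}, & \theta \le \gamma\lambda,\\[4pt]
  \dfrac{\gamma\lambda^{2}}{2}, & \theta > \gamma\lambda.
\end{cases}
```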

A convenient approach for penalty parameter selection in robust lasso regression

  • Kim, Jongyoung;Lee, Seokho
    • Communications for Statistical Applications and Methods, v.24 no.6, pp.651-662, 2017
  • We propose an alternative procedure for selecting the penalty parameter in $L_1$-penalized robust regression. The procedure is based on marginalizing a prior distribution over the penalty parameter, so the resulting objective function no longer contains the penalty parameter. In addition, the estimating algorithm automatically chooses a penalty parameter using the previous estimate of the regression coefficients. The proposed approach bypasses cross-validation and saves computing time, and variable-wise penalization performs best in terms of prediction and variable selection. Numerical studies using simulated data demonstrate the performance of our proposals, and the proposed methods are applied to the Boston housing data. Through the simulation study and the real data application, we show that our proposals are competitive with or much better than cross-validation in terms of prediction, variable selection, and computing time.

Penalized variable selection for accelerated failure time models

  • Park, Eunyoung;Ha, Il Do
    • Communications for Statistical Applications and Methods, v.25 no.6, pp.591-604, 2018
  • The accelerated failure time (AFT) model is a linear model for the log-transformed survival time that has been introduced as a useful alternative to the proportional hazards (PH) model. In this paper we propose variable selection procedures for the fixed effects in a parametric AFT model using penalized likelihood approaches. We use three popular penalty functions: the least absolute shrinkage and selection operator (LASSO), the adaptive LASSO, and the smoothly clipped absolute deviation (SCAD). With these procedures we can select important variables and estimate the fixed effects at the same time. The performance of the proposed method is evaluated through simulation studies, including an investigation of the impact of misspecifying the assumed distribution. The method is illustrated with a primary biliary cirrhosis (PBC) data set.
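
Schematically, the penalized criterion described above combines the parametric AFT log-likelihood with one of the three penalties (a sketch; the paper's exact scaling of the penalty term may differ):

```latex
% Parametric AFT model and penalized log-likelihood (schematic)
\log T_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \sigma \varepsilon_i ,
\qquad
Q_{\lambda}(\boldsymbol{\beta}, \sigma)
  = \ell(\boldsymbol{\beta}, \sigma)
  - n \sum_{j=1}^{p} p_{\lambda}\!\left( \lvert \beta_j \rvert \right)
```

Here $\ell$ is the censored-data log-likelihood under the assumed error distribution and $p_\lambda$ is the LASSO penalty $\lambda|\beta_j|$, the adaptive LASSO penalty $\lambda w_j|\beta_j|$, or the SCAD penalty; maximizing $Q_\lambda$ shrinks unimportant coefficients to zero while estimating the remaining fixed effects.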

Penalized least distance estimator in the multivariate regression model (다변량 선형회귀모형의 벌점화 최소거리추정에 관한 연구)

  • Jungmin Shin;Jongkyeong Kang;Sungwan Bang
    • The Korean Journal of Applied Statistics, v.37 no.1, pp.1-12, 2024
  • In many real-world data sets, multiple response variables depend on the same set of explanatory variables. In particular, if several response variables are correlated with each other, simultaneous estimation that accounts for the correlation between them can be more effective than analyzing each response variable individually. In this multivariate regression setting, the least distance estimator (LDE) estimates the regression coefficients simultaneously by minimizing the distance between each training observation and its fitted value in multidimensional Euclidean space, and it also provides robustness against outliers. In this paper, we examine the least distance estimation method in multivariate linear regression analysis and, furthermore, present the penalized least distance estimator (PLDE) for efficient variable selection. We propose the LDE technique with an adaptive group LASSO penalty term (AGLDE), which reflects the correlation between response variables and efficiently selects variables according to the importance of the explanatory variables. The validity of the proposed method is confirmed through simulations and real data analysis.
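
Schematically, the penalized least distance criterion described above can be written as follows (a sketch based on the abstract; the exact adaptive weights $w_j$ follow the paper):

```latex
% Penalized least distance estimation with an adaptive group lasso penalty (schematic)
\hat{\mathbf{B}}
  = \arg\min_{\mathbf{B}\in\mathbb{R}^{p\times q}}
    \sum_{i=1}^{n} \bigl\lVert \mathbf{y}_i - \mathbf{B}^{\top}\mathbf{x}_i \bigr\rVert_2
    + \lambda \sum_{j=1}^{p} w_j \,\bigl\lVert \boldsymbol{\beta}_{(j)} \bigr\rVert_2
```

Here $\boldsymbol{\beta}_{(j)}$ denotes the $j$-th row of $\mathbf{B}$, i.e., the coefficients of the $j$-th explanatory variable across all $q$ responses; because the Euclidean norm of the residual vector couples the responses, correlated responses are fitted jointly, and a row shrunk entirely to zero removes that explanatory variable from every response equation.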