• Title/Summary/Keyword: adaptive lasso estimation

Search results: 10

Model selection for unstable AR process via the adaptive LASSO (A study on penalized estimation methods for nonstationary autoregressive models)

  • Na, Okyoung
    • The Korean Journal of Applied Statistics, v.32 no.6, pp.909-922, 2019
  • In this paper, we study the adaptive least absolute shrinkage and selection operator (LASSO) for the unstable autoregressive (AR) model. To identify the existence of a unit root, we apply the adaptive LASSO to the augmented Dickey-Fuller regression model rather than the original AR model. We illustrate our method with simulations and a real data analysis. Simulation results show that the adaptive LASSO obtained by minimizing the Bayesian information criterion selects both the order of the autoregressive model and the degree of differencing with high accuracy.
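The adaptive LASSO reweights the L1 penalty by a pilot (e.g., OLS) estimate, so large coefficients are penalized lightly and small ones heavily. A minimal coordinate-descent sketch of that general idea, not the paper's implementation (function names and tuning values are our own):

```python
import numpy as np

def soft_threshold(z, t):
    """Scalar soft-thresholding operator."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def adaptive_lasso(X, y, lam, gamma=1.0, n_iter=200):
    """Coordinate descent for (1/2)||y - Xb||^2 + n*lam*sum(w_j|b_j|),
    with data-driven weights w_j = 1/|b_ols_j|^gamma (Zou, 2006)."""
    n, p = X.shape
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # pilot estimate
    w = 1.0 / (np.abs(b_ols) ** gamma + 1e-12)
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = y - X @ b + X[:, j] * b[j]
            z = X[:, j] @ r
            b[j] = soft_threshold(z, n * lam * w[j]) / col_sq[j]
    return b
```

In practice one would fit this over a grid of `lam` values and keep the fit minimizing BIC, as the abstract describes.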

Adaptive lasso in sparse vector autoregressive models (Variable selection in sparse vector autoregressive models using the adaptive lasso)

  • Lee, Sl Gi;Baek, Changryong
    • The Korean Journal of Applied Statistics, v.29 no.1, pp.27-39, 2016
  • This paper considers variable selection in the sparse vector autoregressive (sVAR) model, where sparsity comes from setting small coefficients to exact zeros. From the estimation perspective, Davis et al. (2015) showed that lasso-type regularization is successful because it provides simultaneous variable selection and parameter estimation even for time series data. However, their simulation study reports that the regular lasso overestimates the number of non-zero coefficients, so its finite-sample performance needs improvement. In this article, we show that the adaptive lasso significantly improves performance, recovering sparsity patterns better than the regular lasso. Tuning parameter selection for the adaptive lasso is also discussed based on the simulation study.
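Fitting a VAR(p) reduces to a multivariate regression once the lagged design matrix is built; each equation can then be penalized separately with the (adaptive) lasso. A small sketch of that design step (our own helper, not the authors' code):

```python
import numpy as np

def var_design(series, p):
    """Build response and lagged-regressor matrices for a VAR(p).
    series: (T, k) array. Returns Y of shape (T-p, k) and X of
    shape (T-p, k*p), with columns ordered lag 1, ..., lag p."""
    T, k = series.shape
    Y = series[p:]                                   # y_t for t = p..T-1
    X = np.hstack([series[p - l: T - l]              # y_{t-l} aligned to Y
                   for l in range(1, p + 1)])
    return Y, X
```

Each column of `Y` can then be regressed on `X` with a penalized estimator, which is what makes the simultaneous selection-and-estimation the abstract mentions possible.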

Discrimination between trend and difference stationary processes based on adaptive lasso (A study on a method for discriminating between trend-stationary and difference-stationary time series using the adaptive lasso)

  • Na, Okyoung
    • The Korean Journal of Applied Statistics, v.33 no.6, pp.723-738, 2020
  • In this paper, we study a method to discriminate between trend stationary and difference stationary processes. Since a crucial ingredient of this discrimination is determining the existence of a unit root, we can use a unit root testing strategy. We therefore introduce a discrimination procedure based on unit root testing and propose a method using the adaptive lasso. Our Monte Carlo simulation experiments show that the adaptive lasso improves the discrimination accuracy when the process is trend stationary, but has lower accuracy than the unit root testing strategy when the process is difference stationary.

Determining the existence of unit roots based on detrended data (Identifying unit roots using detrended time series)

  • Na, Okyoung
    • The Korean Journal of Applied Statistics, v.34 no.2, pp.205-223, 2021
  • In this paper, we study a method to determine the existence of unit roots using the adaptive lasso. The previously proposed method, which applies the adaptive lasso to the original time series, has low power when there is an unknown trend. We therefore propose a modified version that applies the adaptive lasso to the detrended series instead of the original series, fitting the ADF regression model without a deterministic component. Our Monte Carlo simulation experiments show that the modified method improves the power over the original method and works well in large samples.
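The modification amounts to removing a fitted deterministic trend first and then building the ADF regression Δy_t = ρ·y_{t-1} + Σ φ_i·Δy_{t-i} + ε_t with no intercept or trend terms. A sketch of the two steps, assuming a linear trend (hypothetical helpers, not the paper's code):

```python
import numpy as np

def detrend(y):
    """Remove a fitted linear trend (intercept + slope) by OLS."""
    t = np.arange(len(y), dtype=float)
    A = np.column_stack([np.ones_like(t), t])
    coef = np.linalg.lstsq(A, y, rcond=None)[0]
    return y - A @ coef

def adf_design(y, p):
    """Response dy_t and regressors [y_{t-1}, dy_{t-1}, ..., dy_{t-p}]
    for the ADF regression with no deterministic component."""
    dy = np.diff(y)
    resp = dy[p:]
    cols = [y[p:-1]] + [dy[p - l: len(dy) - l] for l in range(1, p + 1)]
    return resp, np.column_stack(cols)
```

The adaptive lasso is then applied to the columns of this design matrix; a zero estimate for the `y[p:-1]` (level) coefficient indicates a unit root.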

Robust estimation of sparse vector autoregressive models

  • Kim, Dongyeong;Baek, Changryong
    • The Korean Journal of Applied Statistics, v.35 no.5, pp.631-644, 2022
  • This paper considers robust estimation of the sparse vector autoregressive (sVAR) model, which is useful in high-dimensional time series analysis. First, we generalize the result of Xu et al. (2008), showing that the adaptive lasso is robust in sVAR as well. However, the adaptive lasso performs poorly in sVAR as the number and size of outliers increase. Therefore, we propose new robust estimation methods for sVAR based on least absolute deviation (LAD) and Huber estimation. Our simulation results show that the proposed methods provide more accurate estimation and, in turn, better forecasting performance when outliers exist. In addition, we applied the proposed methods to power usage data and confirmed that the data contain non-negligible outliers and that robust estimation accounting for them improves forecasting.
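Huber estimation replaces the squared loss by one that grows linearly for large residuals, bounding the influence of outliers; it is typically computed by iteratively reweighted least squares (IRLS). A generic sketch for ordinary regression, not the paper's penalized sVAR estimator:

```python
import numpy as np

def huber_weights(u, c=1.345):
    """IRLS weights for the Huber loss: 1 inside [-c, c], c/|u| outside."""
    a = np.abs(u)
    w = np.ones_like(a)
    w[a > c] = c / a[a > c]
    return w

def huber_regression(X, y, c=1.345, n_iter=50):
    """Huber M-estimator via IRLS with a MAD-based residual scale."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS start
    for _ in range(n_iter):
        r = y - X @ b
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12
        w = huber_weights(r / scale, c)
        # weighted least squares step
        b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return b
```

A single gross outlier barely moves the Huber fit, while it visibly biases the OLS slope, which mirrors the robustness comparison in the abstract.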

Penalized least distance estimator in the multivariate regression model (A study on penalized least distance estimation in the multivariate linear regression model)

  • Shin, Jungmin;Kang, Jongkyeong;Bang, Sungwan
    • The Korean Journal of Applied Statistics, v.37 no.1, pp.1-12, 2024
  • In many real-world data sets, multiple response variables depend on the same set of explanatory variables. In particular, if several response variables are correlated with each other, simultaneous estimation that accounts for this correlation can be more effective than analyzing each response variable individually. In such multivariate regression analysis, the least distance estimator (LDE) estimates the regression coefficients simultaneously by minimizing the distance between each training observation and its fitted value in multidimensional Euclidean space; it is also robust to outliers. In this paper, we examine least distance estimation in multivariate linear regression and present the penalized least distance estimator (PLDE) for efficient variable selection. We propose the LDE combined with an adaptive group LASSO penalty term (AGLDE), which reflects the correlation between response variables and efficiently selects variables according to the importance of the explanatory variables. The validity of the proposed method is confirmed through simulations and real data analysis.
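The group LASSO penalty λ·Σ_g ||β_g||₂ shrinks a whole coefficient group at once via the group soft-thresholding operator, which is what lets a group-penalized estimator keep or drop an explanatory variable across all responses simultaneously. A minimal sketch of that operator (illustrative, not the authors' algorithm):

```python
import numpy as np

def group_soft_threshold(v, t):
    """Proximal operator of t*||v||_2: shrink the whole group v toward
    zero, setting it exactly to zero when its norm is at most t."""
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)
    return (1.0 - t / norm) * v
```

In the adaptive variant, `t` is scaled per group by a weight from a pilot estimate, so unimportant explanatory variables are zeroed out more aggressively.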

Controlling the false discovery rate in sparse VHAR models using knockoffs

  • Park, Minsu;Lee, Jaewon;Baek, Changryong
    • The Korean Journal of Applied Statistics, v.35 no.6, pp.685-701, 2022
  • The false discovery rate (FDR) is widely used in high-dimensional inference because it provides a more liberal criterion than the familywise error rate (FWER), which controls Type I errors very conservatively. This paper proposes a sparse VHAR model estimation method that controls the FDR by adapting the knockoff filter introduced by Barber and Candès (2015). We also compare the knockoff with the conventional adaptive lasso (AL) method through an extensive simulation study. We observe that AL shows sparsistency and decent forecasting performance; however, it does not control the FDR satisfactorily, tending to estimate zero coefficients as non-zero. On the other hand, the knockoff controls the FDR well under the desired level, but it selects an overly sparse model when the sample size is small. The knockoff improves dramatically as the sample size increases and the model becomes sparser.
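In the knockoff filter, each variable gets a statistic W_j that tends to be large and positive for true signals and symmetric about zero for nulls; the selection threshold is the smallest t whose estimated false discovery proportion stays below the target level q. A sketch of that thresholding step in the "knockoff+" form of Barber and Candès (2015):

```python
def knockoff_threshold(W, q):
    """Smallest t with (1 + #{W_j <= -t}) / max(1, #{W_j >= t}) <= q;
    variables with W_j >= t are then selected (knockoff+ rule)."""
    for t in sorted({abs(w) for w in W if w != 0}):
        fdp_hat = ((1 + sum(1 for w in W if w <= -t))
                   / max(1, sum(1 for w in W if w >= t)))
        if fdp_hat <= q:
            return t
    return float("inf")  # no feasible threshold: select nothing
```

The "+1" in the numerator is what guarantees exact (rather than modified) FDR control at level q.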

Penalized variable selection in mean-variance accelerated failure time models

  • Kwon, Ji Hoon;Ha, Il Do
    • The Korean Journal of Applied Statistics, v.34 no.3, pp.411-425, 2021
  • The accelerated failure time (AFT) model represents a linear relationship between the log-survival time and covariates. We are interested in inference on the covariate effects affecting the variation of survival times in the AFT model. Thus, we need to model the variance as well as the mean of survival times; we call the resulting model the mean-variance AFT (MV-AFT) model. In this paper, we propose a variable selection procedure for the mean and variance regression parameters of the MV-AFT model using a penalized likelihood function. For the variable selection, we study four penalty functions: the least absolute shrinkage and selection operator (LASSO), the adaptive lasso (ALASSO), the smoothly clipped absolute deviation (SCAD), and the hierarchical likelihood (HL). With this procedure we can select important covariates and estimate the regression parameters at the same time. The performance of the proposed method is evaluated using simulation studies, and the method is illustrated with a clinical example dataset.
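Among the penalties compared, SCAD behaves like the LASSO near zero but levels off for large coefficients, so big effects are not over-shrunk. Its standard closed form (Fan and Li, 2001), sketched for a scalar coefficient:

```python
def scad_penalty(b, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001): linear up to lam,
    quadratic transition up to a*lam, then constant."""
    b = abs(b)
    if b <= lam:
        return lam * b
    if b <= a * lam:
        return (2.0 * a * lam * b - b * b - lam * lam) / (2.0 * (a - 1.0))
    return lam * lam * (a + 1.0) / 2.0
```

The penalty is continuous at both knots (`lam` and `a*lam`) and constant beyond `a*lam`, which is the source of SCAD's near-unbiasedness for large coefficients.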

Adaptive Residual DPCM using Weighted Linear Combination of Adjacent Residues in Screen Content Video Coding (An adaptive residual DPCM technique using a weighted sum of adjacent pixels for screen content video compression)

  • Kang, Je-Won
    • Journal of Broadcast Engineering, v.20 no.5, pp.782-785, 2015
  • In this paper, we propose a novel residual differential pulse-code modulation (RDPCM) coding technique to improve the coding efficiency of screen content videos. The proposed method uses a weighted combination of adjacent residues to provide an accurate estimate in RDPCM. The weights are trained on previously coded samples by solving an L1-penalized optimization problem with the least absolute shrinkage and selection operator (LASSO). The proposed method achieves a BD-rate saving of about 3.1% in all-intra coding.
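The second-stage prediction in RDPCM works as follows: each intra-prediction residue is predicted from already-coded neighboring residues, and only the prediction error is coded. A toy lossless sketch with fixed weights (the paper trains per-block weights via LASSO; the fixed `w_left`/`w_above` here are illustrative only):

```python
import numpy as np

def rdpcm_encode(res, w_left, w_above):
    """Replace each residue by its error against a weighted combination
    of the left and above residues (treated as zero outside the block)."""
    h, w = res.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            left = res[i, j - 1] if j > 0 else 0.0
            above = res[i - 1, j] if i > 0 else 0.0
            out[i, j] = res[i, j] - (w_left * left + w_above * above)
    return out

def rdpcm_decode(enc, w_left, w_above):
    """Invert the prediction in raster order from reconstructed residues."""
    h, w = enc.shape
    rec = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            left = rec[i, j - 1] if j > 0 else 0.0
            above = rec[i - 1, j] if i > 0 else 0.0
            rec[i, j] = enc[i, j] + (w_left * left + w_above * above)
    return rec
```

Encoding and decoding invert each other exactly, so compression gains come purely from the prediction errors being smaller and cheaper to entropy-code than the raw residues.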

Joint penalization of components and predictors in mixture of regressions (Applying penalty functions to components and predictors in mixture regression models)

  • Park, Chongsun;Mo, Eun Bi
    • The Korean Journal of Applied Statistics, v.32 no.2, pp.199-211, 2019
  • This paper is concerned with issues in finite mixture of regressions modeling, in particular the simultaneous selection of the number of mixture components and the relevant predictors. We propose a penalized likelihood method for both mixture components and regression coefficients that enables the simultaneous identification of significant variables and the determination of important mixture components in mixture of regression models. To avoid over-fitting and bias problems, we applied smoothly clipped absolute deviation (SCAD) penalties on the logarithm of the component probabilities, as suggested by Huang et al. (Statistica Sinica, 27, 147-169, 2013), as well as several well-known penalty functions for the regression coefficients. Simulation studies reveal that our method performs satisfactorily with well-known penalties such as SCAD, MCP, and the adaptive lasso.
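MCP, one of the coefficient penalties covered by the simulations, tapers from LASSO-like shrinkage at zero to no shrinkage beyond γλ. Its standard closed form (Zhang, 2010), sketched for a scalar coefficient:

```python
def mcp_penalty(b, lam, gamma=3.0):
    """Minimax concave penalty (Zhang, 2010): quadratically tapered
    L1 shrinkage up to gamma*lam, constant afterwards."""
    b = abs(b)
    if b <= gamma * lam:
        return lam * b - b * b / (2.0 * gamma)
    return gamma * lam * lam / 2.0
```

Like SCAD, the flat region beyond `gamma*lam` leaves large coefficients unshrunk, which is why both penalties reduce the bias problems the abstract mentions.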