• Title/Summary/Keyword: 벌점회귀 (penalized regression)

Search Results: 27

Relative Error Prediction via Penalized Regression (벌점회귀를 통한 상대오차 예측방법)

  • Jeong, Seok-Oh; Lee, Seo-Eun; Shin, Key-Il
    • The Korean Journal of Applied Statistics, v.28 no.6, pp.1103-1111, 2015
  • This paper presents a new prediction method that incorporates relative error into penalized regression. The proposed method consists of fully data-driven procedures that are fast, simple, and easy to implement. A real data example and simulation results are given to show that the proposed approach works in practice.
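
The abstract does not spell out the loss or the penalty, so the minimal Python sketch below is only one plausible reading: a squared relative-error loss with an L1 penalty, minimized numerically. All data, names, and tuning values are illustrative, not the authors' procedure.

    # Sketch: penalized regression under a relative-error loss (illustrative only).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.5])
    y = np.exp(0.3 * (X @ beta_true) + rng.normal(scale=0.2, size=100))  # positive responses

    def objective(beta, lam=0.05):
        rel = (y - X @ beta) / y                 # relative, not absolute, residuals
        return np.mean(rel ** 2) + lam * np.sum(np.abs(beta))  # add L1 penalty

    fit = minimize(objective, np.zeros(X.shape[1]), method="Nelder-Mead")
    print(np.round(fit.x, 3))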

Joint penalization of components and predictors in mixture of regressions (혼합회귀모형에서 콤포넌트 및 설명변수에 대한 벌점함수의 적용)

  • Park, Chongsun; Mo, Eun Bi
    • The Korean Journal of Applied Statistics, v.32 no.2, pp.199-211, 2019
  • This paper is concerned with finite mixture of regressions modeling and the simultaneous selection of the number of mixture components and relevant predictors. We propose a penalized likelihood method for both mixture components and regression coefficients that enables the simultaneous identification of significant variables and the determination of important mixture components. To avoid over-fitting and bias problems, we apply smoothly clipped absolute deviation (SCAD) penalties on the logarithm of the component probabilities, as suggested by Huang et al. (Statistica Sinica, 27, 147-169, 2013), as well as several well-known penalty functions for the regression coefficients. Simulation studies show that our method performs satisfactorily with well-known penalties such as SCAD, MCP, and the adaptive lasso.
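
Since the abstract leans on the SCAD penalty, a short Python sketch of that penalty in its standard Fan-Li form (with the usual a = 3.7) may help orient the reader; the mixture-model machinery itself is not reproduced here.

    # SCAD penalty, applied elementwise to a coefficient vector.
    import numpy as np

    def scad_penalty(beta, lam, a=3.7):
        b = np.abs(beta)
        linear = lam * b                                          # |b| <= lam
        quad = (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))  # lam < |b| <= a*lam
        const = lam**2 * (a + 1) / 2                              # |b| > a*lam
        return np.where(b <= lam, linear, np.where(b <= a * lam, quad, const))

    print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))

Unlike the lasso penalty, SCAD levels off for large coefficients, which is what keeps large effects nearly unbiased.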

Penalized logistic regression models for determining the discharge of dyspnea patients (호흡곤란 환자 퇴원 결정을 위한 벌점 로지스틱 회귀모형)

  • Park, Cheolyong; Kye, Myo Jin
    • Journal of the Korean Data and Information Science Society, v.24 no.1, pp.125-133, 2013
  • In this paper, penalized binary logistic regression models are employed as statistical models for determining the discharge of 668 patients with a chief complaint of dyspnea, based on the results of 11 blood tests. Specifically, the ridge model based on the L2 penalty and the Lasso model based on the L1 penalty are considered. For prediction accuracy, our models are compared with logistic regression models that use all 11 explanatory variables and with models that use variables chosen by a variable selection method. The results show that, based on 10-fold cross-validation, the ridge logistic regression model has the best prediction accuracy among the four models.
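
A minimal scikit-learn sketch of the comparison described above; the dyspnea data are not public, so synthetic data stand in, and the regularization strengths are placeholders rather than the tuned values from the paper.

    # Ridge (L2) vs Lasso (L1) logistic regression, scored by 10-fold CV.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(668, 11))        # stand-in for 11 blood-test results
    y = (X[:, 0] - 0.5 * X[:, 3] + rng.normal(size=668) > 0).astype(int)

    models = {
        "ridge": LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
        "lasso": LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
        print(name, round(acc, 3))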

An Outlier Detection Method in Penalized Spline Regression Models (벌점 스플라인 회귀모형에서의 이상치 탐지방법)

  • Seo, Han Son; Song, Ji Eun; Yoon, Min
    • The Korean Journal of Applied Statistics, v.26 no.4, pp.687-696, 2013
  • The detection and examination of outliers are important parts of data analysis because outliers can have a detrimental effect on statistical results. Outlier detection methods have been discussed by many authors. In this article, we propose applying Hadi and Simonoff's (1993) method to a penalized spline regression model to detect multiple outliers. Simulated and real data sets are used to illustrate the proposed procedure and to compare it with penalized spline regression and robust penalized spline regression.
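
The Hadi and Simonoff (1993) forward procedure is not reproduced here; as a much cruder stand-in, the Python sketch below fits a smoothing spline (a roughness-penalized fit) and flags large standardized residuals, just to make the setting concrete.

    # Crude residual-based outlier screen on a spline fit (not Hadi-Simonoff).
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(2)
    x = np.sort(rng.uniform(0, 10, 100))
    y = np.sin(x) + rng.normal(scale=0.2, size=100)
    y[[10, 50]] += 3.0                               # plant two outliers

    spl = UnivariateSpline(x, y, s=len(x) * 0.2**2)  # s controls the roughness penalty
    resid = y - spl(x)
    z = (resid - resid.mean()) / resid.std()
    print(np.flatnonzero(np.abs(z) > 3))             # flagged indices

One-shot screens like this suffer from masking when several outliers cluster, which is exactly the problem forward procedures such as Hadi and Simonoff's are designed to avoid.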

Sufficient conditions for the oracle property in penalized linear regression (선형 회귀모형에서 벌점 추정량의 신의 성질에 대한 충분조건)

  • Kwon, Sunghoon; Moon, Hyeseong; Chang, Jaeho; Lee, Sangin
    • The Korean Journal of Applied Statistics, v.34 no.2, pp.279-293, 2021
  • In this paper, we show how to construct sufficient conditions for the oracle property in the penalized linear regression model. We give formal definitions of the oracle estimator, the penalized estimator, the oracle penalized estimator, and the oracle property of the oracle estimator. Based on these definitions, we present a unified way of constructing optimality conditions for the oracle property, together with sufficient conditions for those optimality conditions that cover most existing penalties. In addition, we present an illustrative example and results from a numerical study.
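
For orientation, the two estimators being compared can be written as follows; the notation is the common one for penalized least squares and may differ from the paper's.

    % A = {j : beta*_j != 0} is the true active set, J_lambda a penalty function.
    \[
    \hat\beta^{\mathrm{oracle}}
      = \operatorname*{arg\,min}_{\beta \,:\, \beta_{A^c} = 0} \|y - X\beta\|_2^2,
    \qquad
    \hat\beta^{\lambda}
      = \operatorname*{arg\,min}_{\beta} \|y - X\beta\|_2^2
        + \sum_{j=1}^{p} J_\lambda(|\beta_j|).
    \]
    % Oracle property: P(\hat\beta^{\lambda} = \hat\beta^{\mathrm{oracle}}) \to 1.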

Variable Selection in PLS Regression with Penalty Function (벌점함수를 이용한 부분최소제곱 회귀모형에서의 변수선택)

  • Park, Chong-Sun; Moon, Guy-Jong
    • Communications for Statistical Applications and Methods, v.15 no.4, pp.633-642, 2008
  • A variable selection algorithm for partial least squares regression using a penalty function is proposed. We use the fact that the usual partial least squares regression problem can be expressed as a maximization problem with appropriate constraints, and we add a penalty function to this maximization problem. A simulated annealing algorithm can then be used to search for optimal solutions of the penalized maximization problem. The HARD penalty function is suggested as the best in several respects. Illustrations with real and simulated examples are provided.
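
A toy Python sketch of the search strategy: simulated annealing over variable subsets, scoring each subset by cross-validated PLS fit minus a size penalty. The criterion and cooling schedule here are invented for illustration; the paper's penalized criterion and the HARD penalty are not reproduced.

    # Simulated annealing over variable subsets for PLS regression (toy criterion).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(80, 15))
    y = X[:, 0] + 2 * X[:, 3] + rng.normal(size=80)

    def score(mask, lam=0.02):
        if mask.sum() < 2:
            return -np.inf                      # need enough variables for 2 components
        r2 = cross_val_score(PLSRegression(n_components=2), X[:, mask], y, cv=5).mean()
        return r2 - lam * mask.sum()            # penalize subset size

    mask = np.ones(15, dtype=bool)              # start from the full set
    cur, t = score(mask), 1.0
    for _ in range(200):
        cand = mask.copy()
        cand[rng.integers(15)] ^= True          # flip one variable in or out
        s = score(cand)
        if s > cur or rng.random() < np.exp((s - cur) / t):
            mask, cur = cand, s                 # accept; worse moves pass with prob e^{d/t}
        t *= 0.98                               # cool down
    print(np.flatnonzero(mask), round(cur, 3))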

Detection of multiple change points using penalized least square methods: a comparative study between ℓ0 and ℓ1 penalty (벌점-최소제곱법을 이용한 다중 변화점 탐색)

  • Son, Won; Lim, Johan; Yu, Donghyeon
    • The Korean Journal of Applied Statistics, v.29 no.6, pp.1147-1154, 2016
  • In this paper, we numerically compare two penalized least squares methods for finding multiple change points in a signal: the ℓ0-penalized method and the fused lasso regression (FLR, ℓ1 penalization). We find that the ℓ0-penalized method performs better than the FLR, which, as the theory predicts, produces many false detections in some cases. In addition, the ℓ0-penalized method can be computed by dynamic programming and is as efficient as the FLR.
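
The ℓ0 side of that comparison has a standard exact algorithm: optimal-partitioning dynamic programming, which minimizes segmentwise squared error plus a per-segment penalty β. A compact Python version follows; it is illustrative, not the paper's implementation or tuning.

    # l0-penalized change-point detection via optimal-partitioning DP, O(n^2).
    import numpy as np

    def l0_changepoints(y, beta):
        n = len(y)
        cs = np.concatenate(([0.0], np.cumsum(y)))
        cs2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

        def seg_cost(i, j):                     # SSE of y[i:j] around its mean
            s = cs[j] - cs[i]
            return cs2[j] - cs2[i] - s * s / (j - i)

        F = np.full(n + 1, np.inf)              # F[j]: best penalized cost of y[:j]
        F[0] = -beta
        last = np.zeros(n + 1, dtype=int)       # start index of the last segment
        for j in range(1, n + 1):
            costs = [F[i] + seg_cost(i, j) + beta for i in range(j)]
            last[j] = int(np.argmin(costs))
            F[j] = costs[last[j]]
        cps, j = [], n                          # backtrack segment boundaries
        while j > 0:
            j = last[j]
            if j > 0:
                cps.append(j)
        return sorted(cps)

    y = np.concatenate([np.zeros(50), 3 * np.ones(50)])
    y += np.random.default_rng(4).normal(size=100)
    print(l0_changepoints(y, beta=2 * np.log(100)))  # expect a point near 50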

A study on bias effect of LASSO regression for model selection criteria (모형 선택 기준들에 대한 LASSO 회귀 모형 편의의 영향 연구)

  • Yu, Donghyeon
    • The Korean Journal of Applied Statistics, v.29 no.4, pp.643-656, 2016
  • High dimensional data, where the number of variables exceeds the number of samples, are frequently encountered in various fields. Variable selection is usually necessary in such data to estimate regression coefficients and avoid overfitting. A penalized regression model performs variable selection and coefficient estimation simultaneously, which makes it widely used for high dimensional data. However, a penalized regression model still requires selecting the optimal model by choosing a tuning parameter based on a model selection criterion. This study deals with the effect of the LASSO bias on model selection criteria. We numerically describe this bias effect and apply the proposed correction to the identification of biomarkers for lung cancer based on gene expression data.
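
One concrete way to see the bias effect: a BIC computed from a lasso fit uses shrunken coefficients, so its residual sum of squares is inflated; refitting ordinary least squares on the lasso support before computing BIC is one common correction. The sketch below illustrates that idea only; it is not the paper's proposed correction.

    # BIC from a lasso fit vs. BIC after an OLS refit on the lasso support.
    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(5)
    n, p = 100, 20
    X = rng.normal(size=(n, p))
    y = X[:, :3] @ np.array([3.0, -2.0, 1.5]) + rng.normal(size=n)

    def bic(resid, df):
        return n * np.log(np.sum(resid ** 2) / n) + df * np.log(n)

    lasso = Lasso(alpha=0.1).fit(X, y)
    support = np.flatnonzero(lasso.coef_)            # selected variables
    ols = LinearRegression().fit(X[:, support], y)   # debiased refit on the support

    print("lasso BIC:", round(bic(y - lasso.predict(X), len(support)), 1))
    print("refit BIC:", round(bic(y - ols.predict(X[:, support]), len(support)), 1))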

Model selection for unstable AR process via the adaptive LASSO (비정상 자기회귀모형에서의 벌점화 추정 기법에 대한 연구)

  • Na, Okyoung
    • The Korean Journal of Applied Statistics, v.32 no.6, pp.909-922, 2019
  • In this paper, we study the adaptive least absolute shrinkage and selection operator (LASSO) for the unstable autoregressive (AR) model. To identify the existence of a unit root, we apply the adaptive LASSO to the augmented Dickey-Fuller regression model rather than the original AR model. We illustrate our method with simulations and a real data analysis. Simulation results show that the adaptive LASSO obtained by minimizing the Bayesian information criterion selects the order of the autoregressive model as well as the degree of differencing with high accuracy.
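
A minimal Python sketch of that idea: build the augmented Dickey-Fuller regression Δy_t = ρ y_{t-1} + Σ_k φ_k Δy_{t-k} + e_t, then run an adaptive lasso with OLS pilot weights. The lag order, penalty level, and data are placeholders, and the BIC tuning from the paper is omitted.

    # Adaptive lasso on an ADF regression; rho shrunk to ~0 suggests a unit root.
    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(6)
    y = np.cumsum(rng.normal(size=300))        # random walk: true unit root
    dy, lags = np.diff(y), 4
    rows = [np.r_[y[t], dy[t - lags:t][::-1]] for t in range(lags, len(dy))]
    X, z = np.array(rows), dy[lags:]           # columns: level, then lagged diffs

    pilot = LinearRegression().fit(X, z).coef_
    w = 1.0 / np.abs(pilot)                    # adaptive weights from the pilot fit
    coef = Lasso(alpha=0.05).fit(X / w, z).coef_ / w   # weighted lasso by rescaling
    print("rho estimate:", coef[0])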

Penalized quantile regression tree (벌점화 분위수 회귀나무모형에 대한 연구)

  • Kim, Jaeoh; Cho, HyungJun; Bang, Sungwan
    • The Korean Journal of Applied Statistics, v.29 no.7, pp.1361-1371, 2016
  • Quantile regression provides a variety of useful statistical information for examining how covariates influence the conditional quantile functions of a response variable. However, traditional quantile regression, which assumes a linear model, is not appropriate when the relationship between the response and the covariates is nonlinear. Variable selection is also necessary for high dimensional data or strongly correlated covariates. In this paper, we propose a penalized quantile regression tree model. The split rule of the proposed method is based on residual analysis, which has negligible bias in selecting a split variable and a reasonable computational cost. A simulation study and a real data analysis are presented to demonstrate the satisfactory performance and usefulness of the proposed method.
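
The linear, penalized building block that the tree model extends is easy to demonstrate: scikit-learn's QuantileRegressor minimizes the pinball loss plus an L1 penalty. The tree-growing and residual-based split rule of the paper are not reproduced; the data and tuning below are illustrative.

    # L1-penalized linear quantile regression at several quantile levels.
    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(7)
    X = rng.normal(size=(200, 10))
    y = X[:, 0] + 0.5 * X[:, 1] + rng.standard_t(df=3, size=200)   # heavy-tailed noise

    for tau in (0.25, 0.5, 0.75):
        qr = QuantileRegressor(quantile=tau, alpha=0.01, solver="highs").fit(X, y)
        print(tau, np.round(qr.coef_, 2))      # sparse coefficients per quantile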