• Title/Abstract/Keyword: regression statistics

Search results: 5,255 items

Wavelet Estimation of Regression Functions with Errors in Variables

  • Kim, Woo-Chul; Koo, Ja-Yong
    • Communications for Statistical Applications and Methods, Vol. 6, No. 3, pp. 849-860, 1999
  • This paper addresses the issue of estimating a regression function with errors in variables using wavelets. We adopt a nonparametric approach, assuming that the regression function has no specific parametric form. To account for errors in the covariates, deconvolution is involved in the construction of a new class of linear wavelet estimators. Using the wavelet characterization of Besov spaces, the question of regression estimation under a Besov constraint can be reduced to a problem in a space of sequences. Rates of convergence are studied over Besov function classes $B_{spq}$ using the $L_2$ error measure. It is shown that the rates of convergence depend on the smoothness $s$ of the regression function and the decay rate of the characteristic function of the contaminating error.

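A minimal numerical sketch of the errors-in-variables problem the paper addresses is given below. It uses a simple linear model with a known error variance and a reliability-ratio correction, not the paper's nonparametric wavelet deconvolution estimator; all names and values are illustrative.

```python
import numpy as np

# Illustrative only: with a noisy covariate W = X + U, a naive least-squares
# fit of Y on W is biased toward zero (attenuation); deconvolution-type
# estimators such as the paper's are designed to undo this contamination.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0.0, 1.0, n)            # true covariate
u = rng.normal(0.0, 0.5, n)            # measurement error, Var(U) = 0.25 assumed known
w = x + u                              # observed, error-contaminated covariate
y = 2.0 * x + rng.normal(0.0, 0.3, n)  # true slope is 2

naive_slope = np.cov(w, y)[0, 1] / np.var(w)
# Reliability-ratio correction, valid in this linear setting with known Var(U)
corrected_slope = naive_slope * np.var(w) / (np.var(w) - 0.25)

print(f"naive slope    : {naive_slope:.3f}")      # around 1.6, attenuated
print(f"corrected slope: {corrected_slope:.3f}")  # close to 2
```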

비선형회귀모형에서의 불안정성 (Instability in nonlinear regression model)

  • 박병무; 김영일; 장대흥
    • 응용통계연구, Vol. 30, No. 1, pp. 195-202, 2017
  • Instability is sometimes observed when numerical solutions are used in nonlinear regression analysis. All iterative methods for nonlinear regression require initial estimates. However, when the error sum of squares has multiple local minima, poor initial estimates lead to convergence to an unwanted stationary point. In such cases the initial estimates can even induce chaotic behavior.
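
A minimal sketch of the instability described above, using SciPy's `curve_fit` on a sinusoidal model whose error sum of squares has many local minima; the model, data, and starting values are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# For a model whose error sum of squares has several local minima, the fitted
# parameters depend heavily on the starting values supplied to the solver.
def model(x, a, b):
    return a * np.sin(b * x)

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = 1.5 * np.sin(2.0 * x) + rng.normal(0, 0.1, x.size)  # true (a, b) = (1.5, 2.0)

for b0 in (1.9, 0.5, 5.0):                 # different initial guesses for b
    popt, _ = curve_fit(model, x, y, p0=[1.0, b0], maxfev=10000)
    print(f"start b0={b0:>4}: fitted a={popt[0]: .3f}, b={popt[1]: .3f}")
# Only the start near the truth recovers (1.5, 2.0); the other starts tend to
# converge to unwanted stationary points, illustrating the instability.
```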

Simultaneous outlier detection and variable selection via difference-based regression model and stochastic search variable selection

  • Park, Jong Suk; Park, Chun Gun; Lee, Kyeong Eun
    • Communications for Statistical Applications and Methods, Vol. 26, No. 2, pp. 149-161, 2019
  • In this article, we suggest the following approach to simultaneous variable selection and outlier detection. First, we determine possible candidates for outliers using properties of an intercept estimator in a difference-based regression model, and the information on the outliers is incorporated into the multiple regression model by adding mean-shift parameters. Second, we select the best model from the models that include the outlier candidates as predictors, using stochastic search variable selection. Finally, we evaluate our method using simulations and real data analysis, which yield promising results. In future work, we plan to develop the method further to obtain robust estimates and to extend it to the nonparametric regression model for simultaneous outlier detection and variable selection.
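
A hedged sketch of the mean-shift formulation is given below; the Bayesian stochastic search variable selection step is replaced by a plain OLS fit with `statsmodels`, and the outlier candidates are assumed given rather than derived from the difference-based intercept estimator.

```python
import numpy as np
import statsmodels.api as sm

# Sketch of the mean-shift parameterization only (not the paper's SSVS step).
rng = np.random.default_rng(2)
n, p = 100, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.0, -2.0])
y = X @ beta + rng.normal(scale=0.5, size=n)
y[[5, 40]] += 6.0                      # two mean-shift outliers

# Suppose observations 5 and 40 were flagged as outlier candidates (in the
# paper this comes from the intercept estimator of a difference-based model).
candidates = [5, 40]
D = np.zeros((n, len(candidates)))
for j, i in enumerate(candidates):
    D[i, j] = 1.0                      # mean-shift dummy for observation i

design = sm.add_constant(np.hstack([X, D]))
fit = sm.OLS(y, design).fit()
print(fit.params)   # the last two coefficients estimate the mean shifts (~6)
```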

Comparison of tree-based ensemble models for regression

  • Park, Sangho; Kim, Chanmin
    • Communications for Statistical Applications and Methods, Vol. 29, No. 5, pp. 561-589, 2022
  • When multiple classification and regression trees are combined, tree-based ensemble models, such as random forest (RF) and Bayesian additive regression trees (BART), are produced. We compare the model structures and performances of various ensemble models for regression settings in this study. RF learns from bootstrapped samples and selects a splitting variable from the predictors considered at each node. The BART model is specified as a sum of trees and is fitted using a Bayesian backfitting algorithm. Through extensive simulation studies, the strengths and drawbacks of the two methods in the presence of missing data, high-dimensional data, or highly correlated data are investigated. In the presence of missing data, BART performs well in general, whereas RF provides adequate coverage. BART also outperforms RF on high-dimensional, highly correlated data. However, in all of the scenarios considered, RF has a shorter computation time. The performance of the two methods is also compared using two real data sets that represent the aforementioned situations, and the same conclusions are reached.
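
A minimal sketch of the random forest side of such a comparison on an equicorrelated design, using scikit-learn; the BART side would require a separate package and is omitted here, and the simulated data are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# RF fit on highly correlated predictors; BART is not shown.
rng = np.random.default_rng(3)
n, p, rho = 500, 10, 0.9
cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)   # equicorrelated design
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
y = 2 * X[:, 0] - X[:, 1] + np.sin(X[:, 2]) + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
print(f"RF test RMSE: {rmse:.3f}")
```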

대형 데이터에서 VIF회귀를 이용한 신속 강건 변수선택법 (Fast robust variable selection using VIF regression in large datasets)

  • 서한손
    • 응용통계연구, Vol. 31, No. 4, pp. 463-473, 2018
  • This study deals with variable selection algorithms for large datasets under an assumed linear regression model. Several algorithms that focus on speed and robustness have been proposed. Among them, VIF regression, which uses a streamwise regression approach, runs quickly and accurately. However, because VIF regression estimates the model by least squares, it is sensitive to outliers. To improve the robustness of variable selection, robust measures based on weighted estimates have been proposed, as has a robust VIF regression. In this study, we propose a fast and robust variable selection method that detects and removes potential outliers and then performs VIF regression. The proposed method is compared with other methods through simulation and data analysis.
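
A rough sketch of the two ingredients of such a procedure, robust outlier screening followed by VIF computation, using scikit-learn's `HuberRegressor` and `statsmodels`; it is not the full streamwise VIF-regression algorithm, and the cutoff and data are illustrative.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor
from statsmodels.stats.outliers_influence import variance_inflation_factor

# (1) flag and drop potential outliers with a robust fit,
# (2) compute VIFs on the cleaned data before selecting variables.
rng = np.random.default_rng(4)
n, p = 300, 5
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=n)
y[:10] += 15.0                                  # contaminate with outliers

# (1) robust residuals -> keep observations within 2.5 robust SDs
resid = y - HuberRegressor().fit(X, y).predict(X)
mad = np.median(np.abs(resid - np.median(resid))) * 1.4826
keep = np.abs(resid) < 2.5 * mad
X_clean, y_clean = X[keep], y[keep]

# (2) VIFs on the cleaned design (all should be near 1 here)
vifs = [variance_inflation_factor(X_clean, j) for j in range(p)]
print("kept", keep.sum(), "of", n, "observations; VIFs:", np.round(vifs, 2))
```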

Hybrid Fuzzy Least Squares Support Vector Machine Regression for Crisp Input and Fuzzy Output

  • Shim, Joo-Yong; Seok, Kyung-Ha; Hwang, Chang-Ha
    • Communications for Statistical Applications and Methods, Vol. 17, No. 2, pp. 141-151, 2010
  • Hybrid fuzzy regression analysis is used for integrating randomness and fuzziness into a regression model. The least squares support vector machine (LS-SVM) has been very successful in pattern recognition and function estimation problems for crisp data. This paper proposes a new method to evaluate hybrid fuzzy linear and nonlinear regression models with crisp inputs and fuzzy output using weighted fuzzy arithmetic (WFA) and LS-SVM. LS-SVM allows us to perform fuzzy nonlinear regression analysis by constructing a fuzzy linear regression function in a high-dimensional feature space. The proposed method is not computationally expensive, since its solution is obtained from a simple system of linear equations. In particular, this method is a very attractive approach to modeling nonlinear data, and it is nonparametric in the sense that we do not have to assume an underlying model function for the fuzzy nonlinear regression model with crisp inputs and fuzzy output. Experimental results are presented that indicate the performance of this method.
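
A minimal LS-SVM regression sketch for crisp data only, solving the single linear system that defines the dual solution; the weighted fuzzy arithmetic extension for fuzzy outputs is not reproduced, and the RBF kernel and hyperparameters are illustrative choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0, gamma=1.0):
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    # The dual problem reduces to one linear system:
    # [[0, 1'], [1, K + I/C]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xnew: rbf_kernel(Xnew, X, gamma) @ alpha + b

rng = np.random.default_rng(5)
X = np.linspace(-3, 3, 80)[:, None]
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=80)
predict = lssvm_fit(X, y)
print(np.round(predict(np.array([[0.0], [1.5]])), 3))   # roughly sin(0), sin(1.5)
```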

Robustness of model averaging methods for the violation of standard linear regression assumptions

  • Lee, Yongsu; Song, Juwon
    • Communications for Statistical Applications and Methods, Vol. 28, No. 2, pp. 189-204, 2021
  • In a regression analysis, a single best model is usually selected among several candidate models. However, it is often useful to combine several candidate models to achieve better performance, especially from a prediction viewpoint. Model combining methods such as stacking and Bayesian model averaging (BMA) have been suggested from the perspective of averaging candidate models. When the candidate models include the true model, BMA is expected to give better performance than stacking. On the other hand, when the candidate models do not include the true model, stacking is known to outperform BMA. Since stacking and BMA have different properties, it is difficult to determine which method is more appropriate in other situations. In particular, it is not easy to find studies that compare stacking and BMA when the regression model assumptions are violated. Therefore, in this paper, we compare the performance of model averaging methods as well as a single best model in linear regression analysis when standard linear regression assumptions are violated. Simulations were conducted to compare the model averaging methods with linear regression when the data do and do not include outliers, and also when the errors come from a non-normal distribution. The model averaging methods were further applied to the water pollution data, which exhibit strong multicollinearity among variables. The simulation studies showed that the stacking method tends to give better performance than BMA or standard linear regression analysis (including the stepwise selection method) in terms of risks (see (3.1)) or prediction error (see (3.2)) when typical linear regression assumptions are violated.
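
An illustrative stacking sketch with scikit-learn's `StackingRegressor` under heavy-tailed (non-normal) errors; BMA is not shown, as it would require a separate package, and the candidate learners are illustrative choices rather than those used in the paper.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import cross_val_score

# Stacking several linear candidates with a linear meta-learner.
rng = np.random.default_rng(6)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = X[:, 0] - 0.5 * X[:, 1] + rng.standard_t(df=3, size=n)   # non-normal errors

stack = StackingRegressor(
    estimators=[("ols", LinearRegression()),
                ("ridge", Ridge(alpha=1.0)),
                ("lasso", Lasso(alpha=0.1))],
    final_estimator=LinearRegression(),
)
score = cross_val_score(stack, X, y, cv=5, scoring="neg_mean_squared_error")
print("CV prediction MSE (stacking):", -score.mean().round(3))
```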

Graphical Diagnostics for Logistic Regression

  • Lee, Hak-Bae
    • 한국통계학회:학술대회논문집, 한국통계학회 2003년도 춘계 학술발표회 논문집, pp. 213-217, 2003
  • In this paper we discuss graphical and diagnostic methods for logistic regression, in which the response is the number of successes in a fixed number of trials.

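A hedged sketch of one common graphical diagnostic for binomial logistic regression, plotting deviance residuals against fitted probabilities with `statsmodels`; the data are simulated, and the specific plots discussed in the paper may differ.

```python
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Grouped binomial response: number of successes out of a fixed number of trials.
rng = np.random.default_rng(7)
m = 60                                          # number of groups
x = rng.uniform(-2, 2, m)
trials = rng.integers(20, 50, m)
prob = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
successes = rng.binomial(trials, prob)

endog = np.column_stack([successes, trials - successes])  # [successes, failures]
exog = sm.add_constant(x)
fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()

plt.scatter(fit.fittedvalues, fit.resid_deviance)
plt.axhline(0, linestyle="--")
plt.xlabel("fitted probability")
plt.ylabel("deviance residual")
plt.title("Binomial logistic regression diagnostic plot")
plt.show()
```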

CHARACTERIZATIONS OF PARETO, WEIBULL AND POWER FUNCTION DISTRIBUTIONS BASED ON GENERALIZED ORDER STATISTICS

  • Ahsanullah, Mohammad; Hamedani, G.G.
    • 충청수학회지, Vol. 29, No. 3, pp. 385-396, 2016
  • Characterizations of probability distributions by different regression conditions on generalized order statistics have attracted the attention of many researchers. We present here characterizations of the Pareto and Weibull distributions based on the conditional expectation of generalized order statistics, extending the characterization results reported by Jin and Lee (2014). We also present a characterization of the power function distribution based on the conditional expectation of lower generalized order statistics.

주성분회귀분석을 이용한 한국프로야구 순위 (Predicting Korea Pro-Baseball Rankings by Principal Component Regression Analysis)

  • 배재영; 이진목; 이제영
    • Communications for Statistical Applications and Methods, Vol. 19, No. 3, pp. 367-379, 2012
  • Predicting team rankings in baseball games is of interest to baseball fans. To predict these rankings, we present arithmetic-mean, weighted-mean, principal component analysis, and principal component regression methods based on the 2011 Korean professional baseball records. Rankings are predicted using standardized arithmetic means, correlation-based weighted means, and principal component analysis, and the principal component regression model was selected as the final model. Regression analysis is performed on the variables reduced by principal component analysis, and ranking prediction models are proposed for the pitching part, the batting part, and the combined pitching-and-batting part. The fitted regression model makes it possible to predict the 2012 rankings.
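
A generic principal component regression sketch with a scikit-learn pipeline; the simulated team statistics below merely stand in for the 2011 KBO records used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# PCR: standardize, reduce to a few principal components, then regress.
rng = np.random.default_rng(8)
n_teams, n_stats = 8, 12
X = rng.normal(size=(n_teams, n_stats))          # stand-in pitching/batting stats
y = X[:, :3].mean(axis=1) + rng.normal(scale=0.1, size=n_teams)  # stand-in strength

pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pcr.fit(X, y)
order = np.argsort(-pcr.predict(X)) + 1          # team indices (1-based), best to worst
print("teams ordered by predicted strength:", order)
```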