• Title/Summary/Keyword: Regression method


Clustering Observations for Detecting Multiple Outliers in Regression Models

  • Seo, Han-Son; Yoon, Min
    • The Korean Journal of Applied Statistics, v.25 no.3, pp.503-512, 2012
  • Sequential methods for detecting outliers in a linear regression model can fail when similar observations are classified differently during the sequential process. In such circumstances, identifying clusters and applying the detection methods to the clustered data can prevent this failure and is computationally efficient because the data are reduced. In this paper, we suggest implementing a clustering procedure for this purpose and provide examples that illustrate the suggested procedure applied to the Hadi-Simonoff (1993) method, the reverse Hadi-Simonoff method, and the Gentleman-Wilk (1975) method.
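
As a rough illustration of the cluster-then-test idea only (not the Hadi-Simonoff or Gentleman-Wilk implementations from the paper), one can group the OLS residuals into clusters and flag every group that splits off from the main bulk; the gap rule and threshold below are illustrative assumptions:

```python
import numpy as np

def flag_outliers_by_residual_clusters(x, y, gap_cut=3.0):
    """Group the sorted OLS residuals into clusters, splitting wherever the
    gap to the next residual exceeds gap_cut times the residual MAD, and
    flag every cluster except the largest one as potential outliers."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    order = np.argsort(resid)
    gaps = np.diff(resid[order])
    mad = np.median(np.abs(resid - np.median(resid)))
    labels = np.concatenate([[0], np.cumsum(gaps > gap_cut * mad)])
    main = np.bincount(labels).argmax()        # the bulk of the data
    flags = np.zeros(len(y), dtype=bool)
    flags[order[labels != main]] = True
    return flags
```

Because similar outliers land in the same residual cluster, they are flagged together rather than one at a time, which is the failure mode of purely sequential deletion that the abstract describes.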

Modelling Online Word-of-Mouth Effect on Korean Box-Office Sales Based on Kernel Regression Model

  • Park, Si-Yun; Kim, Jin-Gyo
    • Journal of the Korean Data and Information Science Society, v.18 no.4, pp.995-1004, 2007
  • In this paper, we analyse online word-of-mouth and Korean box-office sales data using a kernel regression method. To do this, we consider a regression model with mixed data and apply the least-squares cross-validation method proposed by Li and Racine (2004) to the model. We found that box-office sales can be explained by the volume of online word-of-mouth and by the characteristics of the movies.
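
A minimal continuous-predictor sketch of the two ingredients named above, kernel regression and least-squares cross-validation for the bandwidth. The Li-Racine estimator handles mixed discrete/continuous data, which this Nadaraya-Watson sketch does not:

```python
import numpy as np

def nw_estimate(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    d = (np.asarray(x_eval)[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * d**2)
    return (w @ y_train) / w.sum(axis=1)

def loo_cv_bandwidth(x, y, grid):
    """Pick the bandwidth h by leave-one-out least-squares cross-validation."""
    best_h, best_score = None, np.inf
    for h in grid:
        d = (x[:, None] - x[None, :]) / h
        w = np.exp(-0.5 * d**2)
        np.fill_diagonal(w, 0.0)          # leave each point out of its own fit
        pred = (w @ y) / w.sum(axis=1)
        score = np.mean((y - pred)**2)
        if score < best_score:
            best_h, best_score = h, score
    return best_h
```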


DC Motor Control using Regression Equation and PID Controller (회귀방정식과 PID제어기에 의한 DC모터 제어)

  • 서기영; 이수흠; 문상필; 이내일; 최종수
    • Proceedings of the Korea Institute of Convergence Signal Processing, 2000.08a, pp.129-132, 2000
  • We propose a new method for the optimized auto-tuning of the PID controller, which is used for process control in various fields. In this method, the initial values for the DC motor are first determined by the Ziegler-Nichols method. The PID controller parameters are then learned by multiple regression analysis on the input vectors, so that when new K, L, and T values are given to the multiple regression model, the optimized PID controller parameters are produced by the multiple regression analysis program.
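
A hedged sketch of that pipeline: the classic Ziegler-Nichols open-loop rule supplies PID parameters from a first-order-plus-dead-time model (gain K, dead time L, time constant T), and a multiple regression model is fit to map (K, L, T) to (Kp, Ti, Td). The training ranges here are synthetic and purely illustrative, not taken from the paper:

```python
import numpy as np

def zn_pid(K, L, T):
    """Ziegler-Nichols open-loop PID rule for a FOPDT process."""
    return np.array([1.2 * T / (K * L), 2.0 * L, 0.5 * L])  # Kp, Ti, Td

# Multiple regression: learn a linear map (K, L, T) -> (Kp, Ti, Td)
# from synthetic training examples generated by the tuning rule.
rng = np.random.default_rng(0)
KLT = rng.uniform([0.5, 0.1, 1.0], [2.0, 1.0, 10.0], size=(200, 3))
targets = np.array([zn_pid(*row) for row in KLT])
X = np.column_stack([np.ones(len(KLT)), KLT])
coef, *_ = np.linalg.lstsq(X, targets, rcond=None)

def predict_pid(K, L, T):
    """Predict PID parameters for new K, L, T from the fitted regression."""
    return np.array([1.0, K, L, T]) @ coef
```

Note that Ti = 2L and Td = 0.5L are exactly linear in the inputs and are recovered by the regression, while Kp = 1.2T/(KL) is nonlinear, so in practice nonlinear regressors or transformed features (e.g. log terms) would be needed for it.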


Wage Determinants Analysis by Quantile Regression Tree

  • Chang, Young-Jae
    • Communications for Statistical Applications and Methods, v.19 no.2, pp.293-301, 2012
  • Quantile regression, proposed by Koenker and Bassett (1978), is a statistical technique that estimates conditional quantiles. Its advantage over ordinary least squares (OLS) regression is robustness to large outliers. Regression tree approaches have been applied to OLS problems to fit flexible models. Loh (2002) proposed the GUIDE algorithm, which has negligible selection bias and relatively low computational cost. Quantile regression can be regarded as an analogue of OLS, so it can also be applied within the GUIDE regression tree method. Chaudhuri and Loh (2002) proposed a nonparametric quantile regression method that blends key features of piecewise polynomial quantile regression and tree-structured regression based on adaptive recursive partitioning. Lee and Lee (2006) investigated wage determinants in the Korean labor market using the Korean Labor and Income Panel Study (KLIPS). Following Lee and Lee, we fit three kinds of quantile regression tree models to the KLIPS data at the quantiles 0.05, 0.2, 0.5, 0.8, and 0.95. Among the three models, the multiple linear piecewise quantile regression model forms the shortest tree structure, while the piecewise constant quantile regression model generally has a deeper tree structure with more terminal nodes. Age, gender, marital status, and education appear to be determinants of the wage level throughout the quantiles; in addition, education experience appears to be an important determinant of the wage level in the highly paid group.
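
To make the Koenker-Bassett idea concrete (this shows the check loss only, not the GUIDE tree algorithm): the quantity quantile regression minimizes is the pinball loss, and minimizing it over a constant recovers the empirical quantile, which is why the fit is robust to large outliers:

```python
import numpy as np

def pinball(u, tau):
    """Koenker-Bassett check (pinball) loss for quantile level tau."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def quantile_via_pinball(y, tau):
    """Intercept-only quantile regression: the total pinball loss is
    minimised at a data point, so search over the observed values."""
    losses = [pinball(y - c, tau).sum() for c in y]
    return float(y[int(np.argmin(losses))])
```

Replacing the constant with a linear predictor in each terminal node of a tree gives a piecewise linear quantile regression tree of the kind the abstract compares.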

Fuzzy Linear Regression Using Distribution Free Method (분포무관추정량을 이용한 퍼지회귀모형)

  • Yoon, Jin-Hee; Choi, Seung-Hoe
    • Communications for Statistical Applications and Methods, v.16 no.5, pp.781-790, 2009
  • This paper deals with a rank transformation method and Theil's method, based on an α-level set of a fuzzy number, to construct a fuzzy linear regression model. The rank transformation method is a simple procedure in which the data are replaced with their corresponding ranks, and Theil's method uses the median of all estimates of the parameter calculated from selected pairs of observations. We also consider two numerical examples to evaluate the effectiveness of the fuzzy regression model using the proposed method and of another fuzzy regression model using the least squares method.
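
For orientation, Theil's estimator on ordinary crisp data is the median of the slopes over pairs of observations; the paper applies this idea to α-level sets of fuzzy numbers, which the sketch below does not model:

```python
import numpy as np
from itertools import combinations

def theil_slope(x, y):
    """Theil's estimator: the median of the slopes over all pairs of points."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)]
    return float(np.median(slopes))
```

The median of pairwise slopes is what makes the estimator distribution-free and resistant to outlying observations.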

Polynomial Boundary Treatment for Wavelet Regression

  • Oh, Hee-Seok; Naveau, Philippe; Lee, GeungHee
    • Proceedings of the Korean Statistical Society Conference, 2000.11a, pp.27-32, 2000
  • To overcome boundary problems with wavelet regression, we propose a simple method that reduces bias at the boundaries. It is based on a combination of wavelet functions and low-order polynomials. The utility of the method is illustrated with simulation studies and a real example. Asymptotic results show that the estimators are competitive with other nonparametric procedures.


Weighted LS-SVM Regression for Right Censored Data

  • Kim, Dae-Hak; Jeong, Hyeong-Chul
    • Communications for Statistical Applications and Methods, v.13 no.3, pp.765-776, 2006
  • In this paper we propose an estimation method for a regression model with randomly censored observations in the training data set. Weighted least squares support vector machine regression is applied for regression function estimation by incorporating the weights assigned to each observation into the optimization problem. Numerical examples are given to show the performance of the proposed estimation method.
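
A compact sketch of how per-observation weights enter a weighted LS-SVM: the dual reduces to one linear (KKT) system in which each weight v_i scales that observation's regularization term. The censoring-based weight scheme of the paper is not reproduced here; a small weight simply downweights an observation, and the kernel and hyperparameter values are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between row sets A and B."""
    sq = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-sq / (2.0 * sigma**2))

def wlssvm_fit(X, y, v, gamma=10.0, sigma=1.0):
    """Weighted LS-SVM regression: solve the KKT system
    [[0, 1^T], [1, K + diag(1/(gamma*v_i))]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.diag(1.0 / (gamma * v))
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                 # bias b, dual coefficients alpha

def wlssvm_predict(X_train, X_new, b, alpha, sigma=1.0):
    """Predict at new inputs from the fitted dual solution."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

Giving an observation a weight near zero makes its diagonal entry huge, so its dual coefficient, and hence its influence on the fit, collapses.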

Hidden Truncation Normal Regression

  • Kim, Sungsu
    • Communications for Statistical Applications and Methods, v.19 no.6, pp.793-798, 2012
  • In this paper, we propose regression methods based on the likelihood function. We assume Arnold-Beaver Skew Normal (ABSN) errors in a simple linear regression model. The novel method was shown to perform better on an asymmetric data set than the usual regression model with Gaussian errors. The utility of the novel method is demonstrated through simulation and real data sets.

Semisupervised support vector quantile regression

  • Seok, Kyungha
    • Journal of the Korean Data and Information Science Society, v.26 no.2, pp.517-524, 2015
  • Unlabeled examples are easier and less expensive to obtain than labeled examples. In this paper a semisupervised approach is used to utilize such examples in an effort to enhance the predictive performance of nonlinear quantile regression. We propose a semisupervised quantile regression method, named semisupervised support vector quantile regression (S2SVQR), which is based on the support vector machine. A generalized approximate cross-validation method is used to choose the hyperparameters that affect the performance of the estimator. The experimental results confirm the successful performance of the proposed S2SVQR.

REGRESSION WITH CENSORED DATA BY LEAST SQUARES SUPPORT VECTOR MACHINE

  • Kim, Dae-Hak; Shim, Joo-Yong; Oh, Kwang-Sik
    • Journal of the Korean Statistical Society, v.33 no.1, pp.25-34, 2004
  • In this paper we propose a prediction method for a regression model with randomly censored observations in the training data set. Least squares support vector machine regression is applied for regression function prediction by incorporating the weights assigned to each observation into the optimization problem. Numerical examples are given to show the performance of the proposed prediction method.