• Title/Summary/Keyword: Regression analysis method


Outlier Identification in Regression Analysis using Projection Pursuit

  • Kim, Hyojung;Park, Chongsun
    • Communications for Statistical Applications and Methods
    • /
    • v.7 no.3
    • /
    • pp.633-641
    • /
    • 2000
  • In this paper, we propose a method to identify multiple outliers in regression analysis under the sole assumption that the regression function is smooth. Our method combines the single-linkage clustering algorithm with Projection Pursuit Regression (PPR). Compared with existing methods on several simulated and real examples, it proved very useful for regression problems in which the regression function is far from linear.
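
The two ingredients of the abstract above, a smooth regression fit and single-linkage clustering of its residuals, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' procedure: a low-degree polynomial stands in for Projection Pursuit Regression.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05, 50)
y[[10, 30]] += 2.0  # plant two outliers

# Stand-in smoother (the paper uses Projection Pursuit Regression)
coef = np.polyfit(x, y, 5)
resid = y - np.polyval(coef, x)

# Single-linkage clustering of the residuals; the cluster separated
# from the bulk by the largest gap is flagged as outliers
Z = linkage(resid.reshape(-1, 1), method="single")
labels = fcluster(Z, t=2, criterion="maxclust")
bulk = np.bincount(labels).argmax()
outliers = np.flatnonzero(labels != bulk)
print(outliers)
```

On this synthetic example the two planted points are recovered; with real data the number of clusters and the smoother both need care.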


Prediction of Effective Horsepower for G/T 4 ton Class Coast Fishing Boat Using Statistical Analysis (통계해석에 의한 G/T 4톤급 연안어선의 유효마력 추정)

  • Park, Chung-Hwan;Shim, Sang-Mog;Jo, Hyo-Jae
    • Journal of Ocean Engineering and Technology
    • /
    • v.23 no.6
    • /
    • pp.71-76
    • /
    • 2009
  • This paper describes a statistical method for predicting the effective horsepower (EHP) of a coast fishing boat. The EHP estimation method for small coast fishing boats was developed from a statistical regression analysis of model test results in a circulating water channel. The regression formula for a fishing boat's effective horsepower was determined from regression analysis of resistance test results for 15 actual coast fishing boats. The method was applied to the effective horsepower prediction of a G/T 4 ton class coast fishing boat. By comparing the regression-formula estimate with a model test of the G/T 4 ton class boat, the estimation error was verified to be under 10 percent at the design speed. Accordingly, this regression-based effective horsepower prediction method can be used at the initial design and hull-form development stages.
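
A regression of effective horsepower on speed, in the spirit of the statistical approach above, might look like the sketch below. The data points and the log-log (power-law) model form are purely illustrative assumptions, not the paper's regression formula.

```python
import numpy as np

# Hypothetical model-test data: speed (knots) vs. measured EHP (PS)
speed = np.array([6.0, 7.0, 8.0, 9.0, 10.0])
ehp = np.array([5.1, 8.3, 12.9, 19.4, 28.2])

# Fit log(EHP) = a + b*log(V), i.e. a power-law regression EHP = exp(a) * V**b
b, a = np.polyfit(np.log(speed), np.log(ehp), 1)

def predict_ehp(v):
    """Predict effective horsepower at speed v from the fitted formula."""
    return np.exp(a) * v ** b

print(predict_ehp(9.5))
```

Such a formula is only trustworthy inside the speed and hull-form range of the boats used to fit it, which is why the paper restricts it to the initial design stage.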

Settlement Prediction Accuracy Analysis of Weighted Nonlinear Regression Hyperbolic Method According to the Weighting Method (가중치 부여 방법에 따른 가중 비선형 회귀 쌍곡선법의 침하 예측 정확도 분석)

  • Kwak, Tae-Young ;Woo, Sang-Inn;Hong, Seongho ;Lee, Ju-Hyung;Baek, Sung-Ha
    • Journal of the Korean Geotechnical Society
    • /
    • v.39 no.4
    • /
    • pp.45-54
    • /
    • 2023
  • Settlement prediction during the design phase is primarily conducted using theoretical methods. During construction, however, measurement-based methods, which predict future settlement from settlement data measured over time, are mainly used because of the accuracy limitations of theoretical predictions. Among these, the hyperbolic method is the most common, but the existing hyperbolic method has accuracy issues and statistical limitations, so a weighted nonlinear regression hyperbolic method has been proposed. In this study, two weighting methods were applied to the weighted nonlinear regression hyperbolic method to compare and analyze settlement prediction accuracy, using settlement plate data measured at two sites in Busan New Port. The settlement of the remaining section was predicted with the regression analysis section set to 30%, 50%, and 70% of the total data. Regardless of the weighting method, the accuracy of the hyperbolic settlement prediction increased markedly as the regression analysis section grew. The weighted nonlinear regression hyperbolic method predicted settlement more accurately than the existing linear regression hyperbolic method; in particular, it outperformed the linear method even with a smaller regression analysis section. It was therefore confirmed that the weighted nonlinear regression hyperbolic method can predict settlement both faster and more accurately.
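
The hyperbolic method models settlement as s(t) = t/(a + b·t), so the ultimate settlement is 1/b. A weighted nonlinear regression fit of this form can be sketched as below; the synthetic data and the linearly increasing weights are assumptions for illustration, not one of the paper's two weighting schemes.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(t, a, b):
    """Hyperbolic settlement model s(t) = t / (a + b*t)."""
    return t / (a + b * t)

# Synthetic settlement-plate record: true ultimate settlement 1/b = 100
t = np.linspace(10.0, 300.0, 30)
rng = np.random.default_rng(1)
s_meas = hyperbolic(t, 20.0, 0.01) + rng.normal(0.0, 0.05, t.size)

# Regress on the first 50% of the record, weighting later measurements
# more heavily (one possible weighting scheme)
n = t.size // 2
w = np.linspace(1.0, 3.0, n)
popt, _ = curve_fit(hyperbolic, t[:n], s_meas[:n],
                    p0=(10.0, 0.005), sigma=1.0 / w)
print(1.0 / popt[1])  # predicted ultimate settlement
```

Passing `sigma=1/w` to `curve_fit` makes the residuals of later points count more in the least-squares objective, which mirrors the idea of weighting more informative measurements.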

On Logistic Regression Analysis Using Propensity Score Matching (성향점수매칭 방법을 사용한 로지스틱 회귀분석에 관한 연구)

  • Kim, So Youn;Baek, Jong Il
    • Journal of Applied Reliability
    • /
    • v.16 no.4
    • /
    • pp.323-330
    • /
    • 2016
  • Purpose: Propensity score matching has recently been used in a large number of research papers; nevertheless, no study has compared goodness of fit before and after propensity score matching. Comparing the fit of logistic regression models before and after propensity score matching, using data from the 'online survey of adolescent health', is therefore the main contribution of this research. Method: Data with similar propensity in the two groups were extracted using propensity score matching, and logistic regression analysis was then performed separately on the data before and after matching. Results: To test the fit of the logistic regression models, we used the model summary, the -2 log-likelihood, and the Hosmer-Lemeshow test. The results confirmed that the data after matching are better suited to logistic regression analysis than the data before matching. Conclusion: Using propensity score matching therefore yields a better-fitting model.
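
The before/after-matching workflow can be sketched roughly as follows: fit a propensity model for treatment, match controls to treated units on the score, then refit the outcome logistic regression on the matched sample. The data-generating process and the greedy 1:1 matching with replacement are illustrative assumptions, not the paper's design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 400
x = rng.normal(size=(n, 2))                                   # covariates
treat = (rng.random(n) < 1 / (1 + np.exp(-x[:, 0]))).astype(int)
y = (rng.random(n) < 1 / (1 + np.exp(-(0.8 * treat + x[:, 1])))).astype(int)

# Step 1: propensity scores from a logistic model of treatment
ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# Step 2: greedy 1:1 nearest-neighbour matching on the score
# (with replacement, so controls may repeat)
t_idx = np.where(treat == 1)[0]
c_idx = np.where(treat == 0)[0]
matched = [c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))] for i in t_idx]
keep = np.concatenate([t_idx, np.array(matched)])

# Step 3: outcome logistic regression on the matched sample
model = LogisticRegression().fit(
    np.column_stack([treat[keep], x[keep, 1]]), y[keep])
print(model.coef_[0][0])  # treatment effect on the logit scale
```

Fit statistics such as the -2 log-likelihood or a Hosmer-Lemeshow test would then be computed on the before- and after-matching fits separately, as the paper does.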

ON THEIL'S METHOD IN FUZZY LINEAR REGRESSION MODELS

  • Choi, Seung Hoe;Jung, Hye-Young;Lee, Woo-Joo;Yoon, Jin Hee
    • Communications of the Korean Mathematical Society
    • /
    • v.31 no.1
    • /
    • pp.185-198
    • /
    • 2016
  • Regression analysis is a method for analyzing a regression model that explains the statistical relationship between explanatory variables and response variables. This paper proposes a fuzzy regression analysis applying Theil's method, which is not sensitive to outliers. The method uses medians of rates of increment, based on randomly chosen pairs of the components of the ${\alpha}$-level sets of the fuzzy data, to estimate the coefficients of the fuzzy regression model. An example and two simulation results show that the fuzzy Theil's estimator is more robust than the fuzzy least squares estimator.
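
For crisp data, Theil's estimator is the median of all pairwise slopes, which is what makes it insensitive to outliers. A minimal crisp version might look like the following; the paper extends this idea to the ${\alpha}$-level sets of fuzzy data, which is not reproduced here.

```python
import numpy as np

def theil_estimator(x, y):
    """Classical (crisp) Theil fit: slope = median of all pairwise slopes,
    intercept = median of the residuals y - b*x."""
    n = len(x)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(n) for j in range(i + 1, n)
              if x[j] != x[i]]
    b = np.median(slopes)
    a = np.median(y - b * np.asarray(x))
    return a, b

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[3] += 50.0  # a gross outlier barely moves the estimate
a, b = theil_estimator(x, y)
print(a, b)  # intercept 1.0, slope 2.0
```

Only 9 of the 45 pairwise slopes involve the corrupted point, so the median is unchanged, whereas a least squares fit would be pulled substantially.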

DC Motor Control using Regression Equation and PID Controller (회귀방정식과 PID제어기에 의한 DC모터 제어)

  • 서기영;이수흠;문상필;이내일;최종수
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2000.08a
    • /
    • pp.129-132
    • /
    • 2000
  • We propose a new method for optimized auto-tuning of the PID controller used for process control in various fields. In this method, the initial parameter values for the DC motor are first determined by the Ziegler-Nichols method. Then, after training a multiple regression model on the PID controller parameters, the optimized PID parameters are obtained by supplying new K, L, and T values to the multiple regression model.
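
The Ziegler-Nichols step-response rules that supply the initial PID values can be sketched as below; the paper's subsequent regression-based refinement of these parameters is not reproduced here.

```python
def ziegler_nichols_pid(K, L, T):
    """Classic Ziegler-Nichols step-response (reaction-curve) PID tuning.

    K: process gain, L: dead time, T: time constant of the
    first-order-plus-dead-time model fitted to the step response.
    """
    Kp = 1.2 * T / (K * L)  # proportional gain
    Ti = 2.0 * L            # integral time
    Td = 0.5 * L            # derivative time
    return Kp, Ti, Td

Kp, Ti, Td = ziegler_nichols_pid(K=2.0, L=0.5, T=4.0)
print(Kp, Ti, Td)  # 4.8 1.0 0.25
```

These rules give a workable starting point but often overshoot, which is why the paper layers a regression model on top to refine the parameters.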


Hybrid Fuzzy Least Squares Support Vector Machine Regression for Crisp Input and Fuzzy Output

  • Shim, Joo-Yong;Seok, Kyung-Ha;Hwang, Chang-Ha
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.2
    • /
    • pp.141-151
    • /
    • 2010
  • Hybrid fuzzy regression analysis is used to integrate randomness and fuzziness into a regression model. The least squares support vector machine (LS-SVM) has been very successful in pattern recognition and function estimation problems for crisp data. This paper proposes a new method for evaluating hybrid fuzzy linear and nonlinear regression models with crisp inputs and fuzzy output using weighted fuzzy arithmetic (WFA) and the LS-SVM. The LS-SVM allows us to perform fuzzy nonlinear regression analysis by constructing a fuzzy linear regression function in a high-dimensional feature space. The proposed method is not computationally expensive, since its solution is obtained from a simple system of linear equations. In particular, it is a very attractive approach to modeling nonlinear data, and it is nonparametric in the sense that no underlying model function needs to be assumed for the fuzzy nonlinear regression model with crisp inputs and fuzzy output. Experimental results are presented that indicate the performance of the method.
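
The crisp LS-SVM core of the method, a single linear system in place of a quadratic program, can be sketched as follows. The weighted fuzzy arithmetic needed for fuzzy outputs is omitted, and the RBF kernel and its hyperparameters are illustrative choices.

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=0.2):
    """LS-SVM regression: solve the dual linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]

    def predict(Xnew):
        d2n = np.sum((Xnew[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2n / (2 * sigma ** 2)) @ alpha + b

    return predict

X = np.linspace(0.0, 1.0, 40).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
predict = lssvm_fit(X, y)
print(np.max(np.abs(predict(X) - y)))  # small in-sample error
```

The whole fit is one `np.linalg.solve` call, which is the computational advantage the abstract refers to.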

Regression analysis and recursive identification of the regression model with unknown operational parameter variables, and its application to sequential design

  • Huang, Zhaoqing;Yang, Shiqiong;Sagara, Setsuo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1990.10b
    • /
    • pp.1204-1209
    • /
    • 1990
  • This paper presents a theory and method for regression analysis of regression models with operational parameter variables, based on the fundamentals of mathematical statistics. Regression coefficients are usually treated as constants in regression analysis; this paper instead considers coefficients that are not constants but functions of operational parameter variables, which yields a two-step regression model fitting method. The second part of the paper treats the experimental step number as a recursive variable and derives a recursive identification with unknown operational parameter variables that involves two recursive variables. Optimization and recursive identification are then combined to obtain a sequential optimum experiment design with operational parameter variables. The paper also offers a fast recursive algorithm for large numbers of sequential experiments.
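
The recursive identification building block resembles standard recursive least squares, sketched below with constant coefficients on synthetic data; the paper's extension makes the coefficients functions of operational parameter variables.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step: update the coefficient estimate
    theta with regressor phi and new observation y (lam = forgetting factor)."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    theta = theta + k * (y - phi @ theta)    # correct by prediction error
    P = (P - np.outer(k, Pphi)) / lam        # update inverse information
    return theta, P

rng = np.random.default_rng(3)
true_theta = np.array([1.5, -0.7])
theta = np.zeros(2)
P = np.eye(2) * 1e3
for _ in range(200):
    phi = rng.normal(size=2)
    y = phi @ true_theta + rng.normal(0.0, 0.01)
    theta, P = rls_update(theta, P, phi, y)
print(theta)  # close to [1.5, -0.7]
```

Each update costs only a few vector operations, which is what makes a fast recursive algorithm attractive for long sequential experiments.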


Simple principal component analysis using Lasso (라소를 이용한 간편한 주성분분석)

  • Park, Cheolyong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.3
    • /
    • pp.533-541
    • /
    • 2013
  • In this study, a simple principal component analysis using the Lasso is proposed. The method consists of two steps. The first step computes principal components by ordinary principal component analysis. The second step regresses each principal component on the original data matrix by the Lasso; each new principal component is then computed as a linear combination of the original data matrix, using the scaled Lasso coefficient estimates as the combination weights. Because the least squares estimator of the regression of each principal component on the original data matrix is the corresponding eigenvector, and Lasso regression drives many coefficients to zero, this yields easily interpretable principal components with more zero coefficients. The method is applied to real and simulated data sets with the help of an R package for Lasso regression, and its usefulness is demonstrated.
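
The two steps, ordinary PCA followed by a Lasso regression of each principal component on the data matrix, can be sketched as follows. The paper works in R; this is an equivalent illustration in Python on synthetic data, with an arbitrary Lasso penalty.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 6))
X[:, 1] += X[:, 0]              # induce some correlation
X = X - X.mean(axis=0)

# Step 1: ordinary PCA scores
pcs = PCA(n_components=2).fit_transform(X)

# Step 2: regress each PC on the original data with the Lasso; the
# scaled coefficients act as a sparse, interpretable loading vector
for j in range(2):
    coef = Lasso(alpha=0.1).fit(X, pcs[:, j]).coef_
    loading = coef / np.linalg.norm(coef)    # scale to unit length
    print(np.round(loading, 2))
```

Without the penalty, the regression coefficients would reproduce the eigenvector exactly; the L1 penalty trades a little accuracy for zeros that make the component easy to interpret.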

Robustness of model averaging methods for the violation of standard linear regression assumptions

  • Lee, Yongsu;Song, Juwon
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.2
    • /
    • pp.189-204
    • /
    • 2021
  • In a regression analysis, a single best model is usually selected among several candidate models. However, it is often useful to combine several candidate models to achieve better performance, especially from the prediction viewpoint. Model combining methods such as stacking and Bayesian model averaging (BMA) have been suggested from the perspective of averaging candidate models. When the candidate models include the true model, BMA is generally expected to perform better than stacking; when they do not, stacking is known to outperform BMA. Since stacking and BMA have different properties, it is difficult to determine which method is more appropriate in other situations. In particular, it is not easy to find research papers that compare stacking and BMA when regression model assumptions are violated. Therefore, in this paper, we compare the performance of model averaging methods and of a single best model in linear regression analysis when the standard linear regression assumptions are violated. Simulations were conducted to compare the model averaging methods with linear regression when the data do and do not include outliers, and when the errors come from a non-normal distribution. The model averaging methods were also applied to water pollution data with strong multicollinearity among the variables. The simulation studies showed that stacking tends to perform better than BMA or standard linear regression analysis (including the stepwise selection method) in terms of risk (see (3.1)) or prediction error (see (3.2)) when typical linear regression assumptions are violated.
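
A stacking ensemble of the kind compared in the study can be sketched with off-the-shelf tools. The base learners, the heavy-tailed error distribution, and all data below are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import HuberRegressor, LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))
# Non-normal (heavy-tailed) errors, one of the violations the paper studies
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(3, size=200)

# Stacking: base models are combined by a meta-learner fitted on
# cross-validated predictions
stack = StackingRegressor(
    estimators=[("ols", LinearRegression()),
                ("huber", HuberRegressor()),
                ("tree", DecisionTreeRegressor(max_depth=3, random_state=0))],
    final_estimator=LinearRegression(), cv=5)
stack.fit(X, y)
print(stack.predict(X[:3]))
```

Fitting the meta-learner on out-of-fold predictions is what lets stacking weight candidate models by predictive merit rather than by in-sample fit.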