• Title/Summary/Keyword: Regression Algorithm

Fuzzy c-Regression Using Weighted LS-SVM

  • Hwang, Chang-Ha
    • Proceedings of the Korean Data and Information Science Society Conference / 2005.10a / pp.161-169 / 2005
  • In this paper we propose a fuzzy c-regression model based on the weighted least squares support vector machine (LS-SVM), which can be used to detect outliers in the switching regression model while simultaneously yielding estimates of the outputs together with a fuzzy c-partition of the data. It can be applied to nonlinear regression problems in which the regression function has no explicit form. We illustrate the new algorithm with examples that show how it can be used to detect outliers and to fit mixed data to nonlinear regression models.
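
A minimal sketch of the idea described above, assuming a membership-weighted kernel ridge regressor as a stand-in for the weighted LS-SVM and the standard fuzzy c-regression membership update; the function names and parameter values below are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fuzzy_c_regression(X, y, c=2, m=2.0, n_iter=20, alpha=0.1, gamma=1.0):
    """Alternate between membership-weighted kernel fits and fuzzy membership updates."""
    n = len(y)
    U = np.random.dirichlet(np.ones(c), size=n)            # n x c fuzzy memberships
    models = [KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma) for _ in range(c)]
    for _ in range(n_iter):
        E = np.empty((n, c))                                # squared residuals per component
        for j, model in enumerate(models):
            model.fit(X, y, sample_weight=U[:, j] ** m)     # membership-weighted (LS-SVM-like) fit
            E[:, j] = (y - model.predict(X)) ** 2 + 1e-12
        # standard fuzzy c-regression membership update from squared residuals
        ratio = E[:, :, None] / E[:, None, :]
        U = 1.0 / (ratio ** (1.0 / (m - 1))).sum(axis=2)
    # points with large residuals under every component are natural outlier candidates
    return U, models
```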


A Comparative Study on the Genetic Algorithm and Regression Analysis in Urban Population Surface Modeling (도시인구분포모형 개발을 위한 GA모형과 회귀모형의 적합성 비교연구)

  • Choei, Nae-Young
    • Spatial Information Research / v.18 no.5 / pp.107-117 / 2010
  • Taking the East-Hwasung area as a case study, this study first builds gridded population data from the raw municipal population survey data and then measures, using GIS tools, the major urban spatial variables thought to influence the composition of the regional population. For comparison, urban models based on the Genetic Algorithm (GA) technique and on the regression technique are constructed from the same input variables. The findings indicate that the GA model was better at identifying the effective variables among the pilot model variables and produced coefficient estimates for the explanatory variables that were as consistent and meaningful as those of the regression models. The results suggest that the GA technique can be a very useful supplementary research tool for understanding urban phenomena.
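
To make the comparison concrete, here is a small, hedged sketch of how a genetic algorithm can evolve the coefficients of a linear population-surface model and be benchmarked against an OLS fit; the design matrix X, response y, and all GA settings are illustrative assumptions, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sse(beta, X, y):
    return np.sum((y - X @ beta) ** 2)                     # fitness = sum of squared errors

def ga_fit(X, y, pop_size=60, n_gen=200, sigma=0.1):
    pop = rng.normal(size=(pop_size, X.shape[1]))
    for _ in range(n_gen):
        order = np.argsort([sse(ind, X, y) for ind in pop])
        parents = pop[order[: pop_size // 2]]              # truncation selection
        i1 = rng.integers(len(parents), size=pop_size - len(parents))
        i2 = rng.integers(len(parents), size=pop_size - len(parents))
        blend = rng.random((len(i1), 1))
        children = blend * parents[i1] + (1 - blend) * parents[i2]   # blend crossover
        children += rng.normal(scale=sigma, size=children.shape)     # Gaussian mutation
        pop = np.vstack([parents, children])
    return min(pop, key=lambda ind: sse(ind, X, y))

# Benchmark against the regression model on the same gridded data (hypothetical X, y):
# beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
# beta_ga  = ga_fit(X, y)
```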

Outlier Identification in Regression Analysis using Projection Pursuit

  • Kim, Hyojung;Park, Chongsun
    • Communications for Statistical Applications and Methods / v.7 no.3 / pp.633-641 / 2000
  • In this paper, we propose a method to identify multiple outliers in regression analysis under only a smoothness assumption on the regression function. Our method uses the single-linkage clustering algorithm and Projection Pursuit Regression (PPR). It was compared with existing methods on several simulated and real examples and turned out to be very useful for regression problems in which the regression function is far from linear.
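
A hedged sketch of the general recipe, not the authors' implementation: fit a flexible regression, then apply single-linkage clustering to the residuals and flag small, well-separated clusters as outliers. A gradient-boosted regressor stands in for Projection Pursuit Regression, which scikit-learn does not provide, and the thresholds are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.ensemble import GradientBoostingRegressor

def flag_outliers(X, y, cut=3.0, max_cluster_frac=0.1):
    smooth = GradientBoostingRegressor().fit(X, y)          # stand-in for a PPR fit
    resid = (y - smooth.predict(X)).reshape(-1, 1)
    Z = linkage(resid, method="single")                     # single-linkage on residuals
    labels = fcluster(Z, t=cut * resid.std(), criterion="distance")
    sizes = np.bincount(labels)
    # observations falling in clusters much smaller than the bulk are flagged
    return np.where(sizes[labels] < max_cluster_frac * len(y))[0]
```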


A Hybrid Algorithm for Identifying Multiple Outliers in Linear Regression

  • Kim, Bu-yong;Kim, Hee-young
    • Communications for Statistical Applications and Methods / v.9 no.1 / pp.291-304 / 2002
  • This article is concerned with an effective algorithm for the identification of multiple outliers in linear regression. It proposes a hybrid algorithm which employs the least median of squares estimator, instead of the least squares estimator, to construct an initial clean subset in the stepwise forward search scheme. The performance of the proposed algorithm is evaluated and compared with an existing competitor via an extensive Monte Carlo simulation. The algorithm appears to be superior to the competitor for most of the scenarios explored in the simulation study; in particular, it copes with the masking problem quite well. In addition, orthogonal decomposition and its updating techniques are considered to improve the computational efficiency and numerical stability of the algorithm.
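
A minimal sketch of the two stages described above, assuming random elemental subsets for the least median of squares step and a simple residual cutoff for the forward search; the subset sizes, trial count, and cutoff are illustrative choices, and the paper's orthogonal-decomposition updating is omitted.

```python
import numpy as np

def lms_clean_subset(X, y, n_trials=500, seed=0):
    """Least median of squares via random elemental subsets -> initial clean half-sample."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_med, best_beta = np.inf, None
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        med = np.median((y - X @ beta) ** 2)
        if med < best_med:
            best_med, best_beta = med, beta
    return np.argsort(np.abs(y - X @ best_beta))[: n // 2]

def forward_search(X, y, clean_idx, cutoff=2.5):
    """Grow the clean subset stepwise; points never absorbed are declared outliers."""
    clean = set(clean_idx.tolist())
    while True:
        idx = np.array(sorted(clean))
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        s = np.sqrt(np.mean((y[idx] - X[idx] @ beta) ** 2))
        new = [i for i in range(len(y))
               if i not in clean and abs(y[i] - X[i] @ beta) < cutoff * s]
        if not new:
            return np.array(sorted(set(range(len(y))) - clean))
        clean.update(new)
```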

Optimized Neural Network Weights and Biases Using Particle Swarm Optimization Algorithm for Prediction Applications

  • Ahmadzadeh, Ezat;Lee, Jieun;Moon, Inkyu
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1406-1420 / 2017
  • Artificial neural networks (ANNs) play an important role in function approximation, prediction, and classification. ANN performance depends critically on the input parameters, including the number of neurons in each layer and the optimal values of the weights and biases assigned to each neuron. In this study, we apply particle swarm optimization (PSO), a popular optimization algorithm, to determine the optimal values of the weights and biases for every neuron in the different layers of the ANN. Several regression models, including general linear regression, Fourier regression, smoothing splines, and polynomial regression, are used to evaluate the proposed method's prediction power compared to multiple linear regression (MLR). In addition, residual analysis is conducted to evaluate the accuracy of the optimized ANN on both training and test datasets. The experimental results demonstrate that the proposed method can effectively determine optimal values for neuron weights and biases, yielding high accuracy in prediction applications, and that the designed model provides a reliable optimization technique. The simulation results show that the optimized ANN exhibits superior performance to MLR for prediction purposes.
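
As an illustration of the mechanics (not the paper's code), the sketch below uses particle swarm optimization to search the flattened weight and bias vector of a small one-hidden-layer regression network; the swarm size, inertia, and acceleration coefficients are common textbook values assumed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(theta, X, n_hidden):
    d = X.shape[1]
    W1 = theta[: d * n_hidden].reshape(d, n_hidden)
    b1 = theta[d * n_hidden : d * n_hidden + n_hidden]
    W2 = theta[d * n_hidden + n_hidden : -1].reshape(n_hidden, 1)
    b2 = theta[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2                  # tanh hidden layer, linear output

def pso_train(X, y, n_hidden=8, n_particles=30, n_iter=300, w=0.7, c1=1.5, c2=1.5):
    dim = X.shape[1] * n_hidden + n_hidden + n_hidden + 1  # total weights and biases
    pos = rng.normal(size=(n_particles, dim))
    vel = np.zeros_like(pos)
    cost = lambda th: np.mean((mlp_forward(th, X, n_hidden).ravel() - y) ** 2)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)]
    return gbest                                           # best weight/bias vector found
```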

Support Vector Machine for Interval Regression

  • Hong Dug Hun;Hwang Changha
    • Proceedings of the Korean Statistical Society Conference / 2004.11a / pp.67-72 / 2004
  • Support vector machines (SVMs) have been very successful in pattern recognition and function estimation for crisp data. This paper proposes a new method for evaluating interval linear and nonlinear regression models by combining the possibility and necessity estimation formulation with the principle of SVM. For data sets with crisp inputs and interval outputs, possibility and necessity models have recently been used; they are based on a quadratic programming approach, which gives more diverse spread coefficients than a linear programming one. SVM also uses quadratic programming, and a further advantage in interval regression analysis is that it can integrate both the central-tendency property of least squares and the possibilistic property of fuzzy regression without being computationally expensive. SVM allows us to perform interval nonlinear regression analysis by constructing an interval linear regression function in a high-dimensional feature space, which makes it a very attractive approach for modeling nonlinear interval data. The proposed algorithm is model-free in the sense that we do not have to assume an underlying model function for the interval nonlinear regression model with crisp inputs and interval outputs. Experimental results are presented which indicate the performance of this algorithm.
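
A deliberately simplified, hedged stand-in for the idea of nonlinear interval regression with crisp inputs and interval outputs: fit one kernel SVR to each interval endpoint so that the pair of fitted curves forms the regression band. This is not the paper's possibility/necessity quadratic-programming formulation; the kernel choice and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

def interval_svr(X, y_lo, y_hi, C=10.0, epsilon=0.1):
    """Fit lower/upper endpoint models; return a predictor for the interval band."""
    lower = SVR(kernel="rbf", C=C, epsilon=epsilon).fit(X, y_lo)
    upper = SVR(kernel="rbf", C=C, epsilon=epsilon).fit(X, y_hi)
    def predict(X_new):
        lo, hi = lower.predict(X_new), upper.predict(X_new)
        return np.minimum(lo, hi), np.maximum(lo, hi)       # keep the band ordered
    return predict
```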


Density Adaptive Grid-based k-Nearest Neighbor Regression Model for Large Dataset (대용량 자료에 대한 밀도 적응 격자 기반의 k-NN 회귀 모형)

  • Liu, Yiqi;Uk, Jung
    • Journal of Korean Society for Quality Management / v.49 no.2 / pp.201-211 / 2021
  • Purpose: This paper proposes a density adaptive grid algorithm for the k-NN regression model to reduce computation time for large datasets without significant loss of prediction accuracy. Methods: The proposed method uses a grid with centroids to reduce the number of reference data points, so that the required computation time is greatly reduced. Since the grid generation process is based on quantiles of the original variables, the proposed method fully reflects the density information of the original reference data set. Results: Using five real-life datasets, the proposed k-NN regression model is compared with the original k-NN regression model. The results show that the proposed density adaptive grid-based k-NN regression model is superior to the original k-NN regression in terms of data reduction ratio and time efficiency, and provides a similar prediction error if an appropriate number of grid cells is selected. Conclusion: The proposed density adaptive grid algorithm for the k-NN regression model is simple and effective, avoiding a large loss of prediction accuracy while offering faster execution and lower memory requirements during the testing phase.
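
A minimal sketch of the quantile-grid reduction, assuming scikit-learn's k-NN regressor for the final model: each predictor is binned at its quantiles, every occupied cell is collapsed to its centroid and mean response, and k-NN is trained on the much smaller centroid set. The number of bins and the value of k are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def grid_knn_fit(X, y, n_bins=10, k=5):
    # quantile edges per variable give a density adaptive (not equal-width) grid
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
             for j in range(X.shape[1])]
    cells = np.stack([np.searchsorted(e, X[:, j]) for j, e in enumerate(edges)], axis=1)
    _, inv = np.unique(cells, axis=0, return_inverse=True)
    inv = inv.ravel()
    centroids = np.array([X[inv == c].mean(axis=0) for c in range(inv.max() + 1)])
    cell_y = np.array([y[inv == c].mean() for c in range(inv.max() + 1)])
    # ordinary k-NN regression, but on the reduced set of cell centroids
    return KNeighborsRegressor(n_neighbors=min(k, len(cell_y))).fit(centroids, cell_y)
```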

Research on the Least Mean Square Algorithm Based on Equivalent Wiener-Hopf Equation (등가의 Wiener-Hopf 방정식을 이용한 LMS 알고리즘에 관한 연구)

  • Ahn, Bong-Man;Hwang, Jee-Won;Cho, Ju-Phil
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.5C / pp.403-412 / 2008
  • This paper presents methods that obtain the solution of the Wiener-Hopf equation by the LMS algorithm and obtain the coefficients of the TDL filter directly from a lattice filter. To this end, we apply an orthogonal input signal generated by a lattice filter to an equivalent Wiener-Hopf equation and show a scheme that can obtain the solution using the MMSE algorithm. Conventionally, such a scheme obtains an error and the regression coefficients recursively; in this paper, we obtain the error and the coefficients of the TDL filter recursively. We also present a theoretical analysis of the convergence characteristics of the proposed algorithm and find that the result is similar to the conventional analysis. Computer simulations confirm that the proposed algorithm has excellent performance.
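
For background, here is the conventional LMS recursion on a tapped delay line (TDL) filter that the paper builds on; it is not the proposed lattice-based equivalent Wiener-Hopf scheme, and the filter length and step size are illustrative.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    """x: input signal, d: desired signal; returns final TDL weights and error history."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]                          # current TDL contents
        e[n] = d[n] - w @ u                                # a priori estimation error
        w = w + mu * e[n] * u                              # LMS weight update
    return w, e
```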

Learning algorithms for big data logistic regression on RHIPE platform (RHIPE 플랫폼에서 빅데이터 로지스틱 회귀를 위한 학습 알고리즘)

  • Jung, Byung Ho;Lim, Dong Hoon
    • Journal of the Korean Data and Information Science Society / v.27 no.4 / pp.911-923 / 2016
  • Machine learning becomes increasingly important in the big data era. Logistic regression is a type of classification in machine learning and has been widely used in various fields, including medicine, economics, marketing, and the social sciences. RHIPE, which integrates the R and Hadoop environments, has not been discussed by many researchers owing to the difficulty of its installation and of MapReduce implementation. In this paper, we present MapReduce implementations of the gradient descent algorithm and the Newton-Raphson algorithm for logistic regression using RHIPE. The Newton-Raphson algorithm does not require a learning rate, while the gradient descent algorithm needs a manually chosen learning rate. We choose the learning rate by a mixed procedure of grid search and binary search in order to process big data efficiently. In the performance study, our Newton-Raphson algorithm outperforms the gradient descent algorithm on all the tested data.
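
A single-machine sketch of the two learners being compared (the paper itself implements them as RHIPE/MapReduce jobs); X is assumed to already contain an intercept column, and the learning rate and iteration counts are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gd(X, y, lr=0.1, n_iter=1000):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (sigmoid(X @ beta) - y) / len(y)      # gradient of the mean log-loss
        beta -= lr * grad                                  # lr must be tuned (e.g. grid search)
    return beta

def logistic_newton(X, y, n_iter=10):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ beta)
        H = X.T @ (X * (p * (1 - p))[:, None])             # Hessian of the log-loss
        beta -= np.linalg.solve(H, X.T @ (p - y))          # no learning rate required
    return beta
```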

A Novel Decoding Algorithm Using MLLR Adaptation for Speaker Verification (MLLR 화자적응 기법을 이용한 새로운 화자확인 디코딩 알고리듬)

  • 김강열;김지운;정재호
    • The Journal of the Acoustical Society of Korea / v.21 no.2 / pp.190-198 / 2002
  • In general, the Viterbi algorithm of speech recognition is used for decoding, but a decoder in speaker verification has to treat the same word differently for every speaker. In this paper, we propose a novel decoding algorithm that can replace the typical Viterbi algorithm in a speaker verification system. The proposed algorithm utilizes speaker adaptation algorithms that transform feature vectors into the region of the client's characteristics in speech recognition. Among the many adaptation algorithms, we adopt MLLR (Maximum Likelihood Linear Regression) and MAP (Maximum A-Posteriori) adaptation for the proposed algorithm. Using the proposed algorithm instead of the typical Viterbi algorithm, we achieve a performance improvement of about 30% in EER (Equal Error Rate).
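
A heavily simplified illustration of the MLLR idea that the decoder relies on: estimate one global affine transform that maps the model's Gaussian means toward the speaker's adaptation frames. Covariance weighting, state alignment, and the paper's decoding changes are omitted, and hard frame-to-Gaussian assignments are assumed.

```python
import numpy as np

def mllr_global_transform(frames, assigned_means):
    """frames: (T, d) adaptation observations; assigned_means: (T, d) Gaussian means
    aligned to each frame. Returns W of shape (d, d + 1) so that mu_adapted = W @ [1, mu]."""
    T = len(frames)
    ext = np.hstack([np.ones((T, 1)), assigned_means])     # extended means [1, mu]
    W_t, *_ = np.linalg.lstsq(ext, frames, rcond=None)     # least squares fit of the transform
    return W_t.T

def adapt_means(W, means):
    ext = np.hstack([np.ones((len(means), 1)), means])
    return ext @ W.T                                       # speaker-adapted Gaussian means
```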