• Title/Abstract/Keyword: Support vector machine (regression)

Search results: 381 items (processing time: 0.03 seconds)

Geographically weighted least squares-support vector machine

  • Hwang, Changha; Shim, Jooyong
    • Journal of the Korean Data and Information Science Society, Vol. 28, No. 1, pp. 227-235, 2017
  • When the spatial information of each location is given explicitly as coordinates, it is popular to use geographically weighted regression, which incorporates the spatial information by assuming that the regression parameters vary across locations. In this paper, we relax the linearity assumption of geographically weighted regression and propose a geographically weighted least squares-support vector machine for estimating the geographically weighted mean, using the basic concepts of kernel machines. A generalized cross-validation function is derived for model selection. Numerical studies with real datasets compare the performance of the proposed method with other methods for predicting the geographically weighted mean.
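
The abstract gives no estimator details; the sketch below is only a rough illustration of the geographically weighted idea, fitting a kernel ridge regression (a stand-in for LS-SVM) whose training points are weighted by a Gaussian spatial kernel around the target location. The toy data, bandwidth `tau`, and helper `gw_predict` are hypothetical, not from the paper.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def gw_predict(coords, X, y, target_coord, x_new, tau=1.0):
    """Geographically weighted prediction at one location (illustrative sketch).

    coords : (n, 2) spatial coordinates of the training observations
    X, y   : regressors and response
    tau    : bandwidth of the Gaussian geographic kernel
    """
    # Spatial weights: observations close to the target location count more.
    d2 = np.sum((coords - target_coord) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * tau ** 2))

    # Weighted kernel ridge regression as a stand-in for weighted LS-SVM.
    model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1.0)
    model.fit(X, y, sample_weight=w)
    return model.predict(np.atleast_2d(x_new))[0]

# Toy data: the response depends on x and drifts with location.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
X = rng.normal(size=(200, 1))
y = np.sin(X[:, 0]) * (1 + 0.1 * coords[:, 0]) + rng.normal(scale=0.1, size=200)

print(gw_predict(coords, X, y, target_coord=np.array([2.0, 5.0]), x_new=[0.3]))
```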

Fuzzy c-Regression Using Weighted LS-SVM

  • Hwang, Chang-Ha
    • Korean Data and Information Science Society: Conference Proceedings, 2005 Fall Conference, pp. 161-169, 2005
  • In this paper we propose a fuzzy c-regression model based on the weighted least squares support vector machine (LS-SVM), which can be used to detect outliers in the switching regression model while simultaneously yielding estimates of the outputs together with a fuzzy c-partition of the data. It can be applied to nonlinear regression in which the regression function has no explicit form. We illustrate the new algorithm with examples showing how it can be used to detect outliers and fit mixed data to nonlinear regression models.
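
As a rough illustration of the alternating scheme behind fuzzy c-regression, the sketch below uses ordinary weighted least squares components in place of the paper's weighted LS-SVM, with standard fuzzy-c membership updates from squared residuals; the function name and toy data are hypothetical.

```python
import numpy as np

def fuzzy_c_regression(X, y, c=2, m=2.0, n_iter=50, seed=0):
    """Fuzzy c-regression with linear component models (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Xb = np.column_stack([np.ones(n), X])          # add intercept
    U = rng.dirichlet(np.ones(c), size=n)          # random initial memberships

    for _ in range(n_iter):
        # M-step: weighted least squares fit for each regression component.
        betas = []
        for k in range(c):
            w = U[:, k] ** m
            W = np.diag(w)
            betas.append(np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y))
        # E-step: update memberships from squared residuals.
        D = np.column_stack([(y - Xb @ b) ** 2 for b in betas]) + 1e-12
        U = D ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return np.array(betas), U

# Toy switching-regression data: two lines mixed together.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = np.where(rng.random(200) < 0.5, 2 * x + 1, -x + 3) + rng.normal(scale=0.1, size=200)
betas, U = fuzzy_c_regression(x.reshape(-1, 1), y, c=2)
print(betas)
```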


Bankruptcy Prediction Using Support Vector Machines

  • 박정민; 김경재; 한인구
    • Asia Pacific Journal of Information Systems, Vol. 15, No. 2, pp. 51-63, 2005
  • There has been substantial research into bankruptcy prediction. Until the early 1980s most researchers used statistical methods for the problem; since the late 1980s, artificial intelligence (AI) techniques have been employed, and many studies have shown that artificial neural networks (ANN) achieve better performance than traditional statistical methods. However, despite its superior performance, the ANN has some problems such as overfitting and poor explanatory power. To overcome these limitations, this paper applies a relatively new machine learning technique, the support vector machine (SVM), to bankruptcy prediction. The SVM is simple enough to be analyzed mathematically and achieves high performance in practical applications. The objective of this paper is to examine the feasibility of the SVM in bankruptcy prediction by comparing it with ANN, logistic regression, and multivariate discriminant analysis. The experimental results show that the SVM provides a promising alternative for bankruptcy prediction.
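
A minimal sketch of the kind of comparison described (SVM versus logistic regression and discriminant analysis), assuming scikit-learn and a synthetic data set in place of the paper's bankruptcy data; the ANN comparison and the actual financial ratios are omitted.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a bankruptcy data set (feature vector + 0/1 label, imbalanced).
X, y = make_classification(n_samples=600, n_features=10, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(C=1.0, gamma="scale")),
    "Logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "Discriminant analysis": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()   # 5-fold CV accuracy
    print(f"{name}: {acc:.3f}")
```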

Multiclass Classification via Least Squares Support Vector Machine Regression

  • Shim, Joo-Yong; Bae, Jong-Sig; Hwang, Chang-Ha
    • Communications for Statistical Applications and Methods, Vol. 15, No. 3, pp. 441-450, 2008
  • In this paper we propose a new method for solving the multiclass problem with least squares support vector machine (LS-SVM) regression. The method implements a one-against-all scheme, which is as accurate as any other approach. We also propose a cross-validation (CV) method to select effectively the optimal values of the hyperparameters that affect the performance of the proposed multiclass method. Experimental results are presented which indicate the performance of the proposed multiclass method.
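
A minimal sketch of the one-against-all scheme with a regression machine, using kernel ridge regression as a stand-in for LS-SVM regression and the Iris data for illustration; the hyperparameter values are arbitrary rather than CV-selected as in the paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

# One-against-all multiclass classification with a regression machine:
# fit one regressor per class on +1/-1 targets and predict by arg-max.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classes = np.unique(y_tr)
machines = []
for k in classes:
    target = np.where(y_tr == k, 1.0, -1.0)
    machines.append(KernelRidge(kernel="rbf", gamma=0.5, alpha=0.1).fit(X_tr, target))

scores = np.column_stack([m.predict(X_te) for m in machines])
y_hat = classes[np.argmax(scores, axis=1)]
print("accuracy:", np.mean(y_hat == y_te))
```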

A Study on the Support Vector Machine Based Fuzzy Time Series Model

  • Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society, Vol. 17, No. 3, pp. 821-830, 2006
  • This paper develops support vector based fuzzy linear and nonlinear regression models and applies them to forecasting the exchange rate. We use the results of Tanaka (1982, 1987) for crisp input and output. The model makes it possible to forecast the best and worst possible situations based on fewer than 50 observations. We show with real data that the developed model performs well.
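
The fuzzy regression bounds of Tanaka are not reproduced here; the sketch below only shows the support vector regression backbone applied to lagged values of a toy series standing in for an exchange rate, under assumptions not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVR

# Toy series standing in for an exchange rate; fewer than 60 observations.
rng = np.random.default_rng(0)
rate = 1200 + np.cumsum(rng.normal(scale=5, size=60))

# Build lagged inputs: predict the next value from the previous `lags` values.
lags = 3
X = np.column_stack([rate[i:len(rate) - lags + i] for i in range(lags)])
y = rate[lags:]

model = SVR(kernel="rbf", C=100.0, epsilon=1.0, gamma="scale")
model.fit(X[:-1], y[:-1])                       # hold out the last point
print("forecast:", model.predict(X[-1:])[0], "actual:", y[-1])
```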


Design of a Controller Using Support Vector Regression

  • 황지환; 곽환주; 박귀태
    • Institute of Electronics Engineers of Korea: Conference Proceedings, 2009 Information and Control Symposium, pp. 320-322, 2009
  • Support vector learning has attracted great interest in the areas of pattern classification, function approximation, and abnormality detection. In this paper, we design a controller using support vector regression, which has good properties compared with the multi-layer perceptron or radial basis function network. The applicability of the presented method is illustrated with an example simulation.
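
One common way to use SVR in control, sketched below under assumptions not taken from the paper, is to learn an inverse model of the plant from input-output data and then query it with the current output and the desired next output; the toy plant and all names are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR

# Toy plant (treated as unknown; used only to generate data and to simulate).
def plant(y, u):
    return 0.8 * y + 0.5 * u + 0.1 * np.tanh(y)

# Collect (current output, next output) -> applied input pairs from random excitation.
rng = np.random.default_rng(0)
y_k = rng.uniform(-2, 2, 500)
u_k = rng.uniform(-2, 2, 500)
y_next = plant(y_k, u_k)

inverse_model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
inverse_model.fit(np.column_stack([y_k, y_next]), u_k)

# Use the learned inverse model as a controller tracking a set-point.
y, reference = 0.0, 1.5
for step in range(10):
    u = inverse_model.predict([[y, reference]])[0]   # input that should reach the reference
    y = plant(y, u)
    print(f"step {step}: y = {y:.3f}")
```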


Improvement of Support Vector Clustering using Evolutionary Programming and Bootstrap

  • Jun, Sung-Hae
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 8, No. 3, pp. 196-201, 2008
  • Statistical learning theory has three analytical tools: the support vector machine, support vector regression, and support vector clustering, for classification, regression, and clustering respectively. In general their performance is good because they are constructed by convex optimization, but the methods have some problems. One is that the kernel function parameters and the regularization constant are determined subjectively, by the judgment of the researcher, and the results of the learning machines depend on the selected parameters. In this paper, we propose an efficient method for objective determination of the parameters of support vector clustering, the clustering method of statistical learning theory. Using an evolutionary algorithm and the bootstrap method, we select the kernel function parameters and the regularization constant objectively. To verify the improved performance of the proposed approach, we compare our method with established learning algorithms using data sets from the UCI machine learning repository and synthetic data.
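
A minimal sketch of the evolutionary-programming-plus-bootstrap idea: candidate (gamma, C) pairs are mutated and selected by their mean out-of-bag accuracy over bootstrap resamples. A plain SVC classifier stands in for support vector clustering, which scikit-learn does not provide, so only the fitness function would differ in the paper's setting; all parameter ranges here are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
rng = np.random.default_rng(0)

def fitness(log_gamma, log_c, n_boot=10):
    """Mean out-of-bag accuracy over bootstrap resamples for one (gamma, C) candidate."""
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))          # bootstrap sample
        oob = np.setdiff1d(np.arange(len(X)), idx)     # out-of-bag points
        if len(oob) == 0 or len(np.unique(y[idx])) < 2:
            continue
        m = SVC(gamma=np.exp(log_gamma), C=np.exp(log_c)).fit(X[idx], y[idx])
        scores.append(m.score(X[oob], y[oob]))
    return np.mean(scores)

# (mu + lambda) evolutionary programming: mutate parents, keep the best candidates.
population = [rng.normal(0, 1, size=2) for _ in range(6)]
for generation in range(10):
    offspring = [p + rng.normal(0, 0.3, size=2) for p in population]
    population = sorted(population + offspring, key=lambda p: fitness(*p), reverse=True)[:6]

best = population[0]
print("best gamma, C:", np.exp(best[0]), np.exp(best[1]))
```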

Prediction Intervals for LS-SVM Regression using the Bootstrap

  • Shim, Joo-Yong; Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society, Vol. 14, No. 2, pp. 337-343, 2003
  • In this paper we present a prediction interval estimation method using the bootstrap for least squares support vector machine (LS-SVM) regression, which allows us to perform even nonlinear regression by constructing a linear regression function in a high-dimensional feature space. The bootstrap method is applied to generate bootstrap samples for estimating the covariance of the regression parameters, which consist of the optimal bias and the Lagrange multipliers. Experimental results are then presented which indicate the performance of this algorithm.
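
A minimal sketch of a percentile bootstrap prediction interval for kernel regression, with KernelRidge standing in for LS-SVM; resampled residuals are added so the interval covers new observations rather than only the fitted mean. This is a generic construction, not the covariance-based interval derived in the paper.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 6, 120)).reshape(-1, 1)
y = np.sin(x[:, 0]) + rng.normal(scale=0.2, size=120)

x_grid = np.linspace(0, 6, 50).reshape(-1, 1)
preds = []
for _ in range(500):
    idx = rng.integers(0, len(y), len(y))              # resample with replacement
    m = KernelRidge(kernel="rbf", gamma=1.0, alpha=0.1).fit(x[idx], y[idx])
    # Add resampled residual noise so the interval reflects observation-level variation.
    resid = y[idx] - m.predict(x[idx])
    preds.append(m.predict(x_grid) + rng.choice(resid, size=len(x_grid)))

lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)   # 95% prediction band
print(lower[:3], upper[:3])
```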


Weighted LS-SVM Regression for Right Censored Data

  • Kim, Dae-Hak; Jeong, Hyeong-Chul
    • Communications for Statistical Applications and Methods, Vol. 13, No. 3, pp. 765-776, 2006
  • In this paper we propose an estimation method for the regression model when the training data contain randomly right censored observations. Weighted least squares support vector machine regression is applied for estimating the regression function, incorporating the weights assigned to each observation into the optimization problem. Numerical examples are given to show the performance of the proposed estimation method.
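
The paper's particular weighting scheme is not reproduced here; a common alternative, sketched below, is inverse-probability-of-censoring weighting based on a Kaplan-Meier estimate of the censoring distribution, fed as sample weights to a kernel ridge fit standing in for weighted LS-SVM. The toy data and helper name are hypothetical.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def censoring_survival(times, events):
    """Kaplan-Meier estimate of the censoring survival function G(t).

    events == 1 means the response was observed, 0 means it was censored,
    so the 'events' tracked here are the censoring indicators (1 - events).
    """
    order = np.argsort(times)
    t, d = times[order], 1 - events[order]
    at_risk = len(t) - np.arange(len(t))
    surv = np.cumprod(1.0 - d / at_risk)
    return lambda s: np.interp(s, t, surv, left=1.0)

# Toy right-censored data.
rng = np.random.default_rng(0)
x = rng.uniform(0, 3, 200).reshape(-1, 1)
true_t = np.exp(0.5 * x[:, 0]) + rng.normal(scale=0.1, size=200)
cens = rng.exponential(3.0, size=200)
y = np.minimum(true_t, cens)                 # observed (possibly censored) response
delta = (true_t <= cens).astype(float)       # 1 if observed, 0 if censored

# Inverse-probability-of-censoring weights: censored points get weight 0.
G = censoring_survival(y, delta)
w = delta / np.clip(G(y), 1e-3, None)

model = KernelRidge(kernel="rbf", gamma=1.0, alpha=0.1)
model.fit(x, y, sample_weight=w)
print(model.predict([[1.0]]))
```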

Expected shortfall estimation using kernel machines

  • Shim, Jooyong; Hwang, Changha
    • Journal of the Korean Data and Information Science Society, Vol. 24, No. 3, pp. 625-636, 2013
  • In this paper we study four kernel machines for estimating expected shortfall, which are constructed through combinations of support vector quantile regression (SVQR), restricted SVQR (RSVQR), the least squares support vector machine (LS-SVM) and support vector expectile regression (SVER). These kernel machines have the obvious advantage that they achieve a nonlinear model without requiring the explicit form of the nonlinear mapping function. Moreover, they need no assumption about the underlying probability distribution of the errors. Through numerical studies on two artificial and two real data sets we show their effectiveness in estimation performance at various confidence levels.
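
A rough plug-in sketch of conditional expected shortfall estimation: a quantile regression gives the conditional alpha-quantile (VaR), and a second regression on the observations falling below it approximates ES(x) = E[Y | Y <= VaR_alpha(x)]. Gradient-boosted quantile regression and kernel ridge stand in for the paper's SVQR and LS-SVM machines; the data and parameter choices are illustrative only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.kernel_ridge import KernelRidge

alpha = 0.05
rng = np.random.default_rng(0)
x = rng.uniform(0, 4, 2000).reshape(-1, 1)
y = -0.5 * x[:, 0] + (0.3 + 0.2 * x[:, 0]) * rng.standard_normal(2000)   # toy returns

# Step 1: conditional alpha-quantile (VaR) via quantile regression.
var_model = GradientBoostingRegressor(loss="quantile", alpha=alpha)
var_model.fit(x, y)

# Step 2: regress the responses falling below the fitted quantile to approximate ES(x).
tail = y <= var_model.predict(x)
es_model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1.0)
es_model.fit(x[tail], y[tail])

x_new = np.array([[1.0], [3.0]])
print("VaR:", var_model.predict(x_new), "ES:", es_model.predict(x_new))
```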