• Title/Summary/Keyword: Support vector regression (SVR)

63 search results (processing time: 0.01 seconds)

A Differential Evolution based Support Vector Clustering (차분진화 기반의 Support Vector Clustering)

  • Jun, Sung-Hae
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.5 / pp.679-683 / 2007
  • Statistical learning theory by Vapnik includes the support vector machine (SVM), support vector regression (SVR), and support vector clustering (SVC) for classification, regression, and clustering, respectively. Among these algorithms, SVC is an effective clustering method that uses support vectors based on a Gaussian kernel function. However, like SVM and SVR, SVC requires optimal selection of its kernel parameters and regularization constant. In practice, these parameters have typically been determined by researchers' intuition or by grid search, which demands heavy computing time. In this paper, we propose a differential evolution based SVC (DESVC), which combines differential evolution with SVC for efficient selection of the kernel parameters and regularization constant. To verify the improved performance of DESVC, we conduct experiments using data sets from the UCI machine learning repository and simulation.
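The search strategy the abstract describes can be sketched in a few lines. scikit-learn ships no support vector *clustering* estimator, so this hypothetical example tunes an SVR model instead; the idea of replacing grid search with differential evolution over the kernel parameter and regularization constant is the same. The data, bounds, and DE settings are illustrative assumptions.

```python
# Sketch: differential evolution over (gamma, C) instead of a grid search.
# SVR stands in for SVC, which scikit-learn does not provide for clustering.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=120)

def cv_error(params):
    gamma, C = params
    model = SVR(kernel="rbf", gamma=gamma, C=C)
    # cross_val_score returns negative MSE; flip the sign so DE minimizes error
    return -cross_val_score(model, X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

result = differential_evolution(cv_error,
                                bounds=[(1e-3, 10.0), (1e-2, 100.0)],
                                seed=0, maxiter=10, popsize=8, tol=1e-3)
best_gamma, best_C = result.x
print(best_gamma, best_C, result.fun)
```

Each candidate parameter pair is scored by cross-validated error, so the evolution needs far fewer model fits than an exhaustive grid of comparable resolution.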

Estimating Software Development Cost using Support Vector Regression (Support Vector Regression을 이용한 소프트웨어 개발비 예측)

  • Park, Chan-Kyoo
    • Korean Management Science Review / v.23 no.2 / pp.75-91 / 2006
  • The purpose of this paper is to propose a new software development cost estimation method using SVR (Support Vector Regression). SVR, one of the machine learning techniques, has been attracting much attention for its theoretical clarity and good performance over other machine learning techniques. This paper may be the first study in which SVR is applied to the field of software cost estimation. To derive the new method, we analyze historical cost data, including both well-known overseas and domestic software projects, and define the cost drivers affecting software cost. Then, the SVR model is trained on the historical data and its estimation accuracy is compared with that of a linear regression model. Experimental results show that the SVR model produces more accurate predictions than the linear regression model.

Semiparametric Nu-Support Vector Regression (정해진 기저함수가 포함되는 Nu-SVR 학습방법)

  • Kim, Young-Il;Cho, Won-Hee;Park, Joo-Young
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.05a / pp.81-84 / 2003
  • $\varepsilon$-SVR ($\varepsilon$-Support Vector Regression) is a function approximation (regression) method using support vectors (SVs) that has recently attracted attention. As one variant of the SV machine (SVM), it can replace neural-network-based architectures, whose learning algorithms often converge to local optima during training. In standard $\varepsilon$-SVR, the tolerable error bound $\varepsilon$ between the training data and the approximating function f is fixed before training. In the Nu-SVR (ν-version SVR) method, however, an optimized value of $\varepsilon$ is obtained as a result of training. The $\varepsilon$-SVR method with fixed basis functions (semiparametric SVR) approximates a function using predetermined independent basis functions, and it has been successfully shown to outperform standard $\varepsilon$-SVR. Accordingly, this paper proposes a ν-SVR learning method that incorporates fixed basis functions and derives the corresponding formulation. Simulation experiments then examine the applicability of the proposed semiparametric ν-SVR method.

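The contrast drawn above, with ε fixed a priori in ε-SVR versus a tube width found during training in ν-SVR, can be illustrated with scikit-learn's stock estimators. The semiparametric fixed-basis variant has no off-the-shelf implementation, so this sketch covers only the ε-SVR/ν-SVR distinction; the data and hyperparameters are illustrative assumptions.

```python
# epsilon-SVR: epsilon chosen before training.
# Nu-SVR: nu chosen instead; nu lower-bounds the fraction of support vectors,
# and the epsilon tube is effectively optimized during training.
import numpy as np
from sklearn.svm import SVR, NuSVR

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.05, size=200)

eps_svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)  # epsilon fixed a priori
nu_svr = NuSVR(kernel="rbf", C=10.0, nu=0.5).fit(X, y)      # tube width learned

# nu acts as a lower bound on the support vector fraction
sv_fraction = len(nu_svr.support_) / len(X)
print(sv_fraction, nu_svr.score(X, y))
```

With ν = 0.5, roughly half the training points end up as support vectors, which is the practical handle ν gives that a hand-picked ε does not.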

Semi-supervised regression based on support vector machine

  • Seok, Kyungha
    • Journal of the Korean Data and Information Science Society / v.25 no.2 / pp.447-454 / 2014
  • In many practical machine learning and data mining applications, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms have attracted much attention. However, previous research has mainly focused on classification problems. In this paper, a semi-supervised regression method based on the support vector regression (SVR) formulation is proposed. The estimator is easily obtained via the dual formulation of the optimization problem. Experimental results with simulated and real data suggest superior performance of the proposed method compared with standard SVR.
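The paper derives its estimator from a dedicated dual formulation, which scikit-learn does not ship. As a generic illustration of the setting (cheap unlabeled inputs, few expensive labels), the sketch below uses simple self-training with pseudo-labels; this is an assumed stand-in, not the authors' method.

```python
# Generic semi-supervised regression sketch: fit SVR on the few labeled
# points, pseudo-label the unlabeled pool, then refit on everything.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.05, size=300)

labeled = slice(0, 30)      # few expensive labels
unlabeled = slice(30, 300)  # plentiful unlabeled inputs

base = SVR(kernel="rbf", C=10.0).fit(X[labeled], y[labeled])
pseudo = base.predict(X[unlabeled])              # pseudo-labels for the pool
X_all = np.vstack([X[labeled], X[unlabeled]])
y_all = np.concatenate([y[labeled], pseudo])
semi = SVR(kernel="rbf", C=10.0).fit(X_all, y_all)

print(base.score(X, y), semi.score(X, y))
```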

A Reliability Prediction Method for Weapon Systems using Support Vector Regression (지지벡터회귀분석을 이용한 무기체계 신뢰도 예측기법)

  • Na, Il-Yong
    • Journal of the Korea Institute of Military Science and Technology / v.16 no.5 / pp.675-682 / 2013
  • Reliability analysis and prediction of the next failure time are critical to sustaining weapon systems, concerning scheduled maintenance, spare parts replacement, maintenance interventions, etc. Since 1981, many methodologies derived from various probabilistic and statistical theories have been suggested for this task. Nowadays, many A.I. tools are used to support these predictions. Support Vector Regression (SVR) is a nonlinear regression technique extended from the support vector machine. SVR can fit data flexibly and has a wide variety of applications. This paper utilizes SVM and SVR, combined with time series analysis, to predict the next failure time based on historical failure data. A numerical case using failure data from military equipment is presented to demonstrate the performance of the proposed approach. Finally, the proposed approach proves meaningful for predicting the next failure point and for estimating the instantaneous failure rate and MTBF.

Forecasting Exchange Rates using Support Vector Machine Regression

  • Chen, Shi-Yi;Jeong, Ki-Ho
    • Proceedings of the Korean Data and Information Science Society Conference / 2005.04a / pp.155-163 / 2005
  • This paper applies Support Vector Regression (SVR) to estimate and forecast a nonlinear autoregressive integrated (ARI) model of the daily exchange rates of four currencies (Swiss Francs, Indian Rupees, South Korean Won and Philippine Pesos) against the U.S. dollar. The forecasting ability of SVR is compared with a linear ARI model estimated by OLS. The sensitivity of the SVR results to the kernel type and other free parameters is also examined. The empirical findings are in favor of SVR: it forecasts the exchange rate level better than the linear ARI model and shows superior ability in forecasting the direction of exchange rates in the short test phase, but performs similarly to OLS when forecasting turning points in the long test phase.

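The autoregressive setup described above can be sketched as lagged feature/target pairs fed to SVR, with an OLS fit as the linear benchmark. A synthetic nonlinear series stands in for the exchange rates; the lag order and hyperparameters are illustrative assumptions.

```python
# Build lagged AR features from a series, fit SVR vs. OLS, and compare
# one-step-ahead forecast error on a held-out tail.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(3)
# nonlinear AR(1) series as a stand-in for a daily exchange rate
r = np.zeros(400)
for t in range(1, 400):
    r[t] = 0.9 * np.tanh(r[t - 1]) + rng.normal(scale=0.1)

p = 3  # number of lags
X = np.column_stack([r[i:len(r) - p + i] for i in range(p)])
y = r[p:]
X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

svr = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
ols = LinearRegression().fit(X_tr, y_tr)
rmse = lambda m: float(np.sqrt(np.mean((m.predict(X_te) - y_te) ** 2)))
print(rmse(svr), rmse(ols))
```

The split is chronological rather than random, since shuffled folds would leak future observations into the training set.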

Support Vector Regression based on Immune Algorithm for Software Cost Estimation (소프트웨어 비용산정을 위한 면역 알고리즘 기반의 서포트 벡터 회귀)

  • Kwon, Ki-Tae;Lee, Joon-Gil
    • Journal of the Korea Society of Computer and Information / v.14 no.7 / pp.17-24 / 2009
  • The increasing use of information systems has led to larger development expenses and greater demand for software. Until recently, regression models based on statistical algorithms have been used, but machine learning is now being investigated more actively. This paper estimates software cost using SVR (Support Vector Regression), a machine learning technique, and finds the best set of parameters by applying an immune algorithm. Software cost estimation is performed by immune-algorithm-based SVR while varying the population size, the number of memory cells, and the number of alleles. Finally, the results are analyzed and compared with those of other existing machine learning methods.

Runoff Prediction from Machine Learning Models Coupled with Empirical Mode Decomposition: A case Study of the Grand River Basin in Canada

  • Parisouj, Peiman;Jun, Changhyun;Nezhad, Somayeh Moghimi;Narimani, Roya
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.136-136 / 2022
  • This study investigates the possibility of coupling empirical mode decomposition (EMD) with machine learning (ML) models for runoff prediction. Here, support vector regression (SVR) and a convolutional neural network (CNN) were considered as the ML algorithms. Precipitation (P), minimum temperature (Tmin), maximum temperature (Tmax) and their intrinsic mode function (IMF) values were used as input variables at a monthly scale from Jan. 1973 to Dec. 2020 in the Grand River basin, Canada. The support vector machine-recursive feature elimination (SVM-RFE) technique was applied to find the best combination of predictors among the input variables. The results show that the proposed method outperformed the individual SVR and CNN models during the training and testing periods in the study area. According to the correlation coefficient (R), the EMD-SVR model outperformed the EMD-CNN model in both training and testing, even though the CNN performed better than the SVR before IMF values were used. The EMD-SVR model showed a higher improvement in R value (38.7%) than the EMD-CNN model (7.1%). It should be noted that the coupled EMD-SVR and EMD-CNN models achieved much higher accuracy in runoff prediction with respect to the considered evaluation indicators, including root mean square error (RMSE) and R values.

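The SVM-RFE step mentioned above can be sketched with scikit-learn's recursive feature elimination wrapped around a linear-kernel SVR, which ranks candidate predictors by their coefficients. The synthetic features below stand in for P, Tmin, Tmax and the IMF components; the target construction is an illustrative assumption.

```python
# SVM-RFE sketch: recursively drop the weakest predictors according to a
# linear SVR's coefficients until only the strongest remain.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.normal(size=(150, 6))  # 6 candidate predictors
# only predictors 0 and 2 actually drive the target
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.1, size=150)

# RFE needs coefficients to rank features, hence the linear kernel
selector = RFE(SVR(kernel="linear"), n_features_to_select=2).fit(X, y)
print(selector.support_)  # boolean mask of the kept predictors
```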

Generalized Support Vector Quantile Regression (일반화 서포트벡터 분위수회귀에 대한 연구)

  • Lee, Dongju;Choi, Sujin
    • Journal of Korean Society of Industrial and Systems Engineering / v.43 no.4 / pp.107-115 / 2020
  • Support vector regression (SVR) is devised to solve regression problems by utilizing the excellent predictive power of the Support Vector Machine. In particular, the ε-insensitive loss function often used in SVR is a loss function that generates no penalty if the difference between the actual value and the estimated regression curve is within ε. In most studies, the ε-insensitive loss function is used symmetrically, and determining the value of ε is of interest. In SVQR (Support Vector Quantile Regression), the asymmetry of the width of ε and the slope of the penalty are controlled using a parameter p. However, the slope of the penalty is fixed by the p value that determines the asymmetry of ε. In this study, a new ε-insensitive loss function with parameters p1 and p2 is proposed. A new asymmetric SVR called GSVQR (Generalized Support Vector Quantile Regression), based on this loss function, can control the asymmetry of the width of ε and the slope of the penalty independently using p1 and p2, respectively. Figures show that the asymmetry of the width of ε and of the slope of the penalty is indeed controlled. Finally, through an experiment on a test function, the accuracy of the existing symmetric soft-margin SVR, asymmetric SVQR, and asymmetric GSVQR is examined, and the characteristics of each are shown through figures.
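The abstract does not give the loss formula itself, so the function below is one plausible rendering of an ε-insensitive loss with independently controlled band asymmetry and slope asymmetry; the use of p1 for the band split and p2 for the slope split is an assumption borrowed from the parameter names above, not the paper's definition.

```python
# Hypothetical asymmetric epsilon-insensitive loss in the spirit of GSVQR:
# p1 splits the width of the epsilon band, p2 splits the penalty slopes.
def gsvqr_loss(u, eps=1.0, p1=0.5, p2=0.5):
    """Loss for residual u = y - f(x); zero inside the asymmetric band."""
    eps_up = p1 * eps            # upper half-width of the insensitive band
    eps_low = (1.0 - p1) * eps   # lower half-width
    if u > eps_up:
        return p2 * (u - eps_up)            # slope p2 above the band
    if u < -eps_low:
        return (1.0 - p2) * (-u - eps_low)  # slope 1 - p2 below the band
    return 0.0                              # inside the band: no penalty

# p1 = p2 = 0.5 recovers a symmetric epsilon-insensitive loss (up to scale)
print(gsvqr_loss(0.3), gsvqr_loss(2.0, p1=0.8, p2=0.9))
```

Setting p1 ≠ 0.5 shifts where the band sits around the regression curve, while p2 ≠ 0.5 penalizes over- and under-prediction at different rates, which is what drives the quantile behavior.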

Seismic response of soil-structure interaction using the support vector regression

  • Mirhosseini, Ramin Tabatabaei
    • Structural Engineering and Mechanics / v.63 no.1 / pp.115-124 / 2017
  • In this paper, a different technique to predict the effects of soil-structure interaction (SSI) on the seismic response of building systems is investigated. The technique uses a machine learning algorithm called Support Vector Regression (SVR) with technical and analytical results as input features. Normally, the effects of SSI on the seismic response of existing building systems are identified from various types of large data sets, so predicting and estimating the seismic response of a building is a difficult task. By supplying the right experimental and/or numerical data to a machine learning regression method such as SVR, it is possible to approximate a real-valued function of the seismic response, make accurate design choices for the building system, and reduce the risk involved. The seismic response of both a single-degree-of-freedom system and a six-storey RC frame, which can represent a broad range of existing structures, is estimated using the proposed SVR model while allowing flexibility of the soil-foundation system and SSI effects. The results show that the response can be predicted while reducing the number of real data input features. Further, performance enhancement was achieved by optimizing the RBF kernel and SVR parameters through grid search.
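The final tuning step mentioned above, optimizing the RBF kernel and SVR parameters through grid search, can be sketched with scikit-learn's GridSearchCV. The grid values and the synthetic stand-in for the SSI features are illustrative assumptions, not those of the paper.

```python
# Grid search over (C, gamma) for an RBF-kernel SVR via cross-validation.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=(150, 2))  # stand-in for the SSI input features
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + rng.normal(scale=0.05, size=150)

grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1.0, 10.0, 100.0], "gamma": [0.1, 1.0, 10.0]},
    cv=3, scoring="neg_mean_squared_error",
).fit(X, y)
print(grid.best_params_, -grid.best_score_)
```

Note the contrast with the differential evolution approach in the first entry of this listing: grid search evaluates every combination, so its cost grows multiplicatively with each added parameter.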