• Title/Summary/Keyword: Kernel machines

Search Results: 86

Training of Support Vector Machines Using the Modified Kernel-adatron Algorithm (수정된 kernel-adatron 알고리즘에 의한 Support Vector Machines의 학습)

  • 조용현
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.04b
    • /
    • pp.469-471
    • /
    • 2000
  • In this paper, we propose a modified kernel-adatron algorithm that adds a momentum term and use it as a training method for Support Vector Machines. The aim is to retain the ease of implementation of the kernel-adatron algorithm while exploiting the advantage of momentum, which improves convergence speed by suppressing the oscillation that arises in gradient ascent as it converges to the optimal solution. The SVM trained with the proposed method was applied to the problem of classifying 200 actual cancer patients into two classes (early-stage and malignant). Simulation results confirm that, compared with an SVM trained by Campbell et al.'s kernel-adatron algorithm, the proposed method achieves better performance in both training time and classification rate on the test data.

  • PDF
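
The update rule described in the abstract (kernel-adatron gradient ascent with an added momentum term) can be sketched in Python. This is a toy reconstruction on synthetic data, not the paper's implementation; the RBF kernel and the learning-rate and momentum values are assumptions:

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=0.5):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_adatron(K, y, lr=0.05, momentum=0.9, epochs=200):
    """Kernel-adatron with a momentum term added to the
    gradient-ascent update on the multipliers alpha."""
    alpha = np.zeros(len(y))
    prev = np.zeros(len(y))                # previous update (momentum)
    for _ in range(epochs):
        z = y * (K @ (alpha * y))          # functional margins
        delta = lr * (1.0 - z) + momentum * prev
        alpha = np.maximum(0.0, alpha + delta)   # keep alpha >= 0
        prev = delta
    return alpha

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
K = rbf_kernel(X, X)
alpha = kernel_adatron(K, y)
pred = np.sign(K @ (alpha * y))
print("training accuracy:", (pred == y).mean())
```

The momentum term reuses the previous step's update, which damps the oscillation around the optimum that the abstract mentions.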

Truncated Kernel Projection Machine for Link Prediction

  • Huang, Liang;Li, Ruixuan;Chen, Hong
    • Journal of Computing Science and Engineering
    • /
    • v.10 no.2
    • /
    • pp.58-67
    • /
    • 2016
  • With the large amount of complex-network data increasingly available on the Web, link prediction has become a popular data-mining research field. The focus of this paper is a link-prediction task that can be formulated as a binary classification problem in complex networks. To solve this problem, a sparse classification algorithm called "Truncated Kernel Projection Machine," based on empirical-feature selection, is proposed. The proposed algorithm achieves sparse empirical-feature-based learning in a novel way that differs from regularized kernel-projection machines. It is more appealing than previous learning machines because it can be computed efficiently and implemented easily and stably during the link-prediction task. The algorithm is applied to link-prediction tasks in different complex networks, and several classification algorithms were investigated for comparison. The experimental results show that the proposed algorithm outperformed the compared algorithms in several key indices, with fewer test errors and greater stability.

Hyperparameter Selection for APC-ECOC

  • Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society
    • /
    • v.19 no.4
    • /
    • pp.1219-1231
    • /
    • 2008
  • The main objective of this paper is to develop a leave-one-out (LOO) bound for all-pairwise-comparison error-correcting output codes (APC-ECOC). To avoid using classifiers whose corresponding target values are 0 in APC-ECOC, and to avoid requiring pilot estimates, we developed a bound based on the mean misclassification probability (MMP). It can be used to tune kernel hyperparameters. Our empirical experiment using the kernel mean squared estimate (KMSE) as the binary classifier indicates that the bound leads to good estimates of the kernel hyperparameters.

  • PDF

A Decision Support Model for Sustainable Collaboration Level on Supply Chain Management using Support Vector Machines (Support Vector Machines을 이용한 공급사슬관리의 지속적 협업 수준에 대한 의사결정모델)

  • Lim, Se-Hun
    • Journal of Distribution Research
    • /
    • v.10 no.3
    • /
    • pp.1-14
    • /
    • 2005
  • Controlling performance and Sustainable Collaboration (SC) is important for successful Supply Chain Management (SCM). This research developed a control model that analyzes SCM performance based on a Balanced Scorecard (BSC) and SC using Support Vector Machines (SVM). 108 SCM specialists completed the questionnaires, and we analyzed the experimental data set using SVM. This research compared the forecasting accuracy of an SCM SC across four types of SVM kernels: (1) linear, (2) polynomial, (3) Radial Basis Function (RBF), and (4) sigmoid (linear > RBF > sigmoid > polynomial). This study then compared the prediction performance of the SVM linear kernel with an Artificial Neural Network (ANN). The findings show that the SVM linear kernel is the most effective for forecasting an SCM SC; thus the SVM linear kernel provides a promising alternative for assessing an SC control level. A company pursuing SCM can use the SC information from the SVM model.

  • PDF
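
The four kernels compared above can be illustrated with a small numpy sketch. Since the paper's SVM setup and SCM questionnaire data are not available here, a kernel ridge classifier on synthetic data stands in for the trained SVM, and all parameter values are assumptions:

```python
import numpy as np

# The four kernel functions (parameter values are illustrative).
def linear_k(X, Z):
    return X @ Z.T

def poly_k(X, Z, degree=3, coef0=1.0):
    return (X @ Z.T + coef0) ** degree

def rbf_k(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sigmoid_k(X, Z, scale=0.1, coef0=0.0):
    return np.tanh(scale * (X @ Z.T) + coef0)

def fit_predict(kernel, Xtr, ytr, Xte, lam=1e-2):
    # Kernel ridge classifier, a lightweight stand-in for an SVM:
    # alpha = (K + lam*I)^-1 y, predict with sign(K_test @ alpha).
    K = kernel(Xtr, Xtr)
    alpha = np.linalg.solve(K + lam * np.eye(len(ytr)), ytr)
    return np.sign(kernel(Xte, Xtr) @ alpha)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.6, (40, 2)), rng.normal(1, 0.6, (40, 2))])
y = np.array([-1.0] * 40 + [1.0] * 40)
idx = rng.permutation(80)
tr, te = idx[:60], idx[60:]

accs = {}
for name, k in [("linear", linear_k), ("polynomial", poly_k),
                ("RBF", rbf_k), ("sigmoid", sigmoid_k)]:
    accs[name] = (fit_predict(k, X[tr], y[tr], X[te]) == y[te]).mean()
    print(f"{name:10s} accuracy = {accs[name]:.2f}")
```

Which kernel wins depends on the data; the paper's ordering (linear > RBF > sigmoid > polynomial) is specific to its SCM data set.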

Two dimensional reduction technique of Support Vector Machines for Bankruptcy Prediction

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae;Lee, Ki-Chun
Proceedings of the Korea Society of Management Information Systems Conference
    • /
    • 2007.06a
    • /
    • pp.608-613
    • /
    • 2007
  • Prediction of corporate bankruptcies has long been an important topic and has been studied extensively in the finance and management literature because it is an essential basis for the risk management of financial institutions. Recently, support vector machines (SVMs) have become popular as a tool for bankruptcy prediction because they use a risk function consisting of the empirical error and a regularization term derived from the structural risk minimization principle. In addition, they do not require huge training samples and have little possibility of overfitting. However, in order to use SVM, a user must determine several factors, such as the parameters of a kernel function, an appropriate feature subset, and a proper instance subset, by heuristics, which hinders accurate prediction results. In this study, we propose a novel hybrid SVM classifier with simultaneous optimization of feature subsets, instance subsets, and kernel parameters. This study introduces genetic algorithms (GAs) to optimize the feature selection, instance selection, and kernel parameters simultaneously. Our study applies the proposed model to a real-world case of bankruptcy prediction. Experimental results show that the prediction accuracy of conventional SVM may be improved significantly by using our model.

  • PDF

Comparison of Feature Selection Methods in Support Vector Machines (지지벡터기계의 변수 선택방법 비교)

  • Kim, Kwangsu;Park, Changyi
    • The Korean Journal of Applied Statistics
    • /
    • v.26 no.1
    • /
    • pp.131-139
    • /
    • 2013
  • Support vector machines (SVM) may perform poorly in the presence of noise variables; in addition, it is difficult to identify the importance of each variable in the resulting classifier. Feature selection can improve both the interpretability and the accuracy of SVM. Most existing studies concern feature selection in the linear SVM through penalty functions yielding sparse solutions. Note that in practice one usually adopts nonlinear kernels for classification accuracy, so feature selection is still desirable for nonlinear SVMs. In this paper, we compare the performance of nonlinear feature selection methods such as component selection and smoothing operator (COSSO) and kernel iterative feature extraction (KNIFE) on simulated and real data sets.

Support vector quantile regression for autoregressive data

  • Hwang, Hyungtae
    • Journal of the Korean Data and Information Science Society
    • /
    • v.25 no.6
    • /
    • pp.1539-1547
    • /
    • 2014
  • In this paper we apply the autoregressive process to nonlinear quantile regression in order to infer nonlinear quantile regression models for autocorrelated data. We propose a kernel method for autoregressive data that estimates the nonlinear quantile regression function by kernel machines. Artificial and real examples are provided to demonstrate the usefulness of the proposed method for estimating the quantile regression function in the presence of autocorrelation in the data.
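
The idea of estimating quantiles with a kernel machine on autoregressive data can be sketched as follows. This is a generic subgradient method on the pinball (check) loss over an RBF kernel expansion, assumed for illustration; it is not the paper's estimator, and the AR(1) data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy AR(1) series y_t = 0.8 * y_{t-1} + e_t; the lagged value is the input.
T = 100
y_series = np.zeros(T)
for t in range(1, T):
    y_series[t] = 0.8 * y_series[t - 1] + rng.normal(0, 0.3)
X = y_series[:-1].reshape(-1, 1)
y = y_series[1:]

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-1.0 * d2)                            # RBF kernel matrix

def kernel_quantile_fit(K, y, tau, lr=0.1, lam=1e-3, epochs=1000):
    # Subgradient descent on the pinball loss for f = K @ alpha,
    # with a diminishing step size and a small ridge penalty.
    alpha = np.zeros(len(y))
    for t in range(epochs):
        r = y - K @ alpha                        # residuals
        g = np.where(r > 0, -tau, 1.0 - tau)     # subgradient of loss wrt f
        step = lr / np.sqrt(t + 1.0)
        alpha -= step * (K @ g / len(y) + lam * alpha)
    return alpha

a_lo = kernel_quantile_fit(K, y, tau=0.1)        # lower quantile curve
a_hi = kernel_quantile_fit(K, y, tau=0.9)        # upper quantile curve
band = ((y >= K @ a_lo) & (y <= K @ a_hi)).mean()
print("fraction of points inside the 10%-90% band:", round(band, 2))
```

The pinball loss penalizes under- and over-prediction asymmetrically by tau, which is what makes the fitted function track a conditional quantile rather than the conditional mean.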

Subtype classification of Human Breast Cancer via Kernel methods and Pattern Analysis of Clinical Outcome over the feature space (Kernel Methods를 이용한 Human Breast Cancer의 subtype의 분류 및 Feature space에서 Clinical Outcome의 pattern 분석)

  • Kim, Hey-Jin;Park, Seungjin;Bang, Sung-Uang
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04c
    • /
    • pp.175-177
    • /
    • 2003
  • This paper addresses the problem of classifying human breast cancer into its subtypes. A main ingredient in our approach is kernel machines such as the support vector machine (SVM), kernel principal component analysis (KPCA), and kernel partial least squares (KPLS). In the task of breast cancer classification, we employ both SVM and KPLS and compare their results. In addition to this classification, we also analyze the patterns of clinical outcomes in the feature space. In order to visualize the clinical outcomes in a low-dimensional space, both KPCA and KPLS are used. It turns out that these methods are useful for identifying correlations between clinical outcomes and the nonlinearly projected expression profiles in the low-dimensional feature space.

  • PDF
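
The KPCA step used above for low-dimensional visualization can be sketched in numpy. The data here are a synthetic stand-in for the expression profiles; the RBF kernel and its width are assumptions:

```python
import numpy as np

def kernel_pca(K, n_components=2):
    # Double-center the kernel matrix, then project onto the leading
    # eigenvectors (scaled so projections have unit-eigenvalue norm).
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)              # ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]       # descending order
    comp = vecs[:, :n_components] / np.sqrt(vals[:n_components])
    return Kc @ comp                             # low-dimensional coordinates

rng = np.random.default_rng(4)
# Two synthetic classes in 5 dimensions (stand-in for subtypes).
X = np.vstack([rng.normal(-1, 0.5, (25, 5)), rng.normal(1, 0.5, (25, 5))])
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.1 * d2)                            # RBF kernel matrix
Z = kernel_pca(K)                                # 50 x 2 embedding
print("embedding shape:", Z.shape)
```

Plotting the two columns of `Z` colored by class is the kind of feature-space visualization the abstract describes.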

An Early Warning Model for Student Status Based on Genetic Algorithm-Optimized Radial Basis Kernel Support Vector Machine

  • Hui Li;Qixuan Huang;Chao Wang
    • Journal of Information Processing Systems
    • /
    • v.20 no.2
    • /
    • pp.263-272
    • /
    • 2024
  • A genetic-algorithm-optimized model, GA-SVM, is proposed to provide early warnings about university students' status. This model improves the predictive performance of support vector machines. The genetic algorithm is used to train the hyperparameters, adjusting the kernel penalty factor C and the kernel parameter gamma to optimize the support vector machine model, and converges rapidly to the optimal solution. The model was trained on open-source datasets and validated through comparisons with random forest, backpropagation neural network, and GA-SVM models. The test results show that the genetic-algorithm-optimized radial basis kernel support vector machine model, GA-SVM, obtains higher accuracy when used for early warning in university learning.

Multi-User Detection using Support Vector Machines

  • Lee, Jung-Sik;Lee, Jae-Wan;Hwang, Jae-Jeong;Chung, Kyung-Taek
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.12C
    • /
    • pp.1177-1183
    • /
    • 2009
  • In this paper, support vector machines (SVM) are applied to the multi-user detector (MUD) for a direct sequence (DS)-CDMA system. This work analyzes the performance of an SVM-based multi-user detector with several kernel functions: linear, sigmoid, and Gaussian. The basic idea in SVM training is to select the proper number of support vectors by maximizing the margin between two different classes. In simulation studies, the performance of the SVM-based MUD with different kernel functions is compared in terms of the number of selected support vectors, the corresponding decision boundaries, and finally the bit error rate. It was found that the controlling parameters in SVM training have an effect, to some degree, on the SVM-based MUD with both sigmoid and Gaussian kernels. It is shown that the SVM-based MUD with a Gaussian kernel outperforms those with other kernels.