• Title/Summary/Keyword: SVM algorithm


Combining genetic algorithms and support vector machines for bankruptcy prediction

  • Min, Sung-Hwan; Lee, Ju-Min; Han, In-Goo
    • Proceedings of the Korea Intelligent Information System Society Conference / 2004.11a / pp.179-188 / 2004
  • Bankruptcy prediction is an important and widely studied topic since it can have a significant impact on bank lending decisions and profitability. Recently, the support vector machine (SVM) has been applied to the problem of bankruptcy prediction. The SVM-based method has been compared with other methods such as neural networks and logistic regression and has shown good results. The genetic algorithm (GA) has been increasingly applied in conjunction with other AI techniques such as neural networks and case-based reasoning (CBR). However, few studies have dealt with the integration of GA and SVM, though there is great potential for useful applications in this area. This study proposes methods for improving SVM performance in two respects: feature subset selection and parameter optimization. GA is used to optimize both the feature subset and the parameters of SVM simultaneously for bankruptcy prediction.
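
The abstract outlines the general GA-plus-SVM recipe without code. As a rough illustration only, the sketch below wraps scikit-learn's SVC in a small hand-rolled GA whose chromosome concatenates a binary feature mask with log2(C) and log2(gamma); the dataset, GA settings, and operators are placeholders rather than the authors' actual setup.

```python
# Hypothetical sketch: a GA jointly searching an SVM feature mask and (C, gamma).
# Dataset, GA settings, and genetic operators are illustrative placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)   # stand-in for the bankruptcy data
n_feat = X.shape[1]

def fitness(chrom):
    mask = chrom[:n_feat].astype(bool)
    if not mask.any():
        return 0.0
    clf = SVC(C=2.0 ** chrom[n_feat], gamma=2.0 ** chrom[n_feat + 1])
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def random_chrom():
    # feature bits followed by log2(C) and log2(gamma)
    return np.concatenate([rng.integers(0, 2, n_feat), rng.uniform(-5, 5, 2)])

pop = [random_chrom() for _ in range(20)]
for gen in range(10):                                      # small budget for illustration
    scores = np.array([fitness(c) for c in pop])
    parents = [pop[i] for i in np.argsort(scores)[-10:]]   # truncation selection
    children = []
    while len(children) < 10:
        a, b = rng.choice(len(parents), 2, replace=False)
        cut = rng.integers(1, n_feat + 1)                  # one-point crossover
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        flip = rng.random(n_feat) < 0.05                   # bit-flip mutation on the mask
        child[:n_feat][flip] = 1 - child[:n_feat][flip]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best CV accuracy:", round(fitness(best), 3))
```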


Multiclass SVM Model with Order Information

  • Ahn, Hyun-Chul; Kim, Kyoung-Jae
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.4 / pp.331-334 / 2006
  • The original Support Vector Machines (SVMs) by Vapnik were designed for binary classification problems. Some researchers have tried to extend the original SVM to multiclass classification. However, their studies have focused only on classifying samples into nominal categories. This study proposes a novel multiclass SVM model for handling ordinal multiple classes. Our suggested model may use fewer classifiers yet predict more accurately because it utilizes additional hidden information, the order of the classes. To validate our model, we apply it to a real-world bond rating case. We compare the results of the model to those of statistical and typical machine learning techniques, and to another multiclass SVM algorithm. The results show that the proposed model may improve classification performance in comparison to other typical multiclass classification algorithms.
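
The paper's exact multiclass formulation is not reproduced in the abstract, so the sketch below shows one well-known way to exploit class order with binary SVMs, a Frank-and-Hall-style decomposition that trains K-1 "is the class greater than k?" classifiers instead of the usual pairwise set. It conveys the idea of using fewer classifiers plus order information; it is not the authors' specific model.

```python
# Illustrative sketch (not the authors' exact model): ordinal classification with
# K-1 "is the class greater than k?" binary SVMs (Frank & Hall style decomposition).
import numpy as np
from sklearn.svm import SVC

class OrdinalSVM:
    def __init__(self, **svm_kwargs):
        self.svm_kwargs = svm_kwargs

    def fit(self, X, y):
        self.classes_ = np.sort(np.unique(y))
        # one probabilistic SVM per threshold between consecutive ordered classes
        self.models_ = [SVC(probability=True, **self.svm_kwargs).fit(X, (y > k).astype(int))
                        for k in self.classes_[:-1]]
        return self

    def predict(self, X):
        # P(y > k) for each threshold, converted to per-class probabilities
        p_gt = np.column_stack([m.predict_proba(X)[:, 1] for m in self.models_])
        p_gt = np.hstack([np.ones((len(X), 1)), p_gt, np.zeros((len(X), 1))])
        probs = p_gt[:, :-1] - p_gt[:, 1:]            # P(y = class_i)
        return self.classes_[np.argmax(probs, axis=1)]

# toy usage with ordered labels 0 < 1 < 2 (e.g., bond rating grades)
X = np.random.randn(300, 4)
y = np.digitize(X[:, 0], [-0.5, 0.5])                 # ordered by the first feature
print((OrdinalSVM(kernel="rbf").fit(X, y).predict(X) == y).mean())
```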

A Machine Vision System for Inspection of Car Sunroof Using SVM Algorithm (SVM 학습 알고리즘을 이용한 자동차 썬루프 장치의 볼트 유무 검사 장비)

  • Kim, Giseok; Lee, Saac; Cho, Jae-Soo
    • Proceedings of the Korea Information Processing Society Conference / 2013.05a / pp.289-292 / 2013
  • This paper concerns an automotive parts inspection system that uses the SVM (Support Vector Machine) learning algorithm to check for the presence of bolts in a car sunroof assembly. Automated systems demand high precision and, for productivity, fast processing speed. To this end, we develop an inspection algorithm based on a linear SVM that checks whether the bolts of the sunroof assembly are present. Although SVM is a classification algorithm, it is made to act as a detector by classifying every window within the ROI (Region-Of-Interest). To obtain negative samples beyond the bolt-present and bolt-absent cases, a variety of negative samples are extracted from the area around the target object. As a result, the system can determine the presence of a bolt even when the object deviates somewhat from its expected position, and it can also locate the bolt; experimental results verify that the processing speed reaches the level required by automated systems.
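
As an illustration of the sliding-window scheme described above, the sketch below trains a linear SVM on positive and negative patches and slides it over every window of an ROI so that the classifier doubles as a detector. The window size, stride, feature function, and random "patches" are placeholders, not the system's actual parameters or data.

```python
# Rough sketch of the approach described above: a linear SVM applied to every
# window in an ROI so it acts as a bolt detector. All data here is synthetic.
import numpy as np
from sklearn.svm import LinearSVC

WIN = 16        # window side length in pixels (assumed)
STRIDE = 4

def features(patch):
    # stand-in for real image features (e.g., gradients/HOG)
    return patch.reshape(-1).astype(float) / 255.0

# pos = patches containing a bolt; neg = bolt-absent and surrounding background patches
pos = [np.random.randint(0, 256, (WIN, WIN)) for _ in range(50)]
neg = [np.random.randint(0, 256, (WIN, WIN)) for _ in range(150)]
X = np.array([features(p) for p in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))
clf = LinearSVC().fit(X, y)

def detect(roi):
    """Slide the classifier over the ROI; return the best-scoring window."""
    best, best_score = None, -np.inf
    for r in range(0, roi.shape[0] - WIN + 1, STRIDE):
        for c in range(0, roi.shape[1] - WIN + 1, STRIDE):
            score = clf.decision_function([features(roi[r:r + WIN, c:c + WIN])])[0]
            if score > best_score:
                best, best_score = (r, c), score
    return best if best_score > 0 else None   # None -> no bolt found

print(detect(np.random.randint(0, 256, (64, 64))))
```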

Comparison Thai Word Sense Disambiguation Method

  • Modhiran, Teerapong; Kruatrachue, Boontee; Supnithi, Thepchai
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2004.08a / pp.1307-1312 / 2004
  • Word sense disambiguation is one of the most important problems in natural language processing research topics such as information retrieval and machine translation. Many approaches can be employed to resolve word ambiguity with a reasonable degree of accuracy; these strategies are knowledge-based, corpus-based, and hybrid-based. This paper focuses on the corpus-based strategy. The purpose of this paper is to compare three well-known machine learning techniques, SNoW, SVM and Naive Bayes, for word sense disambiguation in the Thai language. Ten ambiguous words are selected for testing with word and POS features. The results show that the SVM algorithm gives the best results in solving Thai WSD, with an accuracy rate of approximately 83-96%.
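
The comparison itself is straightforward to reproduce in outline. The sketch below pits a linear-kernel SVM against Naive Bayes on bag-of-context-word features for a single ambiguous word; the tiny English examples stand in for the Thai corpus and POS features, and SNoW is omitted since it has no standard scikit-learn implementation.

```python
# Illustrative comparison in the spirit of the paper: SVM vs. Naive Bayes on
# bag-of-context-word features for word sense disambiguation (toy data only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# context sentences for the ambiguous word "bank", labelled by sense
contexts = ["deposit money at the bank", "the bank approved the loan",
            "fish near the river bank", "sat on the grassy bank of the stream",
            "the bank raised interest rates", "the muddy bank of the river"]
senses = ["finance", "finance", "river", "river", "finance", "river"]

for name, clf in [("SVM", SVC(kernel="linear")), ("NaiveBayes", MultinomialNB())]:
    model = make_pipeline(CountVectorizer(), clf).fit(contexts, senses)
    print(name, model.predict(["she opened an account at the bank"]))
```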


Improvement of rotor flux estimation performance of induction motor using Support Vector Machine $\epsilon$-insensitive Regression Method (Support Vector Machine $\epsilon$-insensitive Regression방법을 이용한 유도전동기의 회전자 자속추정 성능개선)

  • Han, Dong-Chang; Baek, Un-Jae; Kim, Seong-Rak; Park, Ju-Hyeon; Lee, Seok-Gyu; Park, Jeong-Il
    • Proceedings of the KIEE Conference / 2003.11b / pp.43-46 / 2003
  • In this paper, a novel rotor flux estimation method for an induction motor using a support vector machine (SVM) is presented. Two well-known flux models, based respectively on voltage and current, are needed to estimate the rotor flux of an induction motor. The theory of the SVM algorithm is based on statistical learning theory, and training the SVM leads to a quadratic programming (QP) problem. The proposed SVM rotor flux estimator guarantees improved performance in both the transient and the steady state in spite of parameter variations. The validity and usefulness of the proposed algorithm are thoroughly verified through numerical simulation.
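
The epsilon-insensitive regression at the heart of the estimator can be sketched with scikit-learn's SVR. The noisy sine wave below is only a stand-in for the rotor-flux training signal, and the C and epsilon values are illustrative.

```python
# Minimal sketch of epsilon-insensitive SVM regression as used conceptually above.
# The sine-wave target is only a stand-in for rotor-flux training data.
import numpy as np
from sklearn.svm import SVR

t = np.linspace(0, 1, 200).reshape(-1, 1)                  # "time" samples
flux = np.sin(2 * np.pi * 5 * t).ravel() + 0.05 * np.random.randn(200)

# epsilon defines the insensitive tube; errors inside it are not penalised
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(t, flux)
print("support vectors:", len(model.support_),
      "mean fit error:", np.abs(model.predict(t) - flux).mean())
```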


Support Vector Machine based Cluster Merging (Support Vector Machines 기반의 클러스터 결합 기법)

  • Choi, Byung-In; Rhee, Frank Chung-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.3 / pp.369-374 / 2004
  • A cluster merging algorithm is proposed that merges the convex clusters produced by the Fuzzy Convex Clustering (FCC) method into non-convex clusters. This is achieved by a fast and reliable distance measure between two convex clusters based on Support Vector Machines (SVM), which improves accuracy and speed over existing conventional methods. In doing so, the number of clusters can be reduced without losing the representation of the data. In this paper, results for several data sets are given to show the validity of our distance measure and algorithm.
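
The abstract does not spell out the SVM-based distance, so the sketch below shows one plausible realisation: fit a soft-margin linear SVM separating the two clusters and take the resulting margin width 2/||w|| as their distance, merging clusters whose distance falls below a threshold. This is an assumption for illustration; the paper's actual measure may differ.

```python
# Hedged sketch of one way to realise an SVM-based inter-cluster distance:
# fit a soft-margin linear SVM separating two clusters and use the margin 2/||w||.
import numpy as np
from sklearn.svm import SVC

def svm_cluster_distance(A, B, C=10.0):
    X = np.vstack([A, B])
    y = np.r_[np.zeros(len(A)), np.ones(len(B))]
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_.ravel()
    return 2.0 / np.linalg.norm(w)            # margin width between the clusters

rng = np.random.default_rng(1)
near = rng.normal([0, 0], 0.3, (50, 2)), rng.normal([1, 0], 0.3, (50, 2))
far = rng.normal([0, 0], 0.3, (50, 2)), rng.normal([5, 0], 0.3, (50, 2))
print(svm_cluster_distance(*near), "<", svm_cluster_distance(*far))
# clusters whose distance falls below a chosen threshold would be merged
```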

A Study on Performance Comparison of Machine Learning Algorithm for Scaffold Defect Classification (인공지지체 불량 분류를 위한 기계 학습 알고리즘 성능 비교에 관한 연구)

  • Lee, Song-Yeon; Huh, Yong Jeong
    • Journal of the Semiconductor & Display Technology / v.19 no.3 / pp.77-81 / 2020
  • In this paper, we build scaffold defect classification models using machine learning. Features are extracted from scaffold exterior images collected with a USB camera, and three machine learning algorithms, SVM, KNN, and MLP, are trained on the extracted features using a training dataset. The resulting defect classification models are then evaluated on a test dataset and their performance is quantified. The SVM model achieves 95% accuracy, the best performance among the three.
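
Since the camera images and extracted features are not available, the sketch below reproduces only the comparison workflow: train SVM, KNN, and MLP on the same training split and compare test accuracy, with scikit-learn's digits data standing in for the scaffold features.

```python
# Sketch of the comparison workflow described above, with a stand-in dataset
# in place of the USB-camera scaffold features used in the paper.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)                     # placeholder features/labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"SVM": SVC(), "KNN": KNeighborsClassifier(),
          "MLP": MLPClassifier(max_iter=1000, random_state=0)}
for name, model in models.items():
    print(name, round(model.fit(X_tr, y_tr).score(X_te, y_te), 3))
```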

Behavior Learning and Evolution of Swarm Robot System using Q-learning and Cascade SVM (Q-learning과 Cascade SVM을 이용한 군집로봇의 행동학습 및 진화)

  • Seo, Sang-Wook; Yang, Hyun-Chang; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.2 / pp.279-284 / 2009
  • In swarm robot systems, each robot must behave on its own according to its state and environment and, if necessary, must cooperate with other robots in order to carry out a given task. Therefore, it is essential that each robot has both learning and evolution abilities to adapt to dynamic environments. In this paper, a reinforcement learning method using multiple SVMs based on structural risk minimization, combined with distributed genetic algorithms, is proposed for behavior learning and evolution of collective autonomous mobile robots. Through a distributed genetic algorithm that exchanges chromosomes acquired under different environments by communication, each robot can improve its behavioral ability. In particular, in order to improve the performance of evolution, selective crossover based on the characteristics of Cascade-SVM-based reinforcement learning is adopted in this paper.
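
As a reference point for the reinforcement-learning component only, the sketch below shows a minimal tabular Q-learning update loop on a toy environment; the paper's Cascade-SVM value approximation and the distributed GA with selective crossover are not reproduced here.

```python
# Minimal tabular Q-learning loop on a toy environment, shown only as a reference
# for the reinforcement-learning component discussed above.
import numpy as np

n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

def step(state, action):
    # toy environment: acting "2" in the last state yields reward 1
    reward = 1.0 if (state == n_states - 1 and action == 2) else 0.0
    return rng.integers(n_states), reward

state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))
```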

Combining Feature Fusion and Decision Fusion in Multimodal Biometric Authentication (다중 바이오 인증에서 특징 융합과 결정 융합의 결합)

  • Lee, Kyung-Hee
    • Journal of the Korea Institute of Information Security & Cryptology / v.20 no.5 / pp.133-138 / 2010
  • We present a new multimodal biometric authentication method that performs both feature-level fusion and decision-level fusion. After generating support vector machines for the new features made by integrating face and voice features, the final authentication decision is made by integrating the decisions of the face SVM classifier, the voice SVM classifier, and the integrated-features SVM classifier. We justify our proposal by comparing our method with traditional ones in experiments on the XM2VTS multimodal database. The experiments show that our multilevel fusion algorithm gives a higher recognition rate than the existing schemes.
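
A rough sketch of the two-level fusion described above: a face SVM, a voice SVM, and an SVM on the concatenated (feature-level fused) vector are trained separately, and their genuine-class scores are combined at decision level. Random vectors stand in for real biometric features, and the sum-of-scores rule with a fixed threshold is an illustrative assumption rather than the paper's exact decision-fusion rule.

```python
# Hedged sketch of two-level fusion: face SVM + voice SVM + fused-features SVM,
# combined at decision level by summing genuine-class scores (toy data only).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                                  # 1 = genuine, 0 = impostor
face = rng.normal(0, 1, (n, 20)) + y[:, None] * 0.8        # toy "face features"
voice = rng.normal(0, 1, (n, 12)) + y[:, None] * 0.8       # toy "voice features"
fused = np.hstack([face, voice])                           # feature-level fusion

clf_face = SVC(probability=True).fit(face, y)
clf_voice = SVC(probability=True).fit(voice, y)
clf_fused = SVC(probability=True).fit(fused, y)

def authenticate(f, v, threshold=1.5):
    # decision-level fusion: sum the three genuine-class scores
    score = (clf_face.predict_proba([f])[0, 1]
             + clf_voice.predict_proba([v])[0, 1]
             + clf_fused.predict_proba([np.concatenate([f, v])])[0, 1])
    return score > threshold

print(authenticate(face[0], voice[0]))
```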

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae; Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.241-254 / 2011
  • Financial time-series forecasting is one of the most important issues because it is essential for the risk management of financial institutions. Therefore, researchers have tried to forecast financial time-series using various data mining techniques such as regression, artificial neural networks, decision trees, k-nearest neighbor, etc. Recently, support vector machines (SVMs) have become popular in this research area because they do not require huge training data and have a low possibility of overfitting. However, a user must determine several design factors by heuristics in order to use SVM: the selection of an appropriate kernel function and its parameters, together with proper feature subset selection, are major design factors of SVM. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of SVM by eliminating irrelevant and distorting training instances. Nonetheless, few studies have applied instance selection to SVM, especially in the domain of stock market prediction. Instance selection tries to choose a proper instance subset from the original training data; it may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and parameter optimization simultaneously. We call the model ISVM (SVM with Instance selection). Experiments on stock market data are implemented using ISVM. The GA searches for optimal or near-optimal values of the kernel parameters and the relevant instances for the SVM. This requires two sets of parameters in the GA chromosomes: the codes for the kernel parameters and for instance selection. For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1; as the stopping condition, 50 generations are permitted. The application data used in this study consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI), for a total of 2218 trading days. We separate the whole data into three subsets: training, test, and hold-out data sets, containing 1056, 581, and 581 observations respectively. This study compares ISVM to several comparative models including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with optimized parameters (PSVM). In particular, PSVM uses kernel parameters optimized by the genetic algorithm. The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data. For ISVM, only 556 of the 1056 original training instances are used to produce this result. In addition, a two-sample test for proportions is used to examine whether ISVM significantly outperforms the other comparative models. The results indicate that ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM and PSVM at the 5% statistical significance level.
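
A hedged sketch of the ISVM encoding follows, paralleling the GA sketch given earlier for feature selection: here the chromosome encodes a training-instance mask together with the RBF kernel parameters, and fitness is the validation accuracy of an SVM trained only on the selected instances. The evolutionary loop itself (population 50, crossover 0.7, mutation 0.1, 50 generations, as stated in the abstract) would follow the same pattern as that earlier sketch; the dataset and encoding below are placeholders, not the authors' implementation.

```python
# Hedged sketch of the ISVM chromosome: instance-selection bits plus kernel
# parameters, scored by validation accuracy. Dataset and encoding are placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)       # stand-in for the KOSPI indicator data
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
n = len(X_tr)

def random_chrom():
    # instance-selection bits followed by log2(C) and log2(gamma)
    return np.concatenate([rng.integers(0, 2, n), rng.uniform(-5, 5, 2)])

def fitness(chrom):
    mask = chrom[:n].astype(bool)
    if mask.sum() < 10:                           # too few instances to train on
        return 0.0
    clf = SVC(C=2.0 ** chrom[n], gamma=2.0 ** chrom[n + 1])
    return clf.fit(X_tr[mask], y_tr[mask]).score(X_val, y_val)

# a GA maximising this fitness (selection, crossover at 0.7, mutation at 0.1,
# population 50, 50 generations) would mirror the earlier feature-selection sketch
chrom = random_chrom()
print("instances kept:", int(chrom[:n].sum()), "fitness:", round(fitness(chrom), 3))
```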