• Title/Summary/Keyword: SVM parameter

Predicting Defect-Prone Software Module Using GA-SVM (GA-SVM을 이용한 결함 경향이 있는 소프트웨어 모듈 예측)

  • Kim, Young-Ok;Kwon, Ki-Tae
    • KIPS Transactions on Software and Data Engineering / v.2 no.1 / pp.1-6 / 2013
  • For predicting defect-prone modules in software, the SVM classifier showed good performance in previous research. However, it has the disadvantages that the SVM parameters must be chosen differently for every kernel, and that the algorithm must be run iteratively to obtain prediction results for each changed parameter. Therefore, we find these parameters using a genetic algorithm and compare the result with classification by the backpropagation algorithm. As a result, the GA-SVM model shows better performance.
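
A minimal sketch of the GA-SVM idea described in this abstract: a small genetic algorithm searches over the RBF-SVM parameters (C, gamma), using cross-validated accuracy as the fitness. The encoding, GA settings, and synthetic data below are illustrative assumptions, not the authors' setup (which targeted software defect data and compared against backpropagation).

```python
# Illustrative GA-over-SVM-parameters sketch (assumptions: scikit-learn,
# synthetic data, selection + Gaussian mutation only; not the paper's exact GA).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def fitness(individual):
    """Cross-validated accuracy for one [log10 C, log10 gamma] individual."""
    C, gamma = 10.0 ** individual
    return cross_val_score(SVC(kernel="rbf", C=C, gamma=gamma), X, y, cv=5).mean()

# Population of [log10(C), log10(gamma)] pairs drawn from a broad range.
pop = rng.uniform(low=[-2, -4], high=[3, 1], size=(20, 2))
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]                   # keep the best half
    children = parents[rng.integers(0, 10, size=10)].copy()   # clone random parents
    children += rng.normal(scale=0.3, size=children.shape)    # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best C = %.3g, gamma = %.3g" % tuple(10.0 ** best))
```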

Analysis of target classification performances of active sonar returns depending on parameter values of SVM kernel functions (SVM 커널함수의 파라미터 값에 따른 능동소나 표적신호의 식별 성능 분석)

  • Park, Jeonghyun;Hwang, Chansik;Bae, Keunsung
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.5 / pp.1083-1088 / 2013
  • Detection and classification of undersea mines in shallow water using active sonar returns is a difficult task due to the complexity of the underwater environment. The support vector machine (SVM) is a binary classifier well known for providing a globally optimal solution. In this paper, classification experiments on sonar returns from mine-like and non-mine-like objects are carried out with an SVM, and the classification performance is analyzed and discussed as a function of the parameter values of the SVM kernel functions.
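
A short sketch of the kind of kernel-parameter study this abstract describes: sweep the RBF kernel width gamma and report cross-validated accuracy. The two-class synthetic data and the gamma grid are assumptions standing in for the active sonar returns.

```python
# Illustrative gamma sweep for an RBF SVM (assumptions: scikit-learn, toy data).
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
for gamma in [0.01, 0.1, 1.0, 10.0, 100.0]:
    acc = cross_val_score(SVC(kernel="rbf", gamma=gamma, C=1.0), X, y, cv=5).mean()
    print(f"gamma={gamma:6.2f}  cv accuracy={acc:.3f}")
```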

Robust Feature Parameter for Implementation of Speech Recognizer Using Support Vector Machines (SVM음성인식기 구현을 위한 강인한 특징 파라메터)

  • 김창근;박정원;허강인
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.3 / pp.195-200 / 2004
  • In this paper we propose an effective speech recognizer through two recognition experiments. In general, SVM is a classification method that separates two classes by finding an arbitrary nonlinear boundary in a vector space, and it achieves high classification performance even with a small amount of training data. In this paper we compare the recognition performance of HMM and SVM as the amount of training data varies, and we investigate the recognition performance of each feature parameter while transforming the MFCC feature space using Independent Component Analysis (ICA) and Principal Component Analysis (PCA). The experimental results show that SVM outperforms HMM when training data are scarce, and that the ICA feature parameter gives the highest recognition performance because of its superior linear separability.
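
A rough sketch of the feature-space comparison described above: PCA- and ICA-transformed features are fed to an SVM and compared by cross-validated accuracy. Random classification data stands in for the MFCC frames used in the paper, and the component counts are arbitrary choices.

```python
# Illustrative PCA/ICA-vs-raw feature comparison for an SVM classifier.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, FastICA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in for MFCC feature vectors (13-dimensional frames).
X, y = make_classification(n_samples=500, n_features=13, n_informative=8, random_state=1)

models = {
    "raw": SVC(kernel="rbf"),
    "PCA": make_pipeline(PCA(n_components=8), SVC(kernel="rbf")),
    "ICA": make_pipeline(FastICA(n_components=8, random_state=1), SVC(kernel="rbf")),
}
for name, model in models.items():
    print(f"{name:4s} cv accuracy = {cross_val_score(model, X, y, cv=5).mean():.3f}")
```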

Fine-tuning SVM for Enhancing Speech/Music Classification (SVM의 미세조정을 통한 음성/음악 분류 성능향상)

  • Lim, Chung-Soo;Song, Ji-Hyun;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.2 / pp.141-148 / 2011
  • Support vector machines have been extensively studied and utilized in the pattern recognition area for years. One interesting application of this technique is speech/music classification for a standardized codec such as the 3GPP2 selectable mode vocoder. In this paper, we propose a novel approach that improves the speech/music classification of support vector machines. While conventional support vector machine optimization techniques apply during the training phase, the proposed technique can be adopted in the classification phase. In this regard, the proposed approach can be developed and employed in parallel with conventional optimizations, resulting in a synergistic boost in classification performance. We first analyze the impact of the kernel width parameter on the classifications made by support vector machines. From this analysis, we observe that the outputs of support vector machines can be fine-tuned with the kernel width parameter. To make the most of this capability, we identify a strong correlation among neighboring input frames and use this correlation information as a guide for adjusting the kernel width parameter. According to the experimental results, the proposed algorithm has the potential to improve the performance of support vector machines.
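
One plausible reading of "fine-tuning in the classification phase" is to re-evaluate a trained RBF SVM's decision function with a slightly perturbed kernel width, reusing the stored support vectors and dual coefficients. The sketch below only illustrates that mechanism; the frame-correlation guidance from the paper is not reproduced, and the data and width adjustment are assumptions.

```python
# Illustrative re-evaluation of an RBF SVM decision function with an adjusted
# kernel width at classification time (not the authors' exact procedure).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
gamma0 = 0.1
clf = SVC(kernel="rbf", gamma=gamma0).fit(X, y)

def decision_with_gamma(clf, X_new, gamma):
    """Recompute decision values from the stored support vectors and dual
    coefficients, but with a different kernel width gamma."""
    sq_dist = ((X_new[:, None, :] - clf.support_vectors_[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq_dist)              # RBF kernel with the new gamma
    return K @ clf.dual_coef_.ravel() + clf.intercept_

x_test = X[:5]
print("trained gamma :", clf.decision_function(x_test).round(3))
print("widened kernel:", decision_with_gamma(clf, x_test, 0.5 * gamma0).round(3))
```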

Selection of Kernels and its Parameters in Applying SVM to ASV (온라인 서명 검증을 위한 SVM의 커널 함수와 결정 계수 선택)

  • Fan, Yunhe;Woo, Young-Woon;Kim, Seong-Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.1045-1046 / 2015
  • When using a support vector machine for online signature verification, a kernel function must be chosen in order to use a non-linear SVM, and the constant parameters in the kernel function must be adjusted to appropriate values to reduce the verification error rate. A non-linear SVM, built on a strong mathematical basis, shows better classification performance with higher discriminative power. However, choosing the kernel function and adjusting the constant parameter values depend on heuristics of the problem domain. For signature verification, this paper deals with the problem of selecting the proper kernel function and constant parameter values, and reports the kernel function and parameter values that give the minimum error rate. Based on this research, we expect the average error rate to be less than 1%.
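
A minimal sketch of the kernel/parameter selection problem described above, implemented as a plain grid search that minimizes the cross-validated error rate. The synthetic data and the parameter grid are placeholders; the paper works with online signature features.

```python
# Illustrative kernel + parameter grid search (assumptions: scikit-learn, toy data).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=12, random_state=0)
param_grid = [
    {"kernel": ["rbf"],  "gamma": [0.01, 0.1, 1.0], "C": [0.1, 1, 10]},
    {"kernel": ["poly"], "degree": [2, 3],          "C": [0.1, 1, 10]},
]
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
print("best parameters :", search.best_params_)
print("cv error rate   : %.3f" % (1.0 - search.best_score_))
```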

Adoption of Support Vector Machine and Independent Component Analysis for Implementation of Speech Recognizer (음성인식기 구현을 위한 SVM과 독립성분분석 기법의 적용)

  • 박정원;김평환;김창근;허강인
    • Proceedings of the IEEK Conference / 2003.07e / pp.2164-2167 / 2003
  • In this paper we propose an effective speech recognizer through recognition experiments on three feature parameters (PCA, ICA, and MFCC) using an SVM (Support Vector Machine) classifier. In general, SVM is a classification method that separates two classes by finding an arbitrary nonlinear boundary in a vector space, and it achieves high classification performance even with a small amount of training data. In this paper we compare the recognition results for each feature parameter and propose the ICA feature as the most effective parameter.

A Note on Linear SVM in Gaussian Classes

  • Jeon, Yongho
    • Communications for Statistical Applications and Methods / v.20 no.3 / pp.225-233 / 2013
  • The linear support vector machine (SVM) is motivated by the maximal margin separating hyperplane and is a popular tool for binary classification tasks. Many studies exist on the consistency properties of SVM; however, it is unknown whether the linear SVM is consistent for estimating the optimal classification boundary even in the simple case of two Gaussian classes with a common covariance, where the optimal classification boundary is linear. In this paper we show that the linear SVM can be inconsistent in the univariate Gaussian classification problem with a common variance, even when the best tuning parameter is used.
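
An empirical illustration of the setting discussed above: two univariate Gaussian classes with a common variance, where the Bayes-optimal boundary (with equal priors) is the midpoint of the means. The sketch fits a linear SVM and prints both boundaries; the means, variance, and C value are arbitrary choices, and the paper's point is precisely that the SVM boundary need not converge to the optimal one.

```python
# Illustrative comparison of a linear SVM boundary with the Bayes-optimal
# boundary for two univariate Gaussian classes with common variance.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, mu0, mu1, sigma = 5000, 0.0, 2.0, 1.0
X = np.concatenate([rng.normal(mu0, sigma, n), rng.normal(mu1, sigma, n)])[:, None]
y = np.concatenate([np.zeros(n), np.ones(n)])

clf = LinearSVC(C=1.0).fit(X, y)
svm_boundary = -clf.intercept_[0] / clf.coef_[0, 0]   # solve w*x + b = 0
print("Bayes-optimal boundary:", (mu0 + mu1) / 2)
print("linear SVM boundary   :", round(svm_boundary, 3))
```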

Survey on Nucleotide Encoding Techniques and SVM Kernel Design for Human Splice Site Prediction

  • Bari, A.T.M. Golam;Reaz, Mst. Rokeya;Choi, Ho-Jin;Jeong, Byeong-Soo
    • Interdisciplinary Bio Central / v.4 no.4 / pp.14.1-14.6 / 2012
  • Splice site prediction in DNA sequences is a basic search problem for finding exon/intron and intron/exon boundaries. Removing the introns and then joining the exons together forms the mRNA sequence, which is the input of the translation process; this is a necessary step in the central dogma of molecular biology. The main task of splice site prediction is to find the candidate GT- and AG-ended sequences and then to distinguish the true from the false GT- and AG-ended sequences among those candidates. In this paper, we survey research on splice site prediction based on the support vector machine (SVM). The basic difference between these works lies in the nucleotide encoding technique and the SVM kernel selection. Some methods encode the DNA sequence in a sparse way, whereas others encode it in a probabilistic manner. The encoded sequences serve as the input of the SVM, whose task is to classify them using its learned model. The accuracy of classification largely depends on the proper kernel selection for sequence data as well as on the selection of the kernel parameters. We examine each encoding technique and classify them according to their similarity, and then discuss kernel and parameter selection. Our survey provides a basic understanding of encoding approaches and of proper SVM kernel selection for splice site prediction.
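
A toy sketch of the "sparse" (one-hot) nucleotide encoding mentioned in the survey, feeding fixed-length windows around candidate sites into an SVM. The sequences, labels, and window length are fabricated placeholders used only to show the encoding step.

```python
# Illustrative one-hot DNA encoding + SVM classification of candidate sites.
import numpy as np
from sklearn.svm import SVC

ONE_HOT = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0], "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}

def encode(seq):
    """Sparse encoding: each nucleotide becomes a 4-dimensional indicator."""
    return np.concatenate([ONE_HOT[base] for base in seq])

windows = ["ACGTGTAC", "TTGTGTAA", "GGCAGTAT", "CCATAGCA", "ATAGAGGT", "TCGAAGTC"]
labels  = [1, 1, 1, 0, 0, 0]   # 1 = true splice site candidate, 0 = false (toy labels)

X = np.vstack([encode(w) for w in windows])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print(clf.predict(encode("ACGTGTAC")[None, :]))
```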

Investigations on the Optimal Support Vector Machine Classifiers for Predicting Design Feasibility in Analog Circuit Optimization

  • Lee, Jiho;Kim, Jaeha
    • JSTS: Journal of Semiconductor Technology and Science / v.15 no.5 / pp.437-444 / 2015
  • In simulation-based circuit optimization, many simulation runs may be wasted evaluating infeasible designs, i.e. designs that do not meet the constraints. To avoid such waste, this paper investigates the use of support vector machine (SVM) classifiers for predicting a design's feasibility prior to simulation, and the optimal selection of the SVM parameters, namely the Gaussian kernel shape parameter γ and the misclassification penalty parameter C. These parameters affect the complexity as well as the accuracy of the model that the SVM represents. For instance, a higher γ is good for detailed modeling and a higher C is good for rejecting noise in the training set. However, our empirical study shows that a low γ value is preferable due to the high spatial correlation among the circuit design candidates, while C has negligible impact due to the smooth and clean constraint boundaries of most circuit designs. The experimental results with an LC-tank oscillator example show that an optimal selection of these parameters can improve the prediction accuracy from 80% to 98% and reduce the model complexity by a factor of 10.
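
A rough sketch of the (γ, C) screening described above, applied to a feasibility classifier. The synthetic "feasible region" with a smooth boundary is a stand-in for the circuit constraints; it is not the paper's LC-tank oscillator example.

```python
# Illustrative (gamma, C) screening for a design-feasibility SVM classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
designs = rng.uniform(-1, 1, size=(500, 4))                          # candidate designs
feasible = (designs[:, 0] + 0.5 * designs[:, 1] < 0.3).astype(int)   # smooth boundary

for gamma in [0.01, 0.1, 1.0, 10.0]:
    for C in [0.1, 1.0, 10.0]:
        acc = cross_val_score(SVC(kernel="rbf", gamma=gamma, C=C),
                              designs, feasible, cv=5).mean()
        print(f"gamma={gamma:5.2f}  C={C:5.1f}  accuracy={acc:.3f}")
```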

Research on prediction and analysis of supercritical water heat transfer coefficient based on support vector machine

  • Ma Dongliang;Li Yi;Zhou Tao;Huang Yanping
    • Nuclear Engineering and Technology / v.55 no.11 / pp.4102-4111 / 2023
  • In order to better perform thermal-hydraulic calculation and analysis of a supercritical water reactor, model training and predictive analysis of the supercritical water heat transfer coefficient were carried out with the support vector machine (SVM) algorithm, based on experimental supercritical-water data. The changes in prediction accuracy of the heat transfer coefficient are analyzed with respect to the regularization penalty parameter C, the slack variable epsilon, and the Gaussian kernel function parameter gamma. The predictions of the SVM model obtained after parameter optimization are verified against the experimental test data. The results show that normalization of the data has a great influence on the prediction results, that the slack variable has a relatively small influence on the accuracy of the predicted heat transfer coefficient, and that the change of gamma has the greatest impact on accuracy. Compared with the results of traditional empirical correlations, the trained SVM model has a smaller average error and standard deviation. Using the trained SVM model, the heat transfer coefficient of supercritical water can be effectively predicted and analyzed.
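
A minimal sketch of an epsilon-SVR pipeline with input normalization and the three parameters discussed above (C, epsilon, gamma). Synthetic regression data replaces the supercritical-water heat transfer measurements, and the parameter grid is an arbitrary assumption.

```python
# Illustrative normalized SVR with a grid over C, epsilon and gamma.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = make_regression(n_samples=400, n_features=6, noise=5.0, random_state=0)

pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = {
    "svr__C":       [1, 10, 100],
    "svr__epsilon": [0.01, 0.1, 1.0],
    "svr__gamma":   [0.01, 0.1, 1.0],
}
search = GridSearchCV(pipe, grid, cv=5, scoring="r2").fit(X, y)
print("best parameters:", search.best_params_)
print("cv R^2         : %.3f" % search.best_score_)
```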