• Title/Summary/Keyword: Kernel machine technique


Estimating multiplicative competitive interaction model using kernel machine technique

  • Shim, Joo-Yong;Kim, Mal-Suk;Park, Hye-Jung
    • Journal of the Korean Data and Information Science Society
    • /
    • v.23 no.4
    • /
    • pp.825-832
    • /
    • 2012
  • We propose a novel way of forecasting the market shares of several brands simultaneously in a multiplicative competitive interaction model, using kernel regression together with the kernel machine technique employed in support vector machines and other machine learning methods. Traditionally, the market share attraction model is estimated via a maximum likelihood procedure under the assumption that the data are drawn from a normal distribution. The proposed method is shown to be a good candidate forecasting method for the market share attraction model when a normal distribution is not assumed. We apply the proposed method to forecast the market shares of four Korean car brands simultaneously, and it performs better than the maximum likelihood estimation procedure.
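
As a rough illustration of the kernel-regression idea in the abstract above, the sketch below fits a multi-output kernel ridge regression (scikit-learn's KernelRidge, used here as a generic stand-in for the authors' kernel machine estimator) to log-centered market shares and maps the predictions back to the share simplex. All data, variable names and hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' estimator): forecast brand market shares by
# fitting a kernel ridge regression to log-centered shares, then mapping the
# predictions back to the simplex.  Data and hyperparameters are illustrative.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n, n_brands = 200, 4
X = rng.normal(size=(n, 3))                      # marketing variables (e.g. price, ad spend)
logits = X @ rng.normal(size=(3, n_brands))      # latent brand attractions
shares = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# log-centered transform: y_i = log(s_i) - mean_j log(s_j)
y = np.log(shares) - np.log(shares).mean(axis=1, keepdims=True)

model = KernelRidge(kernel="rbf", gamma=0.5, alpha=1.0).fit(X[:150], y[:150])
pred = model.predict(X[150:])
pred_shares = np.exp(pred) / np.exp(pred).sum(axis=1, keepdims=True)  # back to shares
print(pred_shares[:3].round(3))
```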

M-quantile regression using kernel machine technique

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.5
    • /
    • pp.973-981
    • /
    • 2010
  • Quantile regression investigates the quantiles of the conditional distribution of a response variable given a set of covariates. M-quantile regression extends this idea through a "quantile-like" generalization of regression based on influence functions. In this paper we propose a new method of estimating M-quantile regression functions that uses the kernel machine technique. Simulation studies are presented that show the finite-sample properties of the proposed M-quantile regression.
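
The following is a minimal sketch of one plausible way to fit a kernel M-quantile regression: an asymmetric Huber influence function combined with iteratively reweighted ridge regression in the RKHS via the representer theorem. The loss form, weights and hyperparameters are assumptions for illustration; the paper's estimator may differ in detail.

```python
# Rough sketch of kernel M-quantile regression (tau-th M-quantile with an asymmetric
# Huber influence function), fitted by iteratively reweighted ridge in the RKHS.
# This illustrates the general idea only; it is not the paper's code.
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_mquantile(X, y, tau=0.75, c=1.345, lam=1.0, gamma=1.0, iters=50):
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(iters):
        r = y - K @ alpha
        # IRLS weights w = psi(r)/r for the asymmetric Huber influence function
        huber_w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-8))
        w = np.where(r > 0, 2 * tau, 2 * (1 - tau)) * huber_w
        W = np.diag(w)
        alpha = np.linalg.solve(K @ W @ K + lam * K + 1e-8 * np.eye(n), K @ W @ y)
    return alpha, K

# toy data
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=100)
alpha, K = kernel_mquantile(X, y)
print((K @ alpha)[:5].round(3))   # fitted 0.75 M-quantile values
```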

On the Support Vector Machine with the kernel of the q-normal distribution

  • Joguchi, Hirofumi;Tanaka, Masaru
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.983-986
    • /
    • 2002
  • The Support Vector Machine (SVM) is a pattern recognition method that separates input data with a hyperplane. It achieves high recognition capability through the so-called kernel trick, in which the radial basis function (RBF) kernel is usually used as the kernel function. In this paper we propose using the q-normal distribution as the kernel function instead of the conventional RBF kernel, and compare the two types of kernel function.

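A small sketch of the idea, assuming the q-Gaussian form exp_q(u) = [1 + (1 - q)u]_+^{1/(1-q)} for the "q-normal" kernel (the paper's exact parameterization may differ): the custom kernel is passed to scikit-learn's SVC as a callable and compared against the RBF limit q → 1.

```python
# Sketch of an SVM with a q-Gaussian ("q-normal") kernel in place of the usual RBF
# kernel.  The q-exponential form used here is one common parameterization; the
# paper's exact kernel may differ.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def q_gaussian_kernel(q=1.5, beta=1.0):
    def kernel(X, Z):
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        if abs(q - 1.0) < 1e-9:                      # q -> 1 recovers the RBF kernel
            return np.exp(-beta * d2)
        base = np.maximum(1.0 - (1.0 - q) * beta * d2, 0.0)
        return base ** (1.0 / (1.0 - q))
    return kernel

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)
for q in (1.0, 1.5, 2.0):
    clf = SVC(kernel=q_gaussian_kernel(q=q, beta=2.0), C=1.0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"q = {q}: CV accuracy = {score:.3f}")
```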

Support Vector Bankruptcy Prediction Model with Optimal Choice of RBF Kernel Parameter Values using Grid Search (Support Vector Machine을 이용한 부도예측모형의 개발 -격자탐색을 이용한 커널 함수의 최적 모수 값 선정과 기존 부도예측모형과의 성과 비교-)

  • Min Jae H.;Lee Young-Chan
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.30 no.1
    • /
    • pp.55-74
    • /
    • 2005
  • Bankruptcy prediction has drawn a lot of research interest in previous literature, and recent studies have shown that machine learning techniques achieve better performance than traditional statistical ones. This paper applies a relatively new machine learning technique, support vector machines (SVMs), to the bankruptcy prediction problem in an attempt to suggest a new model with better explanatory power and stability. To serve this purpose, we use a grid search with 5-fold cross-validation to find the optimal values of the kernel function parameters of the SVM. In addition, to evaluate the prediction accuracy of the SVM, we compare its performance with multiple discriminant analysis (MDA), logistic regression analysis (Logit), and three-layer fully connected back-propagation neural networks (BPNs). The experimental results show that the SVM outperforms the other methods.
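
A minimal sketch of the tuning procedure described above, using scikit-learn's GridSearchCV with 5-fold cross-validation over the RBF parameters C and gamma; the synthetic features stand in for the paper's financial-ratio data.

```python
# Minimal sketch of the grid-search idea: tune the RBF-kernel SVM parameters (C, gamma)
# with 5-fold cross-validation.  Features and labels are synthetic stand-ins for the
# paper's financial-ratio data.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, weights=[0.8, 0.2],
                           random_state=0)          # 0 = solvent, 1 = bankrupt (illustrative)

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [0.1, 1, 10, 100],
              "svc__gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy").fit(X, y)
print("best parameters:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```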

Estimating GARCH models using kernel machine learning (커널기계 기법을 이용한 일반화 이분산자기회귀모형 추정)

  • Hwang, Chang-Ha;Shin, Sa-Im
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.3
    • /
    • pp.419-425
    • /
    • 2010
  • Kernel machine learning is gaining popularity for analyzing large or high-dimensional nonlinear data. We use this technique to estimate a GARCH model for predicting the conditional volatility of stock market returns. GARCH models are usually estimated using maximum likelihood (ML) procedures under the assumption that the data are normally distributed. In this paper, we show that GARCH models can be estimated using kernel machine learning, and that the kernel machine has higher predictive ability than ML methods and the support vector machine when estimating the volatility of financial time series data with fat tails.
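
The sketch below is only a crude stand-in for the paper's estimator: it treats the conditional variance as a nonlinear function of lagged squared returns and fits it with kernel ridge regression on a simulated fat-tailed return series. The simulation, lag order and hyperparameters are illustrative assumptions.

```python
# Crude illustration only: approximate the conditional variance as a nonlinear function
# of lagged squared returns with kernel ridge regression (an ARCH-style stand-in for
# the paper's kernel-machine GARCH estimator).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(42)
T, p = 1000, 5
# simulate a GARCH(1,1)-like return series with fat-tailed innovations
h, r = np.empty(T), np.empty(T)
h[0], r[0] = 0.1, 0.0
for t in range(1, T):
    h[t] = 0.05 + 0.1 * r[t - 1] ** 2 + 0.85 * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_t(df=5)

# lagged squared returns as inputs, next squared return as a volatility proxy
X = np.column_stack([r[p - k - 1:T - k - 1] ** 2 for k in range(p)])
y = r[p:] ** 2

model = KernelRidge(kernel="rbf", gamma=0.1, alpha=1.0).fit(X[:800], y[:800])
vol_forecast = np.sqrt(np.clip(model.predict(X[800:]), 1e-8, None))
print(vol_forecast[:5].round(3))
```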

Survey on Nucleotide Encoding Techniques and SVM Kernel Design for Human Splice Site Prediction

  • Bari, A.T.M. Golam;Reaz, Mst. Rokeya;Choi, Ho-Jin;Jeong, Byeong-Soo
    • Interdisciplinary Bio Central
    • /
    • v.4 no.4
    • /
    • pp.14.1-14.6
    • /
    • 2012
  • Splice site prediction in DNA sequences is a basic search problem for finding exon/intron and intron/exon boundaries. Removing the introns and joining the exons together forms the mRNA sequence, which is the input of the translation process, a necessary step in the central dogma of molecular biology. The main task of splice site prediction is to find the candidate GT- and AG-ended sequences and then to distinguish the true from the false ones among those candidates. In this paper, we survey research on splice site prediction based on the support vector machine (SVM). The basic difference between these works lies in the nucleotide encoding technique and the SVM kernel selection. Some methods encode the DNA sequence in a sparse way, whereas others encode it in a probabilistic manner. The encoded sequences serve as the input of the SVM, whose task is to classify them using its learned model. The classification accuracy largely depends on the proper kernel selection for sequence data as well as the selection of the kernel parameters. We examine each encoding technique and classify them according to their similarity, and then discuss kernels and their parameter selection. Our survey provides a basic understanding of encoding approaches and proper SVM kernel selection for splice site prediction.
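
As a concrete example of the sparse (one-hot) nucleotide encoding mentioned above, the sketch below encodes short candidate windows and trains an RBF-kernel SVM on them; the sequences and labels are toy values, not a real splice-site data set.

```python
# Sketch of the sparse (one-hot) nucleotide encoding commonly used as SVM input for
# splice-site prediction, followed by an RBF-kernel SVM.  Sequences and labels are
# toy examples only.
import numpy as np
from sklearn.svm import SVC

def one_hot_encode(seq):
    """Encode a DNA string as a flat 4-bits-per-base vector (A, C, G, T)."""
    table = {"A": (1, 0, 0, 0), "C": (0, 1, 0, 0),
             "G": (0, 0, 1, 0), "T": (0, 0, 0, 1)}
    return np.array([bit for base in seq for bit in table[base]], dtype=float)

# toy candidate windows around a GT dinucleotide (1 = true donor site, 0 = false)
seqs = ["AAGGTAAGT", "CAGGTGAGT", "TTGGTCTTA", "ACGGTACGA",
        "CAGGTAAGA", "GAGGTGAGG", "TTAGTCTTA", "ACAGTACGA"]
labels = np.array([1, 1, 0, 0, 1, 1, 0, 0])

X = np.vstack([one_hot_encode(s) for s in seqs])
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, labels)
print(clf.predict(one_hot_encode("CAGGTAAGT").reshape(1, -1)))
```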

Multi-Radial Basis Function SVM Classifier: Design and Analysis

  • Wang, Zheng;Yang, Cheng;Oh, Sung-Kwun;Fu, Zunwei
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.6
    • /
    • pp.2511-2520
    • /
    • 2018
  • In this study, a Multi-Radial Basis Function Support Vector Machine (Multi-RBF SVM) classifier based on a composite kernel function is introduced. In the proposed classifier, the input space is divided into several local subsets to handle highly nonlinear classification tasks. Each local subset is treated as a nonlinear classification subspace and mapped into feature space by a kernel function. The composite kernel function employs a dual RBF structure. By capturing the nonlinear distribution of the local subsets, the training data are mapped into a higher-dimensional feature space, and the Multi-SVM classifier is then realized with the composite kernel function through an optimization procedure similar to that of the conventional SVM classifier. The original training data set is partitioned using unsupervised learning, namely clustering. In this study, three types of clustering methods are considered: Affinity Propagation (AP), Hard C-Means (HCM), and the Iterative Self-Organizing Data Analysis Technique Algorithm (ISODATA). Experimental results on benchmark machine learning data sets show that the proposed method improves classification performance efficiently.
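
The sketch below illustrates one plausible composite, cluster-wise RBF kernel in the spirit of the abstract above: the training data are partitioned by clustering (KMeans is used here purely as a stand-in for AP, HCM or ISODATA), soft cluster memberships weight cluster-specific RBF kernels, and the resulting precomputed kernel is given to an SVM. The exact composite form in the paper may differ.

```python
# Illustrative composite, cluster-wise RBF kernel (a stand-in for the multi-RBF
# construction in the paper).  KMeans replaces AP/HCM/ISODATA purely for brevity.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def membership(X, centers, width=1.0):
    """Soft cluster memberships from distances to the cluster centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    m = np.exp(-d2 / (2 * width ** 2))
    return m / m.sum(axis=1, keepdims=True)

def composite_kernel(X, Z, centers, gammas, width=1.0):
    """Sum over clusters c of m_c(x) * m_c(z) * RBF_{gamma_c}(x, z)."""
    mX, mZ = membership(X, centers, width), membership(Z, centers, width)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    K = np.zeros((len(X), len(Z)))
    for c, g in enumerate(gammas):
        K += np.outer(mX[:, c], mZ[:, c]) * np.exp(-g * d2)
    return K

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xtr)
gammas = [0.5, 4.0]                      # one RBF width per local subset (illustrative)
clf = SVC(kernel="precomputed", C=1.0)
clf.fit(composite_kernel(Xtr, Xtr, km.cluster_centers_, gammas), ytr)
acc = clf.score(composite_kernel(Xte, Xtr, km.cluster_centers_, gammas), yte)
print(f"test accuracy: {acc:.3f}")
```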

Kernel Adatron Algorithm of Support Vector Machine for Function Approximation (함수근사를 위한 서포트 벡터 기계의 커널 애더트론 알고리즘)

  • Seok, Kyung-Ha;Hwang, Chang-Ha
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.6
    • /
    • pp.1867-1873
    • /
    • 2000
  • Function approximation from a set of input-output pairs has numerous applications in scientific and engineering areas. The support vector machine (SVM) is a new and very promising classification, regression and function approximation technique developed by Vapnik and his group at AT&T Bell Laboratories. However, it has yet to establish itself as a common machine learning tool, partly because it is not easy to implement and its standard implementation requires an optimization package for quadratic programming (QP). In this paper we present a simple iterative Kernel Adatron (KA) algorithm for function approximation and compare it with the standard SVM algorithm using QP.

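For orientation, the sketch below implements the classic classification-form Kernel Adatron update (simple gradient ascent on the dual, no QP solver); the paper adapts the same iterative idea to function approximation, which is not reproduced here.

```python
# The classic (classification) Kernel Adatron update, shown for orientation; the paper
# adapts the same gradient-ascent idea to SVM function approximation.
import numpy as np

def rbf(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_adatron(X, y, gamma=1.0, eta=0.1, C=10.0, epochs=200):
    """y must be in {-1, +1}; returns the dual coefficients alpha."""
    K = rbf(X, X, gamma)
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        for i in range(len(y)):
            margin = y[i] * (K[i] @ (alpha * y))
            alpha[i] = np.clip(alpha[i] + eta * (1.0 - margin), 0.0, C)
    return alpha

# toy problem: two Gaussian blobs
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 0.5, size=(50, 2)), rng.normal(1, 0.5, size=(50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]

alpha = kernel_adatron(X, y)
decision = rbf(X, X) @ (alpha * y)          # decision values on the training set
print("training accuracy:", (np.sign(decision) == y).mean())
```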

Real-Time Prediction for Product Surface Roughness by Support Vector Regression (서포트벡터 회귀를 이용한 실시간 제품표면거칠기 예측)

  • Choi, Sujin;Lee, Dongju
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.44 no.3
    • /
    • pp.117-124
    • /
    • 2021
  • The development of IoT and artificial intelligence technology is promoting the smartization of manufacturing systems. In this study, data from an acceleration sensor and a current sensor were obtained through experiments on the cutting process of SKD11, which is widely used as a material for special mold steel, and the amount of tool wear and the product surface roughness were measured. Support vector regression (SVR) is applied to predict the product surface roughness in real time using the obtained data. SVR, a machine learning technique, is widely used for linear and nonlinear prediction through the kernel concept. In particular, by applying generalized support vector quantile regression (GSVQR), overestimation, underestimation, and neutral estimation of the product surface roughness are performed and compared. Furthermore, the surface roughness is predicted using both the linear kernel and the RBF kernel; in terms of accuracy, the RBF kernel gives better results than the linear kernel. Since it is difficult to predict the amount of tool wear in real time, the product surface roughness is also predicted using only the acceleration and current data, excluding the amount of tool wear; in terms of accuracy, these results were not significantly different from those obtained when the amount of tool wear was included.
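
A small sketch of the kernel comparison described above: support vector regression with a linear kernel versus an RBF kernel, evaluated by cross-validation on synthetic sensor-style features standing in for the acceleration and current signals (the quantile variant, GSVQR, is not shown).

```python
# Sketch of the linear-vs-RBF SVR comparison on synthetic sensor-style features;
# the data are illustrative stand-ins for the acceleration/current measurements.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n = 300
accel_rms = rng.uniform(0.5, 3.0, n)          # acceleration-sensor feature (illustrative)
current_mean = rng.uniform(2.0, 8.0, n)       # current-sensor feature (illustrative)
roughness = 0.2 * accel_rms ** 2 + 0.05 * current_mean + rng.normal(0, 0.05, n)

X = np.column_stack([accel_rms, current_mean])
for kernel in ("linear", "rbf"):
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, C=10.0, epsilon=0.01))
    r2 = cross_val_score(model, X, roughness, cv=5, scoring="r2").mean()
    print(f"{kernel} kernel: mean CV R^2 = {r2:.3f}")
```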

Fine-tuning SVM for Enhancing Speech/Music Classification (SVM의 미세조정을 통한 음성/음악 분류 성능향상)

  • Lim, Chung-Soo;Song, Ji-Hyun;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.2
    • /
    • pp.141-148
    • /
    • 2011
  • Support vector machines have been extensively studied and utilized in the pattern recognition area for years. One interesting application of this technique is speech/music classification for a standardized codec such as the 3GPP2 selectable mode vocoder. In this paper, we propose a novel approach that improves the speech/music classification of support vector machines. Whereas conventional support vector machine optimization techniques apply during the training phase, the proposed technique can be adopted in the classification phase. In this regard, the proposed approach can be developed and employed in parallel with conventional optimizations, resulting in a synergistic boost in classification performance. We first analyze the impact of the kernel width parameter on the classifications made by support vector machines. From this analysis, we observe that the outputs of support vector machines can be fine-tuned with the kernel width parameter. To make the most of this capability, we identify the strong correlation among neighboring input frames and use this correlation information as a guide for adjusting the kernel width parameter. According to the experimental results, the proposed algorithm has the potential to improve the performance of support vector machines.
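
The toy sketch below illustrates the key observation, not the full method: an SVM is trained with a fixed RBF width, and its decision function is then evaluated with different test-time widths to show how the kernel width parameter shifts the outputs at classification time. The frame-correlation rule that guides the adjustment in the paper is not reproduced.

```python
# Toy illustration: once an SVM is trained, evaluating its decision function with a
# different RBF width shifts the outputs, which is what classification-phase
# fine-tuning of the kernel width exploits.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def rbf(X, Z, gamma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

train_gamma = 0.1
clf = SVC(kernel="precomputed", C=1.0).fit(rbf(Xtr, Xtr, train_gamma), ytr)

for test_gamma in (0.05, 0.1, 0.2):          # widen / keep / narrow the kernel at test time
    scores = clf.decision_function(rbf(Xte, Xtr, test_gamma))
    acc = ((scores > 0).astype(int) == yte).mean()
    print(f"test-time gamma = {test_gamma}: accuracy = {acc:.3f}, "
          f"mean |decision value| = {np.abs(scores).mean():.3f}")
```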