• Title/Summary/Keyword: support vector data description

Search results: 51

Support Vector Quantile Regression Using Asymmetric e-Insensitive Loss Function

  • Shim, Joo-Yong;Seok, Kyung-Ha;Hwang, Chang-Ha;Cho, Dae-Hyeon
    • Communications for Statistical Applications and Methods / v.18 no.2 / pp.165-170 / 2011
  • Support vector quantile regression (SVQR) is capable of providing a good description of the linear and nonlinear relationships among random variables. In this paper we propose a sparse SVQR to overcome a limitation of SVQR, namely its lack of sparsity. The asymmetric e-insensitive loss function is used to induce sparsity efficiently. Experimental results are presented to illustrate the performance of the proposed method by comparing it with nonsparse SVQR.
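
For readers unfamiliar with the loss referred to above, the following is a minimal numpy sketch of one common parameterization of an asymmetric e-insensitive (check-type) loss; the exact dead-zone widths and weights used by Shim et al. may differ, so the function name and parameters here are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def asymmetric_eps_insensitive_loss(residual, tau=0.5, eps=0.1):
    """One common form of the asymmetric e-insensitive loss for quantile level tau.

    Residuals inside the dead zone [-(1 - tau) * eps, tau * eps] incur no loss;
    positive excess residuals are weighted by tau, negative ones by (1 - tau),
    which is what steers the fitted function toward the tau-th quantile.
    """
    r = np.asarray(residual, dtype=float)
    upper = np.maximum(r - tau * eps, 0.0)           # under-prediction beyond the dead zone
    lower = np.maximum(-r - (1.0 - tau) * eps, 0.0)  # over-prediction beyond the dead zone
    return tau * upper + (1.0 - tau) * lower

# Example: only residuals outside the asymmetric tube are penalized.
print(asymmetric_eps_insensitive_loss(np.array([-0.3, -0.05, 0.0, 0.02, 0.4]), tau=0.7, eps=0.1))
```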

Support Vector Learning for Abnormality Detection Problems (비정상 상태 탐지 문제를 위한 서포트벡터 학습)

  • Park, Joo-Young;Leem, Chae-Hwan
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.3 / pp.266-274 / 2003
  • This paper considers incremental support vector learning for abnormality detection problems. One of the most well-known support vector learning methods for abnormality detection is the so-called SVDD (support vector data description), which uses balls defined in the kernel feature space to distinguish a set of normal data from all other possible abnormal objects. The major concern of this paper is to modify SVDD to exploit the relation between the optimal solution and incrementally given training data. After a thorough review of the original SVDD method, the paper establishes an incremental method for finding the optimal solution based on certain observations about the Lagrangian dual problems. The applicability of the presented incremental method is illustrated via a design example.
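
As a rough illustration of the ball-based description that SVDD builds (not the incremental algorithm this paper proposes), the sketch below fits scikit-learn's OneClassSVM with an RBF kernel; for kernels with a constant diagonal such as the RBF kernel, its decision boundary is known to coincide with SVDD's, so it serves as a convenient stand-in. The data, gamma, and nu values are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_data = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # "normal" training objects
test_points = np.array([[0.1, -0.2], [4.0, 4.0]])             # one typical, one abnormal point

# nu upper-bounds the fraction of training objects treated as outliers
# (it roughly plays the role of SVDD's C through C = 1 / (nu * n)).
model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(normal_data)

print(model.predict(test_points))   # +1 = inside the description, -1 = outside
print(len(model.support_))          # number of support vectors defining the ball
```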

One-Class Support Vector Learning and Linear Matrix Inequalities

  • Park, Jooyoung;Kim, Jinsung;Lee, Hansung;Park, Daihee
    • International Journal of Fuzzy Logic and Intelligent Systems / v.3 no.1 / pp.100-104 / 2003
  • SVDD (support vector data description) is one of the most well-known one-class support vector learning methods; it uses balls defined in the kernel feature space to distinguish a set of normal data from all other possible abnormal objects. The major concern of this paper is to modify SVDD to use ellipsoids instead of balls in order to enable better classification performance. After a brief review of the original SVDD method, the paper establishes a new method utilizing ellipsoids in feature space and presents a solution in the form of an SDP (semi-definite programming) problem, an optimization problem based on linear matrix inequalities.
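
The paper's formulation works with ellipsoids in the kernel feature space and is solved as an SDP; as a much simpler hedged illustration of the underlying geometric idea, the sketch below computes a minimum-volume enclosing ellipsoid of data in the input space with cvxpy. It is not the authors' method, only the classical convex program that the ellipsoidal description generalizes.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))          # normal training objects in input space

d = X.shape[1]
A = cp.Variable((d, d), PSD=True)      # shape matrix of the ellipsoid {x : ||A x + b|| <= 1}
b = cp.Variable(d)

# Minimizing -log det(A) minimizes the ellipsoid's volume subject to covering all points.
constraints = [cp.norm(A @ x + b) <= 1 for x in X]
problem = cp.Problem(cp.Minimize(-cp.log_det(A)), constraints)
problem.solve()

# A test point is "described" (treated as normal) if it lies inside the ellipsoid.
test_point = np.array([0.2, -0.1])
print(np.linalg.norm(A.value @ test_point + b.value) <= 1)
```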

Sparse Kernel Classification using IRWLS Procedure

  • Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society / v.20 no.4 / pp.749-755 / 2009
  • Support vector classification (SVC) provides a more complete description of the linear and nonlinear relationships between input vectors and classifiers. In this paper we propose a sparse kernel classifier that solves the classification optimization problem with a modified hinge loss function and an absolute loss function, which provides efficient computation and sparsity. We also introduce the generalized cross validation function to select the hyperparameters which affect the classification performance of the proposed method. Experimental results are then presented which illustrate the performance of the proposed procedure for classification.
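
This abstract and the e-SVR/SVC abstracts that follow all rely on an IRWLS loop: the non-quadratic loss is locally replaced by a weighted squared loss and a (kernel) least-squares system is re-solved until the coefficients converge. The hedged sketch below illustrates that computational pattern with penalized kernel logistic regression, whose IRWLS (Newton) update is standard; the modified hinge and absolute losses of the paper would change only the weights and the working response, and are not reproduced here.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_logistic_irwls(X, y, lam=1.0, gamma=1.0, n_iter=25):
    """IRWLS (Newton) iterations for penalized kernel logistic regression.

    y is in {0, 1}. Each iteration solves one weighted least-squares system,
    the same computational pattern the IRWLS-based SVC/SVR papers exploit
    for their modified losses.
    """
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        f = K @ alpha
        p = 1.0 / (1.0 + np.exp(-f))          # current class-1 probabilities
        W = p * (1.0 - p)                     # IRWLS weights
        H = K @ (W[:, None] * K) + lam * K + 1e-8 * np.eye(n)
        rhs = K @ (W * f) + K @ (y - p)       # K W K alpha + K (y - p)
        alpha = np.linalg.solve(H, rhs)
    return alpha, K

# Toy usage: two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (30, 2)), rng.normal(1, 0.5, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
alpha, K = kernel_logistic_irwls(X, y)
pred = (K @ alpha > 0).astype(int)
print("training accuracy:", (pred == y).mean())
```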

e-SVR using IRWLS Procedure

  • Shim, Joo-Yong
    • Journal of the Korean Data and Information Science Society / v.16 no.4 / pp.1087-1094 / 2005
  • e-insensitive support vector regression (e-SVR) is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In this paper we propose an iterative reweighted least squares (IRWLS) procedure to solve the quadratic problem of e-SVR with a modified loss function. Furthermore, we introduce the generalized approximate cross validation function to select the hyperparameters which affect the performance of e-SVR. Experimental results are then presented which illustrate the performance of the IRWLS procedure for e-SVR.
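
Independently of the IRWLS solver developed in the paper, the role of the e-insensitive tube is easy to see with an off-the-shelf e-SVR. The hedged sketch below uses scikit-learn's SVR (not the authors' implementation) on synthetic data: widening epsilon leaves more residuals inside the tube, so fewer training points become support vectors.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 200))[:, None]
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

# Points whose residuals stay inside the epsilon-tube contribute no loss and
# do not become support vectors, so a wider tube gives a sparser model.
for eps in (0.01, 0.1, 0.3):
    model = SVR(kernel="rbf", C=10.0, epsilon=eps).fit(X, y)
    print(f"epsilon={eps}: {len(model.support_)} support vectors out of {len(X)}")
```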

SVC with Modified Hinge Loss Function

  • Lee, Sang-Bock
    • Journal of the Korean Data and Information Science Society / v.17 no.3 / pp.905-912 / 2006
  • Support vector classification (SVC) provides a more complete description of the linear and nonlinear relationships between input vectors and classifiers. In this paper we propose to solve the optimization problem of SVC with a modified hinge loss function, which enables the use of an iterative reweighted least squares (IRWLS) procedure. We also introduce the approximate cross validation function to select the hyperparameters which affect the performance of SVC. Experimental results are then presented which illustrate the performance of the proposed procedure for classification.
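
The paper selects hyperparameters with an approximate cross validation function. As a hedged stand-in for that selection step (plain k-fold grid search rather than the authors' criterion), the sketch below tunes C and gamma for an RBF-kernel SVC with scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Grid over the two hyperparameters that typically dominate SVC performance.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```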

Sparse Kernel Regression using IRWLS Procedure

  • Park, Hye-Jung
    • Journal of the Korean Data and Information Science Society / v.18 no.3 / pp.735-744 / 2007
  • The support vector machine (SVM) is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In this paper we propose a sparse kernel regression (SKR) to overcome a weak point of SVM, namely the steep growth of the number of support vectors as the number of training data increases. The iterative reweighted least squares (IRWLS) procedure is used to solve the optimization problem of SKR with a Laplacian prior. Furthermore, the generalized cross validation (GCV) function is introduced to select the hyperparameters which affect the performance of SKR. Experimental results are then presented which illustrate the performance of the proposed procedure.
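
The GCV criterion mentioned here has a closed form for any linear smoother. The hedged numpy sketch below evaluates it for a kernel ridge smoother, a simpler stand-in for the paper's sparse kernel regression with a Laplacian prior; the kernel, data, and candidate lambda values are arbitrary illustrative choices.

```python
import numpy as np

def gcv_score(K, y, lam):
    """Generalized cross validation for a kernel ridge smoother y_hat = S y.

    S = K (K + lam I)^{-1}; GCV approximates leave-one-out error without refitting,
    which is why it is a popular hyperparameter criterion in the kernel literature.
    """
    n = len(y)
    S = K @ np.linalg.inv(K + lam * np.eye(n))
    resid = y - S @ y
    return (resid @ resid / n) / (1.0 - np.trace(S) / n) ** 2

# Toy usage: pick lambda for an RBF kernel smoother on noisy sine data.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 80)
y = np.sin(x) + rng.normal(scale=0.2, size=80)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)

lams = [1e-3, 1e-2, 1e-1, 1.0]
scores = [gcv_score(K, y, lam) for lam in lams]
print(dict(zip(lams, scores)), "-> best lambda:", lams[int(np.argmin(scores))])
```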

Electricity Demand Forecasting based on Support Vector Regression (Support Vector Regression에 기반한 전력 수요 예측)

  • Lee, Hyoung-Ro;Shin, Hyun-Jung
    • IE interfaces / v.24 no.4 / pp.351-361 / 2011
  • Forecasting of electricity demand has difficulty adapting to abrupt weather changes and radical shifts in major regional and global climates. This has led to increasing attention to research on immediate and accurate forecasting models. Technically, this implies that a model should require only a few input variables, all of which are easily obtainable, while its predictive performance remains comparable with that of competing models. To meet these ends, this paper presents an energy demand forecasting model that uses the variable selection or extraction methods of data mining to select only relevant input variables, and employs the support vector regression method for accurate prediction. It also proposes a novel performance measure for time-series prediction, the shift index, followed by a description of the preprocessing procedure. A comparative evaluation of the proposed method against other representative data mining models, such as an auto-regression model, an artificial neural network model, and an ordinary support vector regression model, was carried out to obtain forecasts of monthly electricity demand from 2000 to 2008 based on data provided by the Korea Energy Economics Institute. Among the models tested, the proposed method showed more promising results than the others.
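
The paper's full pipeline (variable selection/extraction, the shift index, and the preprocessing steps) is not reproduced here. The hedged sketch below shows only the core idea of SVR-based demand forecasting: lagged values of a synthetic monthly series (a stand-in for the KEEI data) are fed to scikit-learn's SVR and evaluated on a one-year holdout.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# Synthetic stand-in for a monthly demand series (trend + seasonality + noise).
rng = np.random.default_rng(0)
months = np.arange(120)
series = 100 + 0.5 * months + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, 120)

# Build lag features: predict this month's demand from the previous 12 months.
n_lags = 12
X = np.array([series[t - n_lags:t] for t in range(n_lags, len(series))])
y = series[n_lags:]

# Hold out the last 12 months to mimic out-of-sample forecasting.
X_train, X_test = X[:-12], X[-12:]
y_train, y_test = y[:-12], y[-12:]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MAPE (%):", 100 * np.mean(np.abs((y_test - pred) / y_test)))
```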

Detection of the Change in Blogger Sentiment using Multivariate Control Charts (다변량 관리도를 활용한 블로거 정서 변화 탐지)

  • Moon, Jeounghoon;Lee, Sungim
    • The Korean Journal of Applied Statistics / v.26 no.6 / pp.903-913 / 2013
  • Social network services generate a considerable amount of social data every day on personal feelings or thoughts. This social data not only reveals changing patterns of information production and consumption but also serves as a tool that reflects social phenomena. We analyze negative emotional words from daily blogs to detect changes in blogger sentiment using multivariate control charts, using all the blogs produced between 1 January 2008 and 31 December 2009. Hotelling's T-square control chart is commonly used to monitor multivariate quality characteristics; however, it assumes that the quality characteristics follow a multivariate normal distribution. The performance of a multivariate control chart is affected by this assumption; consequently, we introduce the support vector data description and its extension (the K-control chart) suggested by Sun and Tsung (2003), and they are applied to detect the change in blogger sentiment.
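
For reference, the Hotelling's T-square statistic monitored by the classical multivariate chart is simple to compute; the hedged numpy sketch below uses synthetic data and a chi-square approximation for the control limit (exact phase II limits use an F distribution). The SVDD-based K-chart of Sun and Tsung, which replaces this statistic with a kernel distance, is not reproduced.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
reference = rng.normal(size=(200, 4))    # in-control (phase I) observations, 4 sentiment variables
new_obs = rng.normal(size=(30, 4)) + np.array([0.0, 0.0, 0.0, 1.5])  # later observations, one shifted mean

mean = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

# Hotelling's T^2 for each new observation: squared Mahalanobis distance to the reference mean.
diff = new_obs - mean
t2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Approximate control limit via the chi-square distribution.
limit = chi2.ppf(0.99, df=reference.shape[1])
print("out-of-control signals:", np.sum(t2 > limit), "of", len(t2))
```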

Support vector quantile regression ensemble with bagging

  • Shim, Jooyong;Hwang, Changha
    • Journal of the Korean Data and Information Science Society / v.25 no.3 / pp.677-684 / 2014
  • Support vector quantile regression (SVQR) is capable of providing a more complete description of the linear and nonlinear relationships among random variables. To improve the estimation performance of SVQR we propose to use an SVQR ensemble with bagging (bootstrap aggregating), in which SVQRs are trained independently on training data sets sampled randomly via a bootstrap method. They are then aggregated to obtain the estimator of the quantile regression function using a penalized objective function composed of check functions. Experimental results are then presented which illustrate the performance of the SVQR ensemble with bagging.
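
As a hedged illustration of the bagging step only, the sketch below trains several quantile regressors on bootstrap resamples and averages their predictions; gradient-boosted quantile regression from scikit-learn stands in for SVQR, and simple averaging stands in for the paper's penalized check-function aggregation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=300)

tau = 0.9            # quantile level to estimate
n_bags = 10          # number of bootstrap resamples

predictions = []
for _ in range(n_bags):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap sample with replacement
    model = GradientBoostingRegressor(loss="quantile", alpha=tau, n_estimators=100)
    model.fit(X[idx], y[idx])
    predictions.append(model.predict(X))

# Aggregate the bagged quantile estimates by simple averaging.
ensemble_estimate = np.mean(predictions, axis=0)
print("fraction of y below the estimated 0.9-quantile curve:",
      float(np.mean(y <= ensemble_estimate)))
```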