• Title/Abstract/Keyword: support vector data description

Search results: 51 items (processing time: 0.022 seconds)

Support Vector Quantile Regression Using Asymmetric e-Insensitive Loss Function

  • Shim, Joo-Yong;Seok, Kyung-Ha;Hwang, Chang-Ha;Cho, Dae-Hyeon
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 18, No. 2
    • /
    • pp.165-170
    • /
    • 2011
  • Support vector quantile regression (SVQR) is capable of providing a good description of the linear and nonlinear relationships among random variables. In this paper we propose a sparse SVQR to overcome a limitation of SVQR, namely its lack of sparsity. The asymmetric e-insensitive loss function is used to efficiently induce sparsity. Experimental results are presented that illustrate the performance of the proposed method by comparing it with nonsparse SVQR.
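
As an illustration of the kind of loss the abstract describes, here is a minimal Python sketch of an asymmetric e-insensitive (check) loss; the exact form used in the paper may differ, and the parameter names `tau` (target quantile) and `eps` (insensitivity width) are this sketch's own:

```python
def asymmetric_eps_insensitive_loss(residual, tau=0.5, eps=0.1):
    """Pinball (check) loss with an eps-wide dead zone around zero.

    Residuals inside [-eps, eps] incur no loss, which is what induces
    sparsity in the fitted SVQR; outside the zone the two sides are
    weighted asymmetrically by the quantile level tau.
    """
    if residual > eps:
        return tau * (residual - eps)            # under-prediction side
    if residual < -eps:
        return (1.0 - tau) * (-residual - eps)   # over-prediction side
    return 0.0                                   # inside the insensitive zone
```

For tau = 0.5 and eps = 0 this reduces to half the absolute loss used in median regression.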

Support Vector Learning for Abnormality Detection Problems

  • 박주영;임채환
    • 한국지능시스템학회논문지
    • /
    • Vol. 13, No. 3
    • /
    • pp.266-274
    • /
    • 2003
  • This paper deals with incremental support vector learning for abnormality detection problems. One of the best-known support vector learning techniques for abnormality detection is SVDD (support vector data description), which pursues the strategy of using a ball defined on a kernel feature space to separate the set of normal data from all possible abnormal objects. The main concern of this paper is to modify SVDD so as to exploit the relationship between the optimal solution and incrementally arriving training data. After a detailed review of the conventional SVDD technique, we present an incremental procedure for finding the optimal solution, based on observations about the Lagrange dual problem. The applicability of the proposed incremental method is then illustrated through examples.
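
The ball-based idea can be illustrated, very loosely, with a toy centroid-plus-radius detector in input space. This is not the SVDD optimization itself (which finds the *minimum* enclosing ball, possibly in a kernel feature space, via a quadratic program), just a sketch of the decision rule:

```python
import math

def fit_ball(points):
    """Enclose the training points in a ball: centroid center, max-distance radius.

    A toy stand-in for SVDD; the real method minimizes the ball radius
    subject to slack constraints rather than using the centroid.
    """
    n = len(points)
    dim = len(points[0])
    center = [sum(p[i] for p in points) / n for i in range(dim)]
    radius = max(math.dist(p, center) for p in points)
    return center, radius

def is_abnormal(z, center, radius):
    """Flag a point as abnormal if it falls outside the fitted ball."""
    return math.dist(z, center) > radius
```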

One-Class Support Vector Learning and Linear Matrix Inequalities

  • Park, Jooyoung;Kim, Jinsung;Lee, Hansung;Park, Daihee
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 3, No. 1
    • /
    • pp.100-104
    • /
    • 2003
  • The SVDD (support vector data description) is one of the most well-known one-class support vector learning methods, in which one pursues the strategy of utilizing balls defined on the kernel feature space in order to distinguish a set of normal data from all other possible abnormal objects. The major concern of this paper is the problem of modifying the SVDD toward utilizing ellipsoids instead of balls in order to enable better classification performance. After a brief review of the original SVDD method, this paper establishes a new method utilizing ellipsoids in feature space, and presents a solution in the form of an SDP (semidefinite programming) problem, an optimization problem based on linear matrix inequalities.
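
The ball-versus-ellipsoid distinction can be sketched with a Mahalanobis-distance test, where the sample covariance defines an ellipsoidal boundary. This is only a toy in 2-D input space; the paper obtains its ellipsoid by solving an SDP, which is not reproduced here:

```python
def fit_ellipsoid_2d(points):
    """Fit a toy ellipsoidal boundary to 2-D points via the sample covariance.

    Mean + covariance define the Mahalanobis metric; the squared radius is
    the largest Mahalanobis distance among training points. (The SDP
    formulation in the paper optimizes the ellipsoid directly instead.)
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Sample covariance entries.
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    det = sxx * syy - sxy * sxy
    # Inverse covariance, 2x2 closed form: (a, b; b, c).
    a, b, c = syy / det, -sxy / det, sxx / det

    def maha2(p):
        dx, dy = p[0] - mx, p[1] - my
        return a * dx * dx + 2 * b * dx * dy + c * dy * dy

    radius2 = max(maha2(p) for p in points)
    return maha2, radius2
```

A point is declared abnormal when its squared Mahalanobis distance exceeds `radius2`.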

Sparse kernel classification using IRWLS procedure

  • Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 20, No. 4
    • /
    • pp.749-755
    • /
    • 2009
  • Support vector classification (SVC) provides a more complete description of the linear and nonlinear relationships between input vectors and classifiers. In this paper, we propose a sparse kernel classifier that solves the classification optimization problem with a modified hinge loss function and an absolute loss function, which provide efficient computation and sparsity. We also introduce the generalized cross validation function to select the hyperparameters that affect the classification performance of the proposed method. Experimental results are then presented which illustrate the performance of the proposed procedure for classification.
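
The generalized cross validation (GCV) criterion mentioned above can be written in a few lines; this is the generic GCV formula, not necessarily the exact variant derived in the paper, and `effective_df` (the trace of the smoother/hat matrix) is this sketch's name:

```python
def gcv_score(residuals, effective_df):
    """Generalized cross-validation score: n * RSS / (n - df)^2.

    Hyperparameters (kernel width, regularization) are chosen to minimize
    this score, which approximates leave-one-out error without refitting.
    """
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    return n * rss / (n - effective_df) ** 2
```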


e-SVR using IRWLS Procedure

  • Shim, Joo-Yong
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 16, No. 4
    • /
    • pp.1087-1094
    • /
    • 2005
  • e-insensitive support vector regression (e-SVR) is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In this paper we propose an iterative reweighted least squares (IRWLS) procedure to solve the quadratic problem of e-SVR with a modified loss function. Furthermore, we introduce the generalized approximate cross validation function to select the hyperparameters which affect the performance of e-SVR. Experimental results are then presented which illustrate the performance of the IRWLS procedure for e-SVR.
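
The IRWLS idea, repeatedly solving a weighted least-squares problem whose weights come from the previous residuals, can be sketched for a 1-D robust fit. This is a generic IRWLS illustration approximating the absolute loss, not the exact weighting the paper derives for e-SVR:

```python
def irwls_line(xs, ys, iters=50, delta=1e-6):
    """Fit y ~ w*x by IRWLS, approximating the absolute (L1) loss.

    Each step solves a weighted least-squares problem with weights
    1/max(|residual|, delta); large residuals are down-weighted, so the
    iterates approach the L1 solution. delta guards against division by
    zero once residuals become tiny.
    """
    w = 0.0
    for _ in range(iters):
        weights = [1.0 / max(abs(y - w * x), delta) for x, y in zip(xs, ys)]
        num = sum(wi * x * y for wi, x, y in zip(weights, xs, ys))
        den = sum(wi * x * x for wi, x in zip(weights, xs))
        w = num / den
    return w
```

With an outlier in the data, the fitted slope stays close to the slope of the clean points, unlike ordinary least squares.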


SVC with Modified Hinge Loss Function

  • Lee, Sang-Bock
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 17, No. 3
    • /
    • pp.905-912
    • /
    • 2006
  • Support vector classification (SVC) provides a more complete description of the linear and nonlinear relationships between input vectors and classifiers. In this paper we propose to solve the optimization problem of SVC with a modified hinge loss function, which enables the use of an iterative reweighted least squares (IRWLS) procedure. We also introduce the approximate cross validation function to select the hyperparameters which affect the performance of SVC. Experimental results are then presented which illustrate the performance of the proposed procedure for classification.
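
A smoothed hinge is the kind of modification that makes least-squares-style iterations possible. As one illustration (the paper's exact modification may differ), the squared hinge is differentiable where the plain hinge has a kink:

```python
def hinge(margin):
    """Standard SVC hinge loss on the margin y * f(x)."""
    return max(0.0, 1.0 - margin)

def squared_hinge(margin):
    """A modified, differentiable hinge: squaring removes the kink at
    margin = 1, the kind of smoothing that lets an IRWLS-style solver
    replace the usual quadratic program."""
    return max(0.0, 1.0 - margin) ** 2
```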


Sparse Kernel Regression using IRWLS Procedure

  • Park, Hye-Jung
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 18, No. 3
    • /
    • pp.735-744
    • /
    • 2007
  • The support vector machine (SVM) is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In this paper we propose a sparse kernel regression (SKR) method to overcome a weak point of the SVM, namely the steep growth in the number of support vectors as the number of training data increases. The iterative reweighted least squares (IRWLS) procedure is used to solve the optimization problem of SKR with a Laplacian prior. Furthermore, the generalized cross validation (GCV) function is introduced to select the hyperparameters which affect the performance of SKR. Experimental results are then presented which illustrate the performance of the proposed procedure.
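
A Laplacian prior corresponds to an L1 penalty on the kernel coefficients, and L1 penalties produce sparsity through soft thresholding. A minimal illustration of that operator follows; the paper reaches its sparse solution via IRWLS rather than explicit thresholding:

```python
def soft_threshold(a, lam):
    """Soft-thresholding: the proximal operator of the L1 penalty lam*|x|.

    Coefficients with |a| <= lam are set exactly to zero, which is how a
    Laplacian prior prunes kernel terms and keeps the model sparse.
    """
    if a > lam:
        return a - lam
    if a < -lam:
        return a + lam
    return 0.0
```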


Electricity Demand Forecasting based on Support Vector Regression

  • 이형로;신현정
    • 산업공학
    • /
    • Vol. 24, No. 4
    • /
    • pp.351-361
    • /
    • 2011
  • Forecasting of electricity demand has difficulty in adapting to abrupt weather changes along with radical shifts in major regional and global climates. This has led to increasing attention to research on immediate and accurate forecasting models. Technically, this implies that a model requires only a few input variables, all of which are easily obtainable, while its predictive performance remains comparable with that of competing models. To meet these ends, this paper presents an energy demand forecasting model that uses the variable selection or extraction methods of data mining to select only relevant input variables, and employs the support vector regression method for accurate prediction. It also proposes a novel performance measure for time-series prediction, the shift index, followed by a description of the preprocessing procedure. A comparative evaluation of the proposed method against other representative data mining models, such as an auto-regression model, an artificial neural network model, and an ordinary support vector regression model, was carried out to obtain the forecast of monthly electricity demand from 2000 to 2008, based on data provided by the Korea Energy Economics Institute. Among the models tested, the proposed method showed more promising results than the others.

Detection of the Change in Blogger Sentiment using Multivariate Control Charts

  • 문정훈;이성임
    • 응용통계연구
    • /
    • Vol. 26, No. 6
    • /
    • pp.903-913
    • /
    • 2013
  • With the recent development of social network services, millions of social data items expressing personal feelings and opinions are produced every day. Since anyone can now produce and consume information, for example by adding their own thoughts to others' opinions, social data is growing into a tool that reflects social phenomena well. In this study, we use multivariate control charts to detect changes in blogger sentiment by analyzing negative sentiment words posted on blogs. For this purpose, all blog posts created between January 1, 2008 and December 31, 2009 were used. When the quality characteristics are multivariate, Hotelling's $T^2$ control chart is widely used. However, this chart assumes that the quality characteristics follow a multivariate normal distribution, so its performance on nonnormal multivariate data is poor. In this paper, we therefore introduce the SVDD (support vector data description) algorithm, a one-class classification technique in support vector machines, together with the K-chart of Sun and Tsung (2003) that extends it, and apply them to a real data analysis.
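
The monitoring statistic behind a kernel (K-) control chart is, roughly, the feature-space distance from a new observation to the center of the in-control data. A simplified version with uniform weights is sketched below; the actual K-chart uses the SVDD solution's support-vector weights instead of 1/n, and `gamma` is this sketch's kernel-width parameter:

```python
import math

def rbf(u, v, gamma=1.0):
    """Gaussian RBF kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def kernel_distance2(z, data, gamma=1.0):
    """Squared feature-space distance from z to the centroid of `data`.

    Expanded via the kernel trick: k(z,z) - (2/n) sum_i k(z,x_i)
    + (1/n^2) sum_ij k(x_i,x_j). A point is signaled as out of control
    when this statistic exceeds the chart's control limit.
    """
    n = len(data)
    self_term = rbf(z, z, gamma)                        # equals 1 for the RBF kernel
    cross = 2.0 / n * sum(rbf(z, x, gamma) for x in data)
    const = sum(rbf(a, b, gamma) for a in data for b in data) / (n * n)
    return self_term - cross + const
```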

Support vector quantile regression ensemble with bagging

  • Shim, Jooyong;Hwang, Changha
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 25, No. 3
    • /
    • pp.677-684
    • /
    • 2014
  • Support vector quantile regression (SVQR) is capable of providing more complete description of the linear and nonlinear relationships among random variables. To improve the estimation performance of SVQR we propose to use SVQR ensemble with bagging (bootstrap aggregating), in which SVQRs are trained independently using the training data sets sampled randomly via a bootstrap method. Then, they are aggregated to obtain the estimator of the quantile regression function using the penalized objective function composed of check functions. Experimental results are then presented, which illustrate the performance of SVQR ensemble with bagging.
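
The bagging step itself is independent of the base learner. Below is a hedged sketch using a plain empirical quantile as a stand-in for SVQR; note the paper aggregates via a penalized check-function objective rather than the simple average shown here:

```python
import random

def empirical_quantile(sample, tau):
    """Order-statistic estimate of the tau-quantile (a stand-in for SVQR)."""
    s = sorted(sample)
    idx = min(int(tau * len(s)), len(s) - 1)
    return s[idx]

def bagged_quantile(data, tau, n_boot=200, seed=0):
    """Bagging: fit the base estimator on bootstrap resamples, then average.

    Each resample draws len(data) points with replacement; averaging the
    per-resample quantile estimates reduces their variance.
    """
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        boot = [rng.choice(data) for _ in range(len(data))]
        estimates.append(empirical_quantile(boot, tau))
    return sum(estimates) / n_boot
```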