• Title/Summary/Keyword: principal support vector machine


A concise overview of principal support vector machines and its generalization

  • Jungmin Shin; Seung Jun Shin
    • Communications for Statistical Applications and Methods, v.31 no.2, pp.235-246, 2024
  • In high-dimensional data analysis, sufficient dimension reduction (SDR) has been considered as an attractive tool for reducing the dimensionality of predictors while preserving regression information. The principal support vector machine (PSVM) (Li et al., 2011) offers a unified approach for both linear and nonlinear SDR. This article comprehensively explores a variety of SDR methods based on the PSVM, which we call principal machines (PM) for SDR. The PM achieves SDR by solving a sequence of convex optimizations akin to popular supervised learning methods, such as the support vector machine, logistic regression, and quantile regression, to name a few. This makes the PM straightforward to handle and extend in both theoretical and computational aspects, as we will see throughout this article.
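To give a rough sense of how a linear PSVM estimates the central subspace, the hedged sketch below follows the general recipe of Li et al. (2011): standardize the predictors, dichotomize the response at several cut points, fit a linear SVM for each dichotomy, and take the leading eigenvectors of the aggregated coefficient outer products. The function and parameter names (`linear_psvm`, `n_slices`, `n_directions`) are illustrative assumptions, not taken from the article.

```python
# A minimal, illustrative linear PSVM sketch (after Li et al., 2011).
# Names such as linear_psvm / n_slices / n_directions are assumptions.
import numpy as np
from sklearn.svm import LinearSVC

def linear_psvm(X, y, n_slices=5, n_directions=2, C=1.0):
    """Estimate a basis of the central subspace with a linear PSVM."""
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) Sigma^{-1/2}
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    inv_sqrt = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T
    Z = (X - mu) @ inv_sqrt

    M = np.zeros((p, p))
    cuts = np.quantile(y, np.linspace(0, 1, n_slices + 1)[1:-1])
    for c in cuts:
        labels = (y > c).astype(int)                      # dichotomize the response
        svm = LinearSVC(C=C, max_iter=10000).fit(Z, labels)  # one convex problem per cut
        w = svm.coef_.ravel()
        M += np.outer(w, w)                               # aggregate normal vectors

    # Leading eigenvectors of M span the estimated (standardized) subspace;
    # map the basis back to the original predictor scale.
    _, vecs = np.linalg.eigh(M)
    return inv_sqrt @ vecs[:, -n_directions:]

# Toy usage: y depends on X only through one linear combination.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = np.sin(X[:, 0] - X[:, 1]) + 0.1 * rng.normal(size=500)
print(linear_psvm(X, y).shape)   # (10, 2)
```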

The Development of a Fault Diagnosis Model Based on Principal Component Analysis and Support Vector Machine for a Polystyrene Reactor (주성분 분석과 서포트 벡터 머신을 이용한 폴리스티렌 중합 반응기 이상 진단 모델 개발)

  • Jeong, Yeonsu; Lee, Chang Jun
    • Korean Chemical Engineering Research, v.60 no.2, pp.223-228, 2022
  • In chemical processes, unintended faults can cause serious accidents, so proper fault diagnosis models are needed to identify their root causes. Designing such a model requires analyzing the process and its data; however, most previous research on fault diagnosis handles only data sets from benchmark processes simulated in commercial programs, which reflects how hard it is to obtain fresh data from real processes. In this study, real faulty conditions of an industrial polystyrene process are examined. In this process, a runaway reaction occurred and caused a large loss because operators were late to recognize the accident. To design a proper fault diagnosis model, we analyzed the process and the data set from the real accident. First, a mode classification model based on a support vector machine (SVM) was trained, and a principal component analysis (PCA) model for each mode was constructed under normal operating conditions. The results show that the proposed model can quickly diagnose the occurrence of a fault and is able to reduce the potential loss.
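As a rough illustration of the kind of two-stage scheme the abstract describes (an SVM that identifies the operating mode, followed by a per-mode PCA monitoring model), the sketch below flags a fault when the squared prediction error (SPE, or Q statistic) under the selected mode's PCA model exceeds a threshold learned from normal data. The class name, the percentile-based control limit, and all parameters are assumptions for illustration.

```python
# Illustrative two-stage monitor: an SVM picks the operating mode, then that
# mode's PCA model computes the squared prediction error (SPE / Q statistic).
import numpy as np
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

class ModePCAMonitor:
    def __init__(self, n_components=3, q_percentile=99.0):
        self.n_components = n_components
        self.q_percentile = q_percentile

    def fit(self, X_normal, mode_labels):
        self.mode_clf = SVC(kernel="rbf").fit(X_normal, mode_labels)
        self.models = {}
        for m in np.unique(mode_labels):
            Xm = X_normal[mode_labels == m]
            scaler = StandardScaler().fit(Xm)
            pca = PCA(n_components=self.n_components).fit(scaler.transform(Xm))
            spe = self._spe(scaler.transform(Xm), pca)
            limit = np.percentile(spe, self.q_percentile)  # empirical control limit
            self.models[m] = (scaler, pca, limit)
        return self

    @staticmethod
    def _spe(Z, pca):
        residual = Z - pca.inverse_transform(pca.transform(Z))
        return (residual ** 2).sum(axis=1)

    def is_faulty(self, x):
        mode = self.mode_clf.predict(x.reshape(1, -1))[0]
        scaler, pca, limit = self.models[mode]
        return self._spe(scaler.transform(x.reshape(1, -1)), pca)[0] > limit
```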

SVM-Guided Biplot of Observations and Variables

  • Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods, v.20 no.6, pp.491-498, 2013
  • We consider support vector machines (SVM) to predict Y with p numerical variables $X_1, \ldots, X_p$. This paper aims to build a biplot of the p explanatory variables in which the first dimension indicates the direction of the SVM classification and/or regression fit. We use the geometric scheme of kernel principal component analysis, adapted to map the n observations onto a two-dimensional projection plane of which one axis is determined a priori by an SVM model.
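A much-simplified, linear-kernel sketch of the idea is shown below (the paper itself works in the kernel PCA setting): the first biplot axis is the fitted SVM direction, the second is the leading principal component of the data orthogonal to it, and variables are drawn as arrows given by their loadings on the two axes. Function and plot details are illustrative assumptions.

```python
# Simplified linear version of an SVM-guided biplot: axis 1 is the SVM
# direction, axis 2 the leading PC of the data orthogonal to it.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler

def svm_guided_biplot(X, y, var_names):
    Z = StandardScaler().fit_transform(X)
    w = LinearSVC(C=1.0, max_iter=10000).fit(Z, y).coef_.ravel()
    a1 = w / np.linalg.norm(w)                    # first axis: SVM direction
    R = Z - np.outer(Z @ a1, a1)                  # remove that direction
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    a2 = Vt[0]                                    # second axis: leading PC of the rest

    scores = Z @ np.column_stack([a1, a2])        # observation coordinates
    loadings = np.column_stack([a1, a2])          # variable coordinates (one row each)

    plt.scatter(scores[:, 0], scores[:, 1], c=y, s=12)
    for name, (u, v) in zip(var_names, loadings):
        plt.arrow(0, 0, 3 * u, 3 * v, color="red", head_width=0.05)
        plt.text(3.2 * u, 3.2 * v, name, color="red")
    plt.xlabel("SVM direction")
    plt.ylabel("Leading PC orthogonal to SVM direction")
    plt.show()
```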

A Hybrid SVM-HMM Method for Handwritten Numeral Recognition

  • Kim, Eui-Chan; Kim, Sang-Woo
    • Institute of Control, Robotics and Systems Conference Proceedings (제어로봇시스템학회 학술대회논문집), 2003.10a, pp.1032-1035, 2003
  • The field of handwriting recognition has been researched for many years, and hybrid classifiers have been shown to increase the recognition rate compared with a single classifier. In this paper, we combine a support vector machine (SVM) and a hidden Markov model (HMM) for offline handwritten numeral recognition. To improve performance, we extract features adapted to each classifier and propose a modified SVM decision structure. The experimental results show that the proposed method achieves an improved recognition rate for handwritten numeral recognition.
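The abstract does not spell out the combination rule, so the sketch below shows only one generic way such a hybrid could be wired together: per-digit Gaussian HMMs (via the third-party hmmlearn package) score a numeral as a sequence of column features, an SVM scores the flattened image, and the normalized scores are summed. Everything here, including the fusion weight `alpha`, is an assumption for illustration and is not the authors' modified decision structure.

```python
# One generic SVM+HMM fusion sketch (not the paper's decision structure):
# per-digit HMM log-likelihoods and SVM decision values are z-scored and summed.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC

def train(train_seqs, train_flat, train_labels, n_states=5):
    hmms = {}
    for d in range(10):
        seqs = [s for s, lab in zip(train_seqs, train_labels) if lab == d]
        X = np.vstack(seqs)                      # stack column-feature sequences
        lengths = [len(s) for s in seqs]
        hmms[d] = GaussianHMM(n_components=n_states,
                              covariance_type="diag").fit(X, lengths)
    svm = SVC(decision_function_shape="ovr").fit(train_flat, train_labels)
    return hmms, svm

def predict(seq, flat, hmms, svm, alpha=0.5):
    hmm_scores = np.array([hmms[d].score(seq) for d in range(10)])
    svm_scores = svm.decision_function(flat.reshape(1, -1)).ravel()
    z = lambda s: (s - s.mean()) / (s.std() + 1e-12)   # put scores on a common scale
    return int(np.argmax(alpha * z(hmm_scores) + (1 - alpha) * z(svm_scores)))
```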


PCA-SVM Based Vehicle Color Recognition (PCA-SVM 기법을 이용한 차량의 색상 인식)

  • Park, Sun-Mi; Kim, Ku-Jin
    • The KIPS Transactions: Part B, v.15B no.4, pp.285-292, 2008
  • Color histograms have been used as feature vectors to characterize the color features of images, but their efficiency is limited because they generate high-dimensional feature vectors. In this paper, we present a method that reduces the dimension of the feature vectors by applying PCA (principal component analysis) to the color histogram of a given vehicle image. The dimension-reduced feature vectors are then used to recognize vehicle colors with an SVM (support vector machine). After reducing the dimension of the feature vector by a factor of 32, the recognition rate drops by only 1.42% compared to using the original feature vectors, while the computation time for color recognition is reduced by a factor of 31, so colors can be recognized efficiently.
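The described pipeline (color histogram, then PCA dimension reduction, then SVM classification) maps directly onto standard library calls. A hedged sketch is below, assuming 8x8x8 RGB histograms (512 bins) reduced to 16 components; the histogram size, component count, and helper names are illustrative, not the paper's settings.

```python
# Illustrative color-recognition pipeline: RGB histogram -> PCA -> SVM.
# Histogram size (8x8x8 = 512 bins) and n_components=16 are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def color_histogram(image, bins=8):
    """image: (H, W, 3) uint8 array -> flattened, normalized 3D color histogram."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def train_color_recognizer(images, color_labels):
    X = np.array([color_histogram(img) for img in images])   # 512-dim features
    model = make_pipeline(PCA(n_components=16), SVC(kernel="rbf"))
    return model.fit(X, color_labels)

# Usage sketch:
# recognizer = train_color_recognizer(train_images, train_colors)
# recognizer.predict([color_histogram(test_image)])
```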

A Classification Method Using Data Reduction

  • Uhm, Daiho; Jun, Sung-Hae; Lee, Seung-Joo
    • International Journal of Fuzzy Logic and Intelligent Systems, v.12 no.1, pp.1-5, 2012
  • Data reduction has been used widely in data mining to make analysis convenient. Principal component analysis (PCA) and factor analysis (FA) are popular techniques: both reduce the number of variables to avoid the curse of dimensionality, in which computing time grows exponentially with the number of variables, and many other methods have been published for dimension reduction. Dimension augmentation is another approach to analyzing data efficiently; the support vector machine (SVM) is a representative technique, mapping the original data to a high-dimensional feature space to obtain the optimal decision plane. Both reduction and augmentation have been used to solve diverse problems in data analysis. In this paper, we compare the strengths and weaknesses of dimension reduction and augmentation for classification and propose a classification method using data reduction. We carry out comparative experiments to verify the performance of the proposed method.
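To make the reduction-versus-augmentation contrast concrete, a hedged comparison sketch follows: one pipeline reduces dimension with PCA before a simple classifier, while the other lets an RBF-kernel SVM implicitly map the raw inputs into a higher-dimensional feature space. The dataset, classifier choice, and component count are placeholders, not the paper's experimental setup.

```python
# Contrast sketch: dimension reduction (PCA + classifier) vs. implicit
# dimension augmentation (kernel SVM). Component count is a placeholder.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

reduction = make_pipeline(StandardScaler(), PCA(n_components=20),
                          LogisticRegression(max_iter=1000))
augmentation = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

for name, model in [("PCA + logistic (reduction)", reduction),
                    ("RBF-SVM (augmentation)", augmentation)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```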

A Study on Target Recognition with SAR Image using Support Vector Machine based on Principal Component Analysis (PCA 기반의 SVM을 이용한 SAR 이미지의 표적 인식에 관한 연구)

  • Jang, Hayoung; Lee, Yillbyung
    • Proceedings of the Korea Information Processing Society Conference, 2011.11a, pp.434-437, 2011
  • With the goal of automating next-generation intelligent weapon systems, various methods have been proposed to improve target recognition rates using SAR (Synthetic Aperture Radar) image signals. Previous studies used the high-dimensional features of SAR images as-is, which degraded target recognition performance. This study proposes a method for improving the target recognition rate using SAR images, which combine the characteristics of radar, namely a long information-acquisition range and all-weather operation unconstrained by weather conditions, with high-resolution imagery. For effective target recognition, we used a recognition technique based on an SVM (Support Vector Machine) with PCA (Principal Component Analysis), which reduces high-dimensional feature vectors to low-dimensional ones, and confirmed that target recognition with the PCA-based SVM classifier outperforms recognition using the SVM alone.

Combining Radar and Rain Gauge Observations Utilizing Gaussian-Process-Based Regression and Support Vector Learning (가우시안 프로세스 기반 함수근사와 서포트 벡터 학습을 이용한 레이더 및 강우계 관측 데이터의 융합)

  • Yoo, Chul-Sang; Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems, v.18 no.3, pp.297-305, 2008
  • Recently, kernel methods have attracted great interest in the areas of pattern classification, function approximation, and anomaly detection. The kernel plays a particularly important role in methods such as the SVM (support vector machine) and KPCA (kernel principal component analysis) because it generalizes conventional linear machines so that they can handle nonlinearities efficiently. This paper considers the problem of combining radar and rain gauge observations using a regression approach based on kernel-based Gaussian processes and support vector learning. Data-assimilation results of the considered methods are reported for radar and rain gauge observations collected over a region covering parts of the Gangwon, Kyungbuk, and Chungbuk provinces of Korea, along with a performance comparison.
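A hedged sketch of the regression step is below: the radar-derived rainfall at gauge locations (plus location coordinates) serves as the input and the gauge measurement as the target, fitted once with Gaussian process regression and once with support vector regression; the merged field is then the fitted model's prediction on the radar grid. The kernels, parameters, and input layout are placeholders, not the paper's settings.

```python
# Sketch of merging radar and gauge rainfall: regress gauge values on
# radar-derived values plus location, with GP regression and SVR.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.svm import SVR

def fit_merging_models(radar_at_gauges, gauge_xy, gauge_rain):
    """Inputs: radar estimate and (x, y) at each gauge; target: gauge rainfall."""
    X = np.column_stack([radar_at_gauges, gauge_xy])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(),
                                  normalize_y=True).fit(X, gauge_rain)
    svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, gauge_rain)
    return gp, svr

def merged_field(model, radar_grid_values, grid_xy):
    """Predict the combined rainfall field on the radar grid."""
    return model.predict(np.column_stack([radar_grid_values, grid_xy]))
```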

Early warning of hazard for pipelines by acoustic recognition using principal component analysis and one-class support vector machines

  • Wan, Chunfeng; Mita, Akira
    • Smart Structures and Systems, v.6 no.4, pp.405-421, 2010
  • This paper proposes a method for the early warning of hazards to pipelines. Many pipelines transport dangerous contents, so any damage can lead to catastrophic consequences, and most such damage results from surrounding third-party activities, mainly construction. To prevent accidents and disasters, detecting potential hazards from third-party activities is indispensable. This paper focuses on recognizing running construction machines, since they indicate construction activity. Acoustic information is used for the recognition, and a novel pipeline monitoring approach is proposed: principal component analysis (PCA) is applied, the obtained eigenvalues are regarded as a signature and used to build feature vectors, and a one-class support vector machine (SVM) is used as the classifier. The denoising ability of PCA makes the method robust to noise interference, while the classifying power of the SVM provides good recognition results. Related issues such as standardization are also studied and discussed. On-site experiments were conducted, and the results prove the effectiveness of the proposed early warning method, so possible hazards can be prevented and the integrity of pipelines ensured.
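One way a detector of this kind could be organized is sketched below: frame-level magnitude spectra are projected onto principal components, and a one-class SVM trained on those projections of background recordings flags frames that look abnormal (i.e., construction-machine sounds). The frame length, component count, nu, and the alarm rule are illustrative assumptions, not the paper's signature construction.

```python
# Sketch of acoustic early warning: spectral frames -> PCA features ->
# one-class SVM trained on background noise; abnormal frames trigger an alarm.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

def frame_spectra(signal, frame_len=1024):
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    return np.abs(np.fft.rfft(frames, axis=1))        # magnitude spectrum per frame

def fit_detector(background_signal, n_components=10, nu=0.05):
    S = frame_spectra(background_signal)
    scaler = StandardScaler().fit(S)
    pca = PCA(n_components=n_components).fit(scaler.transform(S))
    Z = pca.transform(scaler.transform(S))
    ocsvm = OneClassSVM(kernel="rbf", nu=nu).fit(Z)
    return scaler, pca, ocsvm

def alarm(signal, scaler, pca, ocsvm):
    Z = pca.transform(scaler.transform(frame_spectra(signal)))
    return (ocsvm.predict(Z) == -1).mean() > 0.5       # most frames look abnormal
```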

LS-SVM for large data sets

  • Park, Hongrak; Hwang, Hyungtae; Kim, Byungju
    • Journal of the Korean Data and Information Science Society, v.27 no.2, pp.549-557, 2016
  • In this paper we propose a multiclassification method for large data sets that ensembles least squares support vector machines (LS-SVM) using principal components instead of raw input vectors. We use a revised one-vs-all scheme for multiclassification, a voting scheme based on combining several binary classifiers. The revised one-vs-all method is performed using the hat matrix of the LS-SVM ensemble, which is obtained by ensembling LS-SVMs trained on random samples from the whole large training data set. The leave-one-out cross-validation (CV) function is used to choose the hyperparameters that affect the performance of the multiclass LS-SVM ensemble, and we present a generalized cross-validation function to reduce the computational burden of the leave-one-out CV function. Experimental results on real data sets illustrate the performance of the proposed multiclass LS-SVM ensemble.
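For readers unfamiliar with LS-SVM, the sketch below solves the standard LS-SVM linear system for each one-vs-all subproblem on a random subsample of principal-component scores and averages the resulting decision values over the ensemble members. This is a plain averaging stand-in, not the paper's hat-matrix-based revised one-vs-all or its generalized CV tuning; the regularization parameter `gamma`, the RBF width, and the sample sizes are placeholders.

```python
# Minimal LS-SVM ensemble sketch on principal-component scores.
# Plain decision-value averaging; not the paper's hat-matrix-based scheme.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import rbf_kernel

def lssvm_fit(K, y, gamma=1.0):
    """Solve the LS-SVM system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                        # b, alpha

def fit_ensemble(X, labels, n_members=10, sample_size=500, n_pc=20, width=1.0):
    labels = np.asarray(labels)
    pca = PCA(n_components=n_pc).fit(X)
    Z, classes = pca.transform(X), np.unique(labels)
    rng = np.random.default_rng(0)
    members = []
    for _ in range(n_members):
        idx = rng.choice(len(Z), size=min(sample_size, len(Z)), replace=False)
        Zs, K = Z[idx], rbf_kernel(Z[idx], Z[idx], gamma=width)
        models = {c: lssvm_fit(K, np.where(labels[idx] == c, 1.0, -1.0))
                  for c in classes}               # one-vs-all LS-SVMs
        members.append((Zs, models))
    return pca, classes, members

def predict(X_new, pca, classes, members, width=1.0):
    Zn = pca.transform(X_new)
    votes = np.zeros((len(Zn), len(classes)))
    for Zs, models in members:
        Kn = rbf_kernel(Zn, Zs, gamma=width)
        for j, c in enumerate(classes):
            b, alpha = models[c]
            votes[:, j] += Kn @ alpha + b          # accumulate decision values
    return classes[np.argmax(votes, axis=1)]
```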