• Title/Summary/Keyword: kernel trick

Claims Reserving via Kernel Machine

  • Kim, Mal-Suk; Park, He-Jung; Hwang, Chang-Ha; Shim, Joo-Yong
    • Journal of the Korean Data and Information Science Society / v.19 no.4 / pp.1419-1427 / 2008
  • This paper presents a kernel Poisson regression that can be applied to claims reserving, where the row effect is assumed to be a nonlinear function of the row index. The paper concentrates on the chain-ladder technique within the framework of the chain-ladder linear model. It is shown that the proposed method can provide better reserve estimates than the Poisson model. The cross validation function is introduced to choose optimal hyper-parameters in the procedure. Experimental results are then presented which indicate the performance of the proposed model.
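As a rough illustration of the kernel Poisson regression idea (a generic sketch, not the authors' reserving procedure), the canonical log-mean parameter can be modeled as f = Kα for an RBF Gram matrix K and fitted by gradient ascent on a penalized Poisson log-likelihood. The data, bandwidth, and penalty below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 40)[:, None]                 # e.g. a scaled row index
y = rng.poisson(np.exp(np.sin(2.0 * x[:, 0]) + 0.5))   # synthetic counts

# RBF Gram matrix; the canonical (log-mean) parameter is f = K @ alpha.
K = np.exp(-2.0 * (x - x.T) ** 2)
alpha = np.zeros(len(x))
lam = 1e-2   # ridge penalty on the RKHS norm

# Gradient ascent on the penalized Poisson log-likelihood
#   l(alpha) = sum_i (y_i f_i - exp(f_i)) - (lam / 2) alpha' K alpha.
for _ in range(3000):
    f = K @ alpha
    grad = K @ (y - np.exp(f)) - lam * (K @ alpha)
    alpha += 1e-4 * grad

mu = np.exp(K @ alpha)   # fitted mean counts, one per row index
print(mu.shape)          # (40,)
```

The small step size keeps the ascent stable; in practice a Newton/IRLS update and a tuned bandwidth (for example via the cross validation function mentioned above) would be used instead.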

A selective review of nonlinear sufficient dimension reduction

  • Sehun Jang; Jun Song
    • Communications for Statistical Applications and Methods / v.31 no.2 / pp.247-262 / 2024
  • In this paper, we explore nonlinear sufficient dimension reduction (SDR) methods, with a primary focus on establishing a foundational framework that integrates various nonlinear SDR methods. We illustrate the generalized sliced inverse regression (GSIR) and the generalized sliced average variance estimation (GSAVE), which fit within this framework. Further, we delve into nonlinear extensions of inverse moments through the kernel trick, specifically examining kernel sliced inverse regression (KSIR) and kernel canonical correlation analysis (KCCA), and explore their relationships within the established framework. We also briefly explain nonlinear SDR for functional data. In addition, we present practical aspects such as algorithmic implementations. The paper concludes with remarks on the dimensionality problem of the target function class.
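The kernel trick these extensions rely on can be verified directly: a non-homogeneous polynomial kernel evaluation equals an ordinary inner product after an explicit (and much higher-dimensional) feature map, so methods like KSIR and KCCA never need the map itself. A minimal numpy check for the 2-D, degree-2 case:

```python
import numpy as np

def poly_kernel(x, y, c=1.0, d=2):
    """Non-homogeneous polynomial kernel k(x, y) = (x . y + c)^d."""
    return (x @ y + c) ** d

def phi(x, c=1.0):
    """Explicit degree-2 feature map for 2-D input, matching (x . y + c)^2."""
    x1, x2 = x
    return np.array([
        x1 ** 2, x2 ** 2,
        np.sqrt(2.0) * x1 * x2,
        np.sqrt(2.0 * c) * x1, np.sqrt(2.0 * c) * x2,
        c,
    ])

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])

# The kernel value equals the inner product in the lifted 6-D feature space.
assert np.isclose(poly_kernel(x, y), phi(x) @ phi(y))  # both are 0.25
```

For an RBF kernel the implicit feature space is infinite-dimensional, which is exactly why the kernel-matrix formulation is the practical route.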

Semisupervised support vector quantile regression

  • Seok, Kyungha
    • Journal of the Korean Data and Information Science Society / v.26 no.2 / pp.517-524 / 2015
  • Unlabeled examples are easier and less expensive to obtain than labeled examples. In this paper a semisupervised approach is used to utilize such examples in an effort to enhance the predictive performance of nonlinear quantile regression. We propose a semisupervised quantile regression method named semisupervised support vector quantile regression (S2SVQR), which is based on the support vector machine. A generalized approximate cross validation method is used to choose the hyper-parameters that affect the performance of the estimator. The experimental results confirm the successful performance of the proposed S2SVQR.
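Quantile regression methods such as support vector quantile regression minimize the pinball (check) loss rather than squared error. A small numpy sketch of the underlying property, with synthetic data invented for the example: over constant predictors, the empirical pinball loss is minimized at the empirical tau-quantile, which is what a kernel-expanded predictor generalizes pointwise.

```python
import numpy as np

def pinball(res, tau):
    """Check (pinball) loss: tau*res for res >= 0, (tau - 1)*res otherwise."""
    return np.where(res >= 0, tau * res, (tau - 1) * res)

rng = np.random.default_rng(0)
y = rng.normal(size=2000)   # synthetic responses
tau = 0.9

# Scan constant predictors c: the mean pinball loss is minimized at the
# empirical tau-quantile of y.
grid = np.linspace(-3.0, 3.0, 601)
losses = [pinball(y - c, tau).mean() for c in grid]
best = grid[int(np.argmin(losses))]
print(best, np.quantile(y, tau))   # the two values agree closely
```

Replacing the constant by f(x) = Σ_i α_i k(x_i, x) and adding an RKHS penalty turns this scan into the kernel quantile regression problem the paper addresses.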

Kernel Poisson Regression for Longitudinal Data

  • Shim, Joo-Yong; Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / v.19 no.4 / pp.1353-1360 / 2008
  • An estimating procedure is introduced for nonlinear mixed-effect Poisson regression in longitudinal studies, where data from different subjects are independent whereas data from the same subject are correlated. The proposed procedure provides estimates of the mean function of the response variables, where the canonical parameter is related to the input vector in a nonlinear form. The generalized cross validation function is introduced to choose optimal hyper-parameters in the procedure. Experimental results are then presented which indicate the performance of the proposed estimating procedure.

Study of Nonlinear Feature Extraction for Faults Diagnosis of Rotating Machinery (회전기계의 결함진단을 위한 비선형 특징 추출 방법의 연구)

  • Widodo, Achmad; Yang, Bo-Suk
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2005.11a / pp.127-130 / 2005
  • Many feature extraction methods have been developed. Recently, principal component analysis (PCA) and independent component analysis (ICA) have been introduced for feature extraction. PCA and ICA linearly transform the original input into new uncorrelated and independent feature spaces, respectively. In this paper, the feasibility of using nonlinear feature extraction is studied. This method employs the PCA and ICA procedures and adopts the kernel trick to nonlinearly map the data into a feature space. The goal of this study is to find features that are effective for fault classification.
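The kernel-PCA half of this approach can be sketched in plain numpy (a generic illustration, not the authors' implementation): form an RBF Gram matrix, double-center it, and read the nonlinear projections off its top eigenvectors. The data and bandwidth below are arbitrary:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: eigendecompose the centered Gram matrix."""
    # Pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)                      # RBF Gram matrix
    # Double-center K: PCA needs centered features in the (implicit) kernel space.
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Top eigenvectors of the centered Gram matrix give the projections.
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # stand-in for extracted vibration features
Z = kernel_pca(X, n_components=2)
print(Z.shape)   # (50, 2)
```

Kernel ICA follows the same pattern of replacing inner products in the linear algorithm with kernel evaluations; only the objective beyond decorrelation changes.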

A Non-linear Variant of Improved Robust Fuzzy PCA (잡음 민감성이 향상된 주성분 분석 기법의 비선형 변형)

  • Heo, Gyeong-Yong; Seo, Jin-Seok; Lee, Im-Geun
    • Journal of the Korea Society of Computer and Information / v.16 no.4 / pp.15-22 / 2011
  • Principal component analysis (PCA) is a well-known method for dimensionality reduction and feature extraction that maintains most of the variation in data. Although PCA has been applied successfully in many areas, it is sensitive to outliers and only valid for Gaussian distributions. Several variants of PCA have been proposed to resolve the noise sensitivity and, among these variants, improved robust fuzzy PCA (RF-PCA2) demonstrated promising results. RF-PCA2, however, is still a linear algorithm that cannot accommodate non-Gaussian distributions. In this paper, a non-linear algorithm that combines RF-PCA2 and kernel PCA (K-PCA), called improved robust kernel fuzzy PCA (RKF-PCA2), is introduced. The kernel method enables it to accommodate non-Gaussian distributions. RKF-PCA2 inherits noise robustness from RF-PCA2 and non-linearity from K-PCA, and outperforms previous methods in handling non-Gaussian distributions in a noise-robust way. Experimental results also support this.

Nonlinear Feature Extraction using Class-augmented Kernel PCA (클래스가 부가된 커널 주성분분석을 이용한 비선형 특징추출)

  • Park, Myoung-Soo; Oh, Sang-Rok
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.5 / pp.7-12 / 2011
  • In this paper, we propose a new feature extraction method, named Class-augmented Kernel Principal Component Analysis (CA-KPCA), which can extract nonlinear features for classification. Among the subspace methods widely used for feature extraction, Class-augmented Principal Component Analysis (CA-PCA) is a recent one that can extract features for accurate classification without the computational difficulties of other methods such as Linear Discriminant Analysis (LDA). However, the features extracted by CA-PCA are still restricted to a linear subspace of the original data space, which limits the use of this method for various problems requiring nonlinear features. To resolve this limitation, we apply the kernel trick to develop a new version of CA-PCA that extracts nonlinear features, and evaluate its performance through experiments using data sets from the UCI Machine Learning Repository.

A transductive least squares support vector machine with the difference convex algorithm

  • Shim, Jooyong; Seok, Kyungha
    • Journal of the Korean Data and Information Science Society / v.25 no.2 / pp.455-464 / 2014
  • Unlabeled examples are easier and less expensive to obtain than labeled examples. Semisupervised approaches are used to utilize such examples in an effort to boost the predictive performance. This paper proposes a novel semisupervised classification method named transductive least squares support vector machine (TLS-SVM), which is based on the least squares support vector machine. The proposed method utilizes the difference convex algorithm to derive nonconvex minimization solutions for the TLS-SVM. A generalized cross validation method is also developed to choose the hyperparameters that affect the performance of the TLS-SVM. The experimental results confirm the successful performance of the proposed TLS-SVM.

Homogeneous and Non-homogeneous Polynomial Based Eigenspaces to Extract the Features on Facial Images

  • Muntasa, Arif
    • Journal of Information Processing Systems / v.12 no.4 / pp.591-611 / 2016
  • High-dimensional spaces are the biggest problem in classification, because computation takes longer and the costs involved are therefore higher. In this research, facial spaces generated from homogeneous and non-homogeneous polynomials are proposed to extract facial image features. The homogeneous and non-homogeneous polynomial-based eigenspaces provide an alternative appearance-based feature extraction method for handling non-linear features. The kernel trick is used to complete the matrix computation on the homogeneous and non-homogeneous polynomials. The weights and projections of the new feature space of the proposed method were evaluated using three face image databases, i.e., the YALE, the ORL, and the UoB. The experimental results produced the highest recognition rates of 94.44%, 97.5%, and 94% for YALE, ORL, and UoB, respectively. These results show that the proposed method yields higher recognition rates than other methods, such as Eigenface, Fisherface, Laplacianfaces, and O-Laplacianfaces.
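The relationship between the two kernels is easy to check numerically: by the binomial expansion, a non-homogeneous polynomial Gram matrix is a weighted sum of homogeneous Gram matrices of lower degree. A small numpy sketch with data invented for the example:

```python
import numpy as np

def homogeneous(X, d=2):
    """Homogeneous polynomial kernel: K_ij = (x_i . x_j)^d."""
    return (X @ X.T) ** d

def non_homogeneous(X, d=2, c=1.0):
    """Non-homogeneous polynomial kernel: K_ij = (x_i . x_j + c)^d."""
    return (X @ X.T + c) ** d

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 5))   # stand-in for 10 vectorized face images

# Binomial expansion: (s + 1)^2 = s^2 + 2s + 1, so the non-homogeneous Gram
# matrix of degree 2 combines the homogeneous degrees 2, 1, and 0.
S = X @ X.T
lhs = non_homogeneous(X, d=2, c=1.0)
rhs = S ** 2 + 2.0 * S + 1.0
assert np.allclose(lhs, rhs)
```

This is why the non-homogeneous variant captures lower-order interactions that the purely homogeneous kernel discards, at no extra cost in the Gram-matrix computation.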

The use of support vector machines in semi-supervised classification

  • Bae, Hyunjoo; Kim, Hyungwoo; Shin, Seung Jun
    • Communications for Statistical Applications and Methods / v.29 no.2 / pp.193-202 / 2022
  • Semi-supervised learning has gained significant attention in recent applications. In this article, we provide a selective overview of popular semi-supervised methods and then propose a simple but effective algorithm for semi-supervised classification using support vector machines (SVM), one of the most popular binary classifiers in the machine learning community. The idea is as follows. First, we apply dimension reduction to the unlabeled observations and cluster them to assign labels in the reduced space. An SVM is then fitted to the combined set of labeled and unlabeled observations to construct a classification rule. The use of SVM enables us to extend the method to its nonlinear counterpart via the kernel trick. Our numerical experiments under various scenarios demonstrate that the proposed method is promising in semi-supervised classification.
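The cluster-then-classify steps described above can be sketched in plain numpy on synthetic blob data. A kernel perceptron stands in for the SVM step so the sketch stays self-contained (the paper itself uses an SVM); all data and parameters are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian blobs: few labeled points, many unlabeled.
X_lab = np.vstack([rng.normal(-2, 0.5, (5, 2)), rng.normal(2, 0.5, (5, 2))])
y_lab = np.array([-1] * 5 + [1] * 5)
X_unl = np.vstack([rng.normal(-2, 0.5, (45, 2)), rng.normal(2, 0.5, (45, 2))])

# Step 1: cluster the unlabeled points (2-means, Lloyd's algorithm,
# initialized at the leftmost and rightmost points).
centers = np.array([X_unl[np.argmin(X_unl[:, 0])], X_unl[np.argmax(X_unl[:, 0])]])
for _ in range(20):
    assign = np.argmin(((X_unl[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X_unl[assign == k].mean(axis=0) for k in (0, 1)])

# Step 2: label each cluster by the nearest labeled class mean.
class_means = {c: X_lab[y_lab == c].mean(axis=0) for c in (-1, 1)}
cluster_label = [min(class_means,
                     key=lambda c: np.linalg.norm(centers[k] - class_means[c]))
                 for k in (0, 1)]
y_unl = np.array([cluster_label[a] for a in assign])

# Step 3: fit a kernel classifier on labeled + pseudo-labeled points
# (a kernel perceptron here, standing in for the SVM of the paper).
X_all = np.vstack([X_lab, X_unl])
y_all = np.concatenate([y_lab, y_unl])
K = np.exp(-0.5 * ((X_all[:, None] - X_all[None]) ** 2).sum(-1))  # RBF Gram
alpha = np.zeros(len(X_all))
for _ in range(10):                       # perceptron epochs
    for i in range(len(X_all)):
        if np.sign(K[i] @ (alpha * y_all)) != y_all[i]:
            alpha[i] += 1.0

accuracy = np.mean(np.sign(K @ (alpha * y_all)) == y_all)
print(accuracy)
```

Swapping the perceptron update for a hinge-loss solver recovers the SVM variant, and the same Gram matrix serves both.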