• Title/Abstract/Keyword: kernel trick

Search results: 22

Claims Reserving via Kernel Machine

  • Kim, Mal-Suk;Park, He-Jung;Hwang, Chang-Ha;Shim, Joo-Yong
    • Journal of the Korean Data and Information Science Society / Vol. 19, No. 4 / pp.1419-1427 / 2008
  • This paper presents a kernel Poisson regression that can be applied to claims reserving, where the row effect is assumed to be a nonlinear function of the row index. The paper concentrates on the chain-ladder technique within the framework of the chain-ladder linear model. It is shown that the proposed method can provide better reserve estimates than the Poisson model. A cross validation function is introduced to choose optimal hyper-parameters in the procedure, and experimental results are presented which indicate the performance of the proposed model.
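As a rough, illustrative sketch of the idea (not the authors' estimator), the snippet below approximates a kernel Poisson regression on a hypothetical incremental-claims run-off triangle by combining a Gaussian-kernel feature map with a Poisson GLM in scikit-learn. The toy triangle, kernel, and hyper-parameter values are invented for the example; in practice they would be chosen by a cross validation function such as the one the paper describes.

```python
# Illustrative sketch only: approximate kernel Poisson regression for claims
# reserving via a Nystroem (Gaussian-kernel) feature map plus a Poisson GLM.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import PoissonRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 6
# toy run-off triangle: rows = accident years, columns = development years
rows, cols = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
mask = rows + cols < n                                   # observed upper triangle
X = np.column_stack([rows[mask], cols[mask]]).astype(float)
mu = np.exp(1.5 + 0.3 * X[:, 0] - 0.4 * X[:, 1])         # smooth row/column effects
y = rng.poisson(mu)                                      # incremental claim counts

model = make_pipeline(
    Nystroem(gamma=0.5, n_components=15, random_state=0),   # kernel feature map
    PoissonRegressor(alpha=1e-2, max_iter=500),              # Poisson GLM on those features
)
model.fit(X, y)

# predict the unobserved lower triangle and sum to get an outstanding reserve
X_future = np.column_stack([rows[~mask], cols[~mask]]).astype(float)
reserve = model.predict(X_future).sum()
print(f"estimated outstanding reserve: {reserve:.1f}")
```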

A selective review of nonlinear sufficient dimension reduction

  • Sehun Jang;Jun Song
    • Communications for Statistical Applications and Methods / Vol. 31, No. 2 / pp.247-262 / 2024
  • In this paper, we explore nonlinear sufficient dimension reduction (SDR) methods, with a primary focus on establishing a foundational framework that integrates various nonlinear SDR methods. We illustrate generalized sliced inverse regression (GSIR) and generalized sliced average variance estimation (GSAVE), which fit within this framework. Further, we delve into nonlinear extensions of inverse moments through the kernel trick, specifically examining kernel sliced inverse regression (KSIR) and kernel canonical correlation analysis (KCCA), and explore their relationships within the established framework. We also briefly explain nonlinear SDR for functional data. In addition, we present practical aspects such as algorithmic implementations. The paper concludes with remarks on the dimensionality problem of the target function class.
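The sketch below illustrates the kernel-trick idea behind KSIR under a common simplification: run classical sliced inverse regression on kernel-PCA coordinates used as an explicit (approximate) feature map. It assumes NumPy and scikit-learn with toy data, and it is not the exact GSIR/KSIR estimators reviewed in the paper.

```python
# Rough KSIR-style sketch: classical SIR applied to kernel-PCA coordinates.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)        # nonlinear single-index model

# explicit coordinates in the kernel feature space via kernel PCA
Z = KernelPCA(n_components=20, kernel="rbf", gamma=0.2).fit_transform(X)
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)              # whiten (components are uncorrelated)

# classical SIR on the kernel coordinates: slice y and average Z within slices
n_slices = 10
order = np.argsort(y)
M = np.zeros((Z.shape[1], Z.shape[1]))
for idx in np.array_split(order, n_slices):
    m = Z[idx].mean(axis=0)
    M += (len(idx) / n) * np.outer(m, m)              # between-slice covariance

# leading eigenvectors of M give the estimated sufficient predictors
eigval, eigvec = np.linalg.eigh(M)
directions = eigvec[:, ::-1][:, :2]                   # top-2 directions
sufficient_predictors = Z @ directions
print(sufficient_predictors.shape)                    # (300, 2)
```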

Semisupervised support vector quantile regression

  • Seok, Kyungha
    • Journal of the Korean Data and Information Science Society / Vol. 26, No. 2 / pp.517-524 / 2015
  • Unlabeled examples are easier and less expensive to obtain than labeled examples. In this paper a semisupervised approach is used to utilize such examples in an effort to enhance the predictive performance of nonlinear quantile regression. We propose a semisupervised quantile regression method named semisupervised support vector quantile regression (S2SVQR), which is based on the support vector machine. A generalized approximate cross validation method is used to choose the hyper-parameters that affect the performance of the estimator. The experimental results confirm the successful performance of the proposed S2SVQR.
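A minimal sketch of the supervised backbone only: kernel quantile regression via the pinball (check) loss on an approximate kernel feature map. The semisupervised use of unlabeled data in S2SVQR is not reproduced here; the sketch assumes scikit-learn >= 1.0 for QuantileRegressor, and the toy data and hyper-parameters are invented for illustration.

```python
# Kernel quantile regression sketch: pinball loss on Gaussian-kernel features.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import QuantileRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3 + 0.2 * np.abs(X[:, 0]))  # heteroscedastic noise

# 0.9-quantile regression on an approximate kernel feature map
q90 = make_pipeline(
    Nystroem(gamma=1.0, n_components=50, random_state=0),
    QuantileRegressor(quantile=0.9, alpha=1e-3, solver="highs"),
)
q90.fit(X, y)

print(q90.predict(np.array([[0.0], [2.0]])))   # estimated conditional 0.9-quantiles
```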

Kernel Poisson Regression for Longitudinal Data

  • Shim, Joo-Yong;Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / Vol. 19, No. 4 / pp.1353-1360 / 2008
  • An estimating procedure is introduced for nonlinear mixed-effect Poisson regression in longitudinal studies, where data from different subjects are independent whereas data from the same subject are correlated. The proposed procedure provides estimates of the mean function of the response variables, where the canonical parameter is related to the input vector in a nonlinear form. The generalized cross validation function is introduced to choose optimal hyper-parameters in the procedure. Experimental results are then presented which indicate the performance of the proposed estimating procedure.

Study of Nonlinear Feature Extraction for Faults Diagnosis of Rotating Machinery

  • ;양보석
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / Proceedings of the KSNVE 2005 Autumn Annual Conference / pp.127-130 / 2005
  • Many feature extraction methods have been developed. Recently, principal component analysis (PCA) and independent component analysis (ICA) have been introduced for feature extraction. PCA and ICA linearly transform the original input into new uncorrelated and independent feature spaces, respectively. In this paper, the feasibility of nonlinear feature extraction is studied. The method employs the PCA and ICA procedures and adopts the kernel trick to nonlinearly map the data into a feature space. The goal of this study is to find features that are effective for fault classification.
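A hedged sketch of the general recipe (not the paper's data or procedure): compute a few time-domain descriptors from synthetic stand-in "vibration" signals, then compare a linear ICA transform with a kernel-trick transform via scikit-learn's KernelPCA.

```python
# Toy fault-feature extraction: linear ICA vs. kernel PCA on signal descriptors.
import numpy as np
from sklearn.decomposition import FastICA, KernelPCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

def toy_signal(fault):
    """Synthetic stand-in for a vibration signal; real fault data is not reproduced."""
    t = np.linspace(0, 1, 1024)
    s = np.sin(2 * np.pi * 30 * t) + 0.3 * rng.normal(size=t.size)
    if fault:
        s += 0.8 * np.sin(2 * np.pi * 120 * t) ** 3   # harmonic distortion for "faulty" machines
    return s

signals = [toy_signal(i % 2 == 1) for i in range(60)]
# simple time-domain descriptors: RMS, kurtosis, peak value
feats = np.array([[np.sqrt(np.mean(s ** 2)),
                   np.mean(((s - s.mean()) / s.std()) ** 4),
                   np.max(np.abs(s))] for s in signals])
feats = StandardScaler().fit_transform(feats)

# linear ICA versus a kernel-trick (nonlinear) transform on the same descriptors
ica_feats = FastICA(n_components=2, random_state=0).fit_transform(feats)
kpca_feats = KernelPCA(n_components=2, kernel="rbf", gamma=1.0).fit_transform(feats)
print(ica_feats.shape, kpca_feats.shape)
```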

A Non-linear Variant of Improved Robust Fuzzy PCA

  • 허경용;서진석;이임건
    • Journal of the Korea Society of Computer and Information / Vol. 16, No. 4 / pp.15-22 / 2011
  • Principal component analysis (PCA) is a technique that reduces the dimensionality of data while preserving as much of the data variance as possible, and it is widely used for dimensionality reduction and feature extraction. However, PCA is sensitive to noise and is effective only for Gaussian distributions. Various methods have been proposed to reduce this noise sensitivity; among them, RF-PCA2, an iterative optimization method based on fuzzy memberships, has shown better performance than other methods. RF-PCA2, however, is a linear algorithm that is still limited to Gaussian distributions. In this paper, we propose improved robust kernel fuzzy PCA (RKF-PCA2), a nonlinear algorithm that combines RF-PCA2 with kernel principal component analysis (kernel PCA, K-PCA) and can therefore handle non-Gaussian distributions. Through the noise robustness of RF-PCA2 and the nonlinearity of K-PCA, RKF-PCA2 is less sensitive to noise than existing algorithms and effectively overcomes the Gaussian-distribution limitation. Experimental results confirm this.
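The toy example below only demonstrates the K-PCA half of the combination, i.e., why a kernel method is needed for non-Gaussian, nonlinearly structured data; the fuzzy-membership reweighting of RF-PCA2 is not reproduced.

```python
# Linear PCA vs. RBF kernel PCA on two concentric rings (non-Gaussian structure).
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# two concentric rings: strongly non-Gaussian, nonlinearly structured data
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

lin = PCA(n_components=1).fit_transform(X)
non = KernelPCA(n_components=2, kernel="rbf", gamma=10.0).fit_transform(X)

# class-wise means: the rings overlap completely on the linear component, while
# (typically) one of the leading kernel components pulls them apart
print(f"linear PC1 : ring0 mean {lin[y == 0, 0].mean():+.2f}, ring1 mean {lin[y == 1, 0].mean():+.2f}")
for k in range(2):
    print(f"kernel PC{k + 1}: ring0 mean {non[y == 0, k].mean():+.2f}, ring1 mean {non[y == 1, k].mean():+.2f}")
```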

Nonlinear Feature Extraction using Class-augmented Kernel PCA

  • 박명수;오상록
    • Journal of the Institute of Electronics Engineers of Korea SC / Vol. 48, No. 5 / pp.7-12 / 2011
  • In this paper, we propose class-augmented kernel principal component analysis, a method for extracting features suited to classifying data patterns. Among the subspace techniques widely used for feature extraction, the recently proposed class-augmented principal component analysis can extract accurate features for pattern classification without the computational problems that arise in methods such as linear discriminant analysis. Its features, however, are restricted to linear combinations of the inputs, so appropriate features can be difficult to extract for some data. To resolve this, we apply the kernel trick to class-augmented principal component analysis, extending it into a new subspace technique that can extract nonlinear features, and we evaluate its performance through experiments.
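A loose sketch of the training-time idea, with invented details: append a scaled one-hot encoding of the class label to each training vector and apply kernel PCA to the augmented data. The paper's exact augmentation scheme and its projection of unlabeled test points are not reproduced here.

```python
# Class-augmentation plus the kernel trick, illustrated on the iris data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)

# append a scaled one-hot encoding of the class label to each training vector
alpha = 2.0                                   # weight given to the class information (invented)
Y = np.eye(3)[y]                              # one-hot labels for the 3 iris classes
X_aug = np.hstack([X, alpha * Y])             # class-augmented training data

# kernel PCA on the augmented data yields nonlinear, class-informed features
Z = KernelPCA(n_components=2, kernel="rbf", gamma=0.1).fit_transform(X_aug)
print(Z.shape)
```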

A transductive least squares support vector machine with the difference convex algorithm

  • Shim, Jooyong;Seok, Kyungha
    • Journal of the Korean Data and Information Science Society / Vol. 25, No. 2 / pp.455-464 / 2014
  • Unlabeled examples are easier and less expensive to obtain than labeled examples. Semisupervised approaches are used to utilize such examples in an effort to boost the predictive performance. This paper proposes a novel semisupervised classification method named transductive least squares support vector machine (TLS-SVM), which is based on the least squares support vector machine. The proposed method utilizes the difference convex algorithm to derive nonconvex minimization solutions for the TLS-SVM. A generalized cross validation method is also developed to choose the hyperparameters that affect the performance of the TLS-SVM. The experimental results confirm the successful performance of the proposed TLS-SVM.
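For reference, the snippet below implements only the standard supervised least squares SVM building block that the TLS-SVM extends (its dual reduces to a single linear system); the transductive difference convex step for unlabeled points is not shown, and the toy data and kernel parameters are invented.

```python
# Supervised LS-SVM: solve one bordered linear system in the RBF kernel.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def lssvm_fit(X, y, gamma_reg=1.0, rbf_gamma=0.5):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b, alpha]^T = [0, y]^T."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma=rbf_gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma_reg
    rhs = np.concatenate([[0.0], y.astype(float)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                          # bias b, dual coefficients alpha

def lssvm_decision(X_train, X_new, b, alpha, rbf_gamma=0.5):
    return rbf_kernel(X_new, X_train, gamma=rbf_gamma) @ alpha + b

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)      # labels in {-1, +1}

b, alpha = lssvm_fit(X, y)
pred = np.sign(lssvm_decision(X, X, b, alpha))
print("training accuracy:", (pred == y).mean())
```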

Homogeneous and Non-homogeneous Polynomial Based Eigenspaces to Extract the Features on Facial Images

  • Muntasa, Arif
    • Journal of Information Processing Systems / Vol. 12, No. 4 / pp.591-611 / 2016
  • High-dimensional spaces are the biggest problem when classification is carried out, because computation takes longer and the costs involved are therefore high. In this research, facial spaces generated from homogeneous and non-homogeneous polynomials were proposed to extract facial image features. The homogeneous and non-homogeneous polynomial-based eigenspaces provide an alternative appearance-based feature extraction approach for handling non-linear features. The kernel trick has been used to carry out the matrix computation for the homogeneous and non-homogeneous polynomials. The weights and projections of the new feature space of the proposed method were evaluated using three face image databases, i.e., the YALE, the ORL, and the UoB. The experiments produced the highest recognition rates of 94.44%, 97.5%, and 94% for the YALE, ORL, and UoB, respectively. The results show that the proposed method produced higher recognition rates than other methods, such as Eigenface, Fisherface, Laplacianfaces, and O-Laplacianfaces.
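The sketch below illustrates the homogeneous versus non-homogeneous polynomial eigenspace idea with kernel PCA: coef0 = 0 gives the homogeneous kernel (x·y)^d, coef0 > 0 the non-homogeneous kernel (x·y + c)^d. Scikit-learn's digits dataset stands in for the YALE/ORL/UoB face databases, and a nearest-neighbour classifier replaces the paper's own projection and matching scheme.

```python
# Homogeneous vs. non-homogeneous polynomial kernel eigenspaces for recognition.
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0                                           # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, coef0 in [("homogeneous", 0.0), ("non-homogeneous", 1.0)]:
    clf = make_pipeline(
        KernelPCA(n_components=40, kernel="poly", degree=2, coef0=coef0),  # polynomial eigenspace
        KNeighborsClassifier(n_neighbors=1),                               # simple matcher
    )
    clf.fit(X_tr, y_tr)
    print(f"{name} polynomial eigenspace: accuracy {clf.score(X_te, y_te):.3f}")
```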

The use of support vector machines in semi-supervised classification

  • Bae, Hyunjoo;Kim, Hyungwoo;Shin, Seung Jun
    • Communications for Statistical Applications and Methods / Vol. 29, No. 2 / pp.193-202 / 2022
  • Semi-supervised learning has gained significant attention in recent applications. In this article, we provide a selective overview of popular semi-supervised methods and then propose a simple but effective algorithm for semi-supervised classification using the support vector machine (SVM), one of the most popular binary classifiers in the machine learning community. The idea is simple: first, we apply dimension reduction to the unlabeled observations and cluster them to assign labels on the reduced space. An SVM is then fit to the combined set of labeled and unlabeled observations to construct a classification rule. The use of the SVM allows the method to be extended to a nonlinear counterpart via the kernel trick. Our numerical experiments under various scenarios demonstrate that the proposed method is promising in semi-supervised classification.
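A rough sketch of the three-step recipe outlined above, with placeholder choices that are not the authors': PCA for the dimension reduction, k-means for the clustering, and an RBF SVM on synthetic two-moons data.

```python
# Semi-supervised classification sketch: reduce, cluster to pseudo-label, fit kernel SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X, y = make_moons(n_samples=400, noise=0.15, random_state=0)
labeled = rng.choice(len(X), size=20, replace=False)          # only a few labeled points
unlabeled = np.setdiff1d(np.arange(len(X)), labeled)

# step 1: dimension reduction on the unlabeled observations (PCA as a stand-in)
Z = PCA(n_components=2).fit_transform(X[unlabeled])

# step 2: cluster the reduced data and pseudo-label each cluster with the label
# of the labeled point closest to the cluster's centroid (in the original space)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Z)
pseudo_y = np.empty(len(unlabeled), dtype=int)
for k in range(2):
    centroid = X[unlabeled][km.labels_ == k].mean(axis=0)
    nearest = np.argmin(np.linalg.norm(X[labeled] - centroid, axis=1))
    pseudo_y[km.labels_ == k] = y[labeled][nearest]

# step 3: kernel SVM on the labeled + pseudo-labeled pool (the kernel trick
# supplies the nonlinear decision boundary)
X_all = np.vstack([X[labeled], X[unlabeled]])
y_all = np.concatenate([y[labeled], pseudo_y])
clf = SVC(kernel="rbf", gamma=2.0, C=1.0).fit(X_all, y_all)
print("accuracy against the true labels:", clf.score(X, y))
```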