• Title/Abstract/Keywords: kernel method

996 results found (processing time: 0.024 s)

Function Approximation Based on a Network with Kernel Functions of Bounds and Locality: An Approach of Non-Parametric Estimation

  • Kil, Rhee-M.
    • ETRI Journal
    • /
    • Vol. 15, No. 2
    • /
    • pp.35-51
    • /
    • 1993
  • This paper presents function approximation based on nonparametric estimation. As the estimation model, a three-layered network composed of input, hidden, and output layers is considered. The input and output layers have linear activation units, while the hidden layer has nonlinear activation units, or kernel functions, characterized by bounds and locality. Using this type of network, a many-to-one function is synthesized over the domain of the input space by a number of kernel functions. In this network, both the necessary number of kernel functions and the parameters associated with them must be estimated. For this purpose, a new parameter estimation method is considered in which a linear learning rule is applied between the hidden and output layers, while a nonlinear (piecewise-linear) learning rule is applied between the input and hidden layers. The linear rule updates the output weights between the hidden and output layers in the sense of Linear Minimization of Mean Square Error (LMMSE) in the space of kernel functions, while the nonlinear rule updates the parameters of the kernel functions based on the gradient of the actual network output with respect to those parameters (especially their shape). This approach to parameter adaptation provides near-optimal values of the kernel-function parameters in the sense of minimizing mean square error. As a result, the suggested nonparametric estimation provides an efficient way of function approximation in terms of both the number of kernel functions and learning speed.

  • PDF
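The two-rule training scheme described in the abstract can be sketched roughly as follows, assuming Gaussian kernels for the bounded, local hidden units and a synthetic target function; all names, settings, and the fixed kernel count are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a smooth many-to-one function on [0, 1]
X = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * X)

# Hidden layer: Gaussian kernels (bounded and local), centers c, widths s
c = np.linspace(0, 1, 10)
s = np.full(10, 0.15)

def hidden(X, c, s):
    # One column of kernel activations per hidden unit
    return np.exp(-((X[:, None] - c[None, :]) ** 2) / (2 * s[None, :] ** 2))

for _ in range(200):
    H = hidden(X, c, s)
    # Linear rule: output weights by least squares (LMMSE sense)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    err = H @ w - y
    # Nonlinear rule: gradient of the network output w.r.t. kernel widths
    dH_ds = H * ((X[:, None] - c[None, :]) ** 2) / (s[None, :] ** 3)
    grad_s = (err[:, None] * dH_ds * w[None, :]).mean(axis=0)
    s = np.maximum(s - 0.05 * grad_s, 1e-3)  # floor keeps widths positive

H = hidden(X, c, s)
w, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = np.mean((H @ w - y) ** 2)
print(f"training MSE: {mse:.4f}")
```

The paper additionally estimates the necessary number of kernel functions; here that count is simply fixed for brevity.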

영상 분할을 위한 퍼지 커널 K-nearest neighbor 알고리즘 (Fuzzy Kernel K-Nearest Neighbor Algorithm for Image Segmentation)

  • 최병인;이정훈
    • 한국지능시스템학회논문지
    • /
    • Vol. 15, No. 7
    • /
    • pp.828-833
    • /
    • 2005
  • Kernel methods can improve the performance of conventional linear classification algorithms on data with complex distributions by mapping the data into a high-dimensional feature space [4]. In this paper, we propose a fuzzy kernel K-nearest neighbor (fuzzy kernel K-NN) algorithm that applies a kernel-induced distance measure in the feature space to the fuzzy K-nearest neighbor (fuzzy K-NN) algorithm, in place of the conventional Euclidean distance. The proposed algorithm can improve on the performance of the existing algorithm through an appropriate choice of kernel function for the data. To demonstrate its validity, we present experimental results on several data sets and segmentation results on real images.
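A minimal sketch of the idea, assuming an RBF kernel and Keller-style fuzzy K-NN memberships; the kernel-induced distance replaces the Euclidean one via ‖φ(x)−φ(y)‖² = K(x,x) − 2K(x,y) + K(y,y). The toy data and the parameters k, m, gamma are illustrative:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_dist2(x, y, gamma=1.0):
    # Squared distance in the feature space induced by the kernel
    return rbf(x, x, gamma) - 2 * rbf(x, y, gamma) + rbf(y, y, gamma)

def fuzzy_kernel_knn(X, labels, x, k=3, m=2.0, gamma=1.0):
    d2 = np.array([kernel_dist2(x, xi, gamma) for xi in X])
    nn = np.argsort(d2)[:k]
    # Fuzzy weights: closer neighbors carry more membership
    w = 1.0 / (d2[nn] ** (1.0 / (m - 1)) + 1e-12)
    classes = np.unique(labels)
    mem = {c: w[labels[nn] == c].sum() / w.sum() for c in classes}
    return max(mem, key=mem.get), mem

# Two toy clusters
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
labels = np.array([0, 0, 0, 1, 1, 1])
pred, mem = fuzzy_kernel_knn(X, labels, np.array([0.95, 1.0]))
print(pred)  # → 1
```

Swapping `rbf` for another positive-definite kernel changes only the distance geometry, which is exactly the degree of freedom the abstract points to.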

Sparse Representation Learning of Kernel Space Using the Kernel Relaxation Procedure (커널 이완 절차에 의한 커널 공간의 저밀도 표현 학습)

  • 류재홍;정종철
    • 한국지능시스템학회논문지
    • /
    • Vol. 11, No. 9
    • /
    • pp.817-821
    • /
    • 2001
  • This paper proposes a new learning methodology for kernel methods that yields a sparse representation of the kernel space formed from the training patterns of a classification problem. Among the existing learning methods for linear discriminant functions, the relaxation procedure can obtain the maximum-margin separating hyperplane of a linearly separable pattern classification problem, just as the SVM (Support Vector Machine) classifier does. The conventional relaxation procedure satisfies only a necessary condition for support vectors. This paper presents a sufficient condition for identifying support vectors during learning. After extending the existing SVM for sequential learning and defining a kernel discriminant function, a systematic learning method is presented. Experimental results show that the new method has classification performance equal or superior to the existing approach.

  • PDF
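A rough sketch of a margin-driven relaxation rule carried out in kernel space (not the authors' exact procedure or their SV conditions): only patterns that violate the margin acquire nonzero coefficients, so the resulting discriminant is a sparse expansion over "support" patterns. The RBF kernel, margin, and XOR data are illustrative:

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_relaxation(X, y, margin=1.0, epochs=50, gamma=0.5):
    n = len(X)
    K = rbf(X, X, gamma)
    alpha = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            # Kernel discriminant at pattern i
            f = (alpha * y) @ K[:, i]
            if y[i] * f <= margin:  # margin violation: relaxation step
                alpha[i] += (margin - y[i] * f) / K[i, i]
    return alpha

# XOR-like data: not linearly separable in the input space
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], float)
y = np.array([1, 1, -1, -1])
alpha = kernel_relaxation(X, y, gamma=2.0)
pred = np.sign((alpha * y) @ rbf(X, X, 2.0))
print(pred)  # all four training points classified correctly
```

The step size (margin − yf)/K(i,i) moves the violated constraint exactly onto the margin, mirroring how the classical relaxation rule projects onto a violated half-space.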

커널 이완절차에 의한 커널 공간의 저밀도 표현 학습 (Sparse Representation Learning of Kernel Space Using the Kernel Relaxation Procedure)

  • 류재홍;정종철
    • 한국지능시스템학회:학술대회논문집
    • /
    • 한국퍼지및지능시스템학회 2001년도 추계학술대회 학술발표 논문집
    • /
    • pp.60-64
    • /
    • 2001
  • In this paper, a new learning methodology for kernel methods is suggested that results in a sparse representation of the kernel space from the training patterns of classification problems. Among the traditional algorithms for linear discriminant functions (perceptron, relaxation, LMS (least mean squares), pseudoinverse), this paper shows that the relaxation procedure can obtain the maximum-margin separating hyperplane of a linearly separable pattern classification problem, as the SVM (Support Vector Machine) classifier does. The original relaxation method gives only a necessary condition for SV patterns. We suggest a sufficient condition to identify the SV patterns during the learning epochs. Experimental results show that the new method has higher or equivalent performance compared to the conventional approach.

  • PDF

광회전 커널 오퍼레이션을 이용하는 방향성 정보 처리 (Directional Information Processing Using Optical Rotating Kernel Operations)

  • Yim Kul Lee
    • 전자공학회논문지B
    • /
    • Vol. 30B, No. 2
    • /
    • pp.78-86
    • /
    • 1993
  • A nonlinear method for directional information processing is introduced, along with an application to directional feature enhancement. In this method, an input is convolved with a 2-D long, narrow kernel, which is rotated through 360 degrees, either continuously or discretely in a large number of steps. An output is given by some function of the convolution results. Linear features that are aligned with the kernel are enhanced; otherwise, they are removed or suppressed. The method presented is insensitive to variation in the dimensions of the linear features to be processed and preserves a good enhancement capability even for an image characterized by low contrast and spatially varying brightness in a noisy background. Effects of the kernel length and width on the performance are discussed. A possible hybrid optical-electronic implementation is also discussed.

  • PDF
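A digital sketch of the rotating-kernel idea (the paper discusses an optical implementation; this is only a numerical stand-in): convolve with a long, narrow line kernel at several orientations and take the pointwise maximum, so linear features aligned with some orientation are enhanced. Kernel length, angle count, and the test image are illustrative:

```python
import numpy as np

def line_kernel(length, angle):
    # A 1-pixel-wide line of the given length, rotated to `angle` (radians)
    k = np.zeros((length, length))
    c = length // 2
    for t in np.linspace(-c, c, 4 * length):
        r = int(round(c + t * np.sin(angle)))
        col = int(round(c + t * np.cos(angle)))
        if 0 <= r < length and 0 <= col < length:
            k[r, col] = 1.0
    return k / k.sum()

def convolve2d(img, k):
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * k).sum()
    return out

def rotating_kernel_max(img, length=9, n_angles=12):
    # Max over orientations: aligned linear features win, the rest is suppressed
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)
    return np.max([convolve2d(img, line_kernel(length, a)) for a in angles], axis=0)

rng = np.random.default_rng(1)
img = 0.1 * rng.random((32, 32))   # noisy background
img[16, 4:28] += 1.0               # one horizontal line
out = rotating_kernel_max(img)
print(out[16, 16] > out[8, 16])    # → True: line pixels respond more strongly
```

Other output functions over the orientation stack (e.g. max minus min) trade off enhancement against noise suppression, which is the kind of design choice the paper analyzes.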

AN ELIGIBLE PRIMAL-DUAL INTERIOR-POINT METHOD FOR LINEAR OPTIMIZATION

  • Cho, Gyeong-Mi;Lee, Yong-Hoon
    • East Asian mathematical journal
    • /
    • Vol. 29, No. 3
    • /
    • pp.279-292
    • /
    • 2013
  • It is well known that each kernel function defines a primal-dual interior-point method (IPM). Most polynomial-time interior-point algorithms for linear optimization (LO) are based on the logarithmic kernel function [2, 11]. In this paper we define a new eligible kernel function and propose a new search direction and proximity function based on this function for LO problems. We show that the new algorithm has $\mathcal{O}((\log p)\sqrt{n}\,\log n\,\log\frac{n}{\epsilon})$ and $\mathcal{O}((q\log p)^{\frac{3}{2}}\sqrt{n}\,\log\frac{n}{\epsilon})$ iteration bounds for large- and small-update methods, respectively. These are currently the best known complexity results.
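For context, the classical logarithmic kernel function that most primal-dual IPMs build on (the paper's new eligible kernel is defined in the paper itself and is not reproduced here) and the proximity measure it induces are:

```latex
% Logarithmic kernel function and induced proximity measure
% (standard IPM background, with the usual scaled variable v)
\[
  \psi(t) \;=\; \frac{t^{2}-1}{2} \;-\; \log t, \qquad t > 0,
\]
\[
  \Psi(v) \;=\; \sum_{i=1}^{n} \psi(v_{i}), \qquad
  v_{i} \;=\; \sqrt{\frac{x_{i}\, s_{i}}{\mu}},
\]
```

Each choice of $\psi$ changes the barrier term of $\Psi$ and hence the search direction, which is why a new eligible kernel function yields a new IPM with its own iteration bound.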

Estimating Variance Function with Kernel Machine

  • Kim, Jong-Tae;Hwang, Chang-Ha;Park, Hye-Jung;Shim, Joo-Yong
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 16, No. 2
    • /
    • pp.383-388
    • /
    • 2009
  • In this paper we propose a variance function estimation method based on the kernel trick for replicated data, or data consisting of sample variances. The Newton-Raphson method is used to obtain the associated parameter vector. Furthermore, the generalized approximate cross-validation function is introduced to select the hyper-parameters that affect the performance of the proposed variance function estimation method. Experimental results are then presented which illustrate the performance of the proposed procedure.
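A simplified sketch of the setting, assuming replicated observations and a kernel ridge fit of the log sample variances as a stand-in for the paper's Newton-Raphson estimator; the data, kernel, and ridge hyper-parameter are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Replicated data: r replicates at each design point give a sample variance
n, r = 40, 20
x = np.linspace(0, 1, n)
true_sd = 0.2 + 0.5 * x                       # variance grows with x
reps = rng.normal(0, true_sd[:, None], (n, r))
s2 = reps.var(axis=1, ddof=1)                 # noisy variance observations

def rbf(a, b, gamma=10.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Kernel ridge regression on log-variances keeps the fit positive
K = rbf(x, x)
lam = 1e-2                                     # ridge hyper-parameter
alpha = np.linalg.solve(K + lam * np.eye(n), np.log(s2))
var_hat = np.exp(K @ alpha)                    # fitted variance function

rel_err = np.mean(np.abs(var_hat - true_sd ** 2) / true_sd ** 2)
print(f"mean relative error: {rel_err:.2f}")
```

In the paper, `lam` and the kernel width would be chosen by the generalized approximate cross-validation function rather than fixed by hand as here.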

Estimating multiplicative competitive interaction model using kernel machine technique

  • Shim, Joo-Yong;Kim, Mal-Suk;Park, Hye-Jung
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 23, No. 4
    • /
    • pp.825-832
    • /
    • 2012
  • We propose a novel way of forecasting the market shares of several brands simultaneously in a multiplicative competitive interaction model, using a kernel regression technique that incorporates the kernel machine technique applied in support vector machines and other machine learning methods. Traditionally, the market share attraction model is estimated via a maximum likelihood procedure under the assumption that the data are drawn from a normal distribution. The proposed method is shown to be a good candidate for forecasting with the market share attraction model when a normal distribution cannot be assumed. We apply the proposed method to forecast the market shares of four Korean car brands simultaneously, and it shows better performance than the maximum likelihood estimation procedure.
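A rough sketch of the MCI structure with a kernel machine in place of the likelihood-based fit: each brand's attraction is exponentiated and normalized into a share, and the log-ratio transform removes the normalizing constant so a kernel ridge fit per brand suffices. The data are synthetic (not the paper's Korean car-brand dataset), and kernel ridge is only a stand-in for the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic shares: attraction of brand b is exp(x * beta_b) plus noise
n_obs, n_brands = 60, 4
x = rng.uniform(0, 1, n_obs)                       # marketing covariate
beta = np.array([1.0, -0.5, 0.3, 0.0])
logA = np.outer(x, beta) + 0.05 * rng.normal(size=(n_obs, n_brands))
shares = np.exp(logA) / np.exp(logA).sum(1, keepdims=True)

def rbf(a, b, gamma=5.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

# Centering the log shares removes the (unknown) normalizing constant
y = np.log(shares) - np.log(shares).mean(1, keepdims=True)
K = rbf(x, x)
alpha = np.linalg.solve(K + 1e-3 * np.eye(n_obs), y)  # one column per brand

def predict_shares(x_new):
    f = rbf(x_new, x) @ alpha
    return np.exp(f) / np.exp(f).sum(1, keepdims=True)  # shares sum to 1

pred = predict_shares(x)
err = np.abs(pred - shares).max()
print(f"max in-sample share error: {err:.3f}")
```

The exponentiate-and-normalize step guarantees the forecasts are valid shares (nonnegative, summing to one across brands) regardless of the regression fit.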

A Note on Nonparametric Density Estimation for the Deconvolution Problem

  • Lee, Sung-Ho
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 15, No. 6
    • /
    • pp.939-946
    • /
    • 2008
  • In this paper the support vector method is presented for probability density function estimation when the sample observations are contaminated with random noise. The performance of the procedure is compared to kernel density estimates in a simulation study.
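A small illustration of the deconvolution setting the paper addresses (the baseline, not the support vector estimator itself): when we observe Y = X + e and apply a plain kernel density estimate to Y, the noise over-smooths the density of X. The distributions and bandwidth rule are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# X ~ N(0,1) is the density of interest; e ~ N(0, 0.5^2) is measurement noise
n = 2000
x = rng.normal(0, 1, n)
y = x + rng.normal(0, 0.5, n)

def kde(samples, grid, h):
    # Gaussian kernel density estimate on `grid` with bandwidth h
    u = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

grid = np.linspace(-4, 4, 81)
h = 1.06 * y.std() * n ** (-0.2)              # Silverman's rule of thumb
f_y = kde(y, grid, h)                          # naive estimate from noisy data
f_x_true = np.exp(-0.5 * grid ** 2) / np.sqrt(2 * np.pi)

print(f_y.max() < f_x_true.max())  # → True: the naive estimate is over-smoothed
```

Deconvolution methods, whether deconvoluting kernels or the support vector approach of the paper, aim to undo exactly this flattening.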

Numerical Solution For Fredholm Integral Equation With Hilbert Kernel

  • Abdou, Mohamed Abdella Ahmed;Hendi, Fathea Ahmed
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • Vol. 9, No. 1
    • /
    • pp.111-123
    • /
    • 2005
  • Here, the Fredholm integral equation with Hilbert kernel is solved numerically using two different methods. The error in each case is also estimated.

  • PDF
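To illustrate the general shape of such a numerical solution, here is a Nyström discretization of a Fredholm equation of the second kind with a smooth stand-in kernel on a manufactured problem; the Hilbert kernel cot((t−x)/2) is singular and requires the dedicated quadrature developed in the paper, which this sketch does not attempt:

```python
import numpy as np

# Nyström method for  u(x) - ∫_a^b K(x,t) u(t) dt = f(x)
def nystrom(K, f, a=0.0, b=1.0, n=64):
    # Composite trapezoid nodes and weights
    t = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    # Collocate at the quadrature nodes: (I - K W) u = f
    A = np.eye(n) - K(t[:, None], t[None, :]) * w[None, :]
    return t, np.linalg.solve(A, f(t))

# Manufactured problem with known solution u(x) = exp(x):
# with K(x,t) = x*t we have ∫_0^1 t e^t dt = 1, so f(x) = e^x - x
K = lambda x, t: x * t
f = lambda x: np.exp(x) - x

t, u = nystrom(K, f)
err = np.max(np.abs(u - np.exp(t)))
print(f"max error: {err:.2e}")
```

The trapezoid rule gives O(h²) accuracy here; for the singular Hilbert kernel the quadrature weights must instead absorb the singularity, which is where the paper's two methods differ.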