• Title/Abstract/Keyword: Kernel parameter

Search results: 119 (processing time: 0.029 s)

Kernel Analysis of Weighted Linear Interpolation Based on Even-Odd Decomposition

  • 오은주;유훈
    • 한국정보통신학회논문지 / Vol. 22, No. 11 / pp.1455-1461 / 2018
  • This paper presents a kernel analysis of weighted linear interpolation (WLI) based on even-odd decomposition (EOD). EOD has the advantage of lower complexity and improved image quality compared with the well-known CCI interpolation method. However, no kernel had previously been derived for EOD and no such analysis had been carried out, so this paper provides a kernel expression for EOD. Kernel expressions are derived for the even and odd vectors produced by EOD, and the final EOD kernel is obtained as their sum. The derived EOD kernel is defined by a parameter ω; the kernel defined by ω is the WLI kernel, and ω governs the performance of the interpolation process. The change in kernel shape according to the parameter is also presented, and experiments and a discussion are provided to aid understanding of the parameter and the corresponding change in kernel shape.
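The paper's ω-parameterized WLI/EOD kernel is not reproduced in this listing, so the sketch below only illustrates the two ingredients the abstract names: splitting a four-sample neighborhood into its even- and odd-symmetric parts, and interpolating with a kernel whose shape is governed by a single free parameter (here the standard Keys cubic-convolution (CCI) kernel with parameter a, the baseline the abstract mentions). The function names and sample values are illustrative assumptions.

```python
import numpy as np

def cci_kernel(s, a=-0.5):
    """Standard Keys cubic-convolution interpolation (CCI) kernel with free parameter a."""
    s = np.abs(s)
    out = np.zeros_like(s, dtype=float)
    near = s <= 1
    far = (s > 1) & (s < 2)
    out[near] = (a + 2) * s[near] ** 3 - (a + 3) * s[near] ** 2 + 1
    out[far] = a * s[far] ** 3 - 5 * a * s[far] ** 2 + 8 * a * s[far] - 4 * a
    return out

def even_odd_decompose(x):
    """Split a neighborhood vector into its even- and odd-symmetric parts (x = e + o)."""
    xr = x[::-1]
    return (x + xr) / 2.0, (x - xr) / 2.0

# Four neighboring samples around the interpolation point (midway between x[1] and x[2]).
x = np.array([10.0, 12.0, 20.0, 21.0])
e, o = even_odd_decompose(x)
print("decomposition check, max |x - (e + o)|:", np.max(np.abs(x - (e + o))))

offsets = np.array([-1.5, -0.5, 0.5, 1.5])     # sample distances from the midpoint
for a in (-0.25, -0.5, -1.0):                  # the kernel shape changes with the parameter
    w = cci_kernel(offsets, a)
    print(f"a = {a:5.2f}  midpoint estimate = {np.dot(w, x):.3f}")
```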

Kernel Poisson regression for mixed input variables

  • Shim, Jooyong
    • Journal of the Korean Data and Information Science Society / Vol. 23, No. 6 / pp.1231-1239 / 2012
  • An estimating procedure is introduced for kernel Poisson regression when the input variables consist of numerical and categorical variables, which is based on the penalized negative log-likelihood and the component-wise product of two different types of kernel functions. The proposed procedure provides the estimates of the mean function of the response variables, where the canonical parameter is linearly and/or nonlinearly related to the input variables. Experimental results are then presented which indicate the performance of the proposed kernel Poisson regression.
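A minimal sketch of the procedure the abstract describes, assuming an RBF kernel on the numerical inputs, a simple category-overlap kernel on the categorical inputs, and an arbitrarily chosen penalty weight lam; the synthetic data and the dual parameterization f = Kα are one common way to set up the penalized negative log-likelihood, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian kernel on the numerical columns."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def overlap_kernel(A, B):
    """Kernel on the categorical columns: fraction of matching categories."""
    return (A[:, None, :] == B[None, :, :]).mean(-1)

rng = np.random.default_rng(0)
n = 150
X_num = rng.normal(size=(n, 2))                        # numerical inputs
X_cat = rng.integers(0, 3, size=(n, 1))                # one categorical input with 3 levels
eta = 0.6 * X_num[:, 0] + 0.4 * (X_cat[:, 0] == 1)     # toy canonical parameter
y = rng.poisson(np.exp(eta)).astype(float)             # Poisson counts

# Component-wise product of the two kernel types, as described in the abstract.
K = rbf_kernel(X_num, X_num) * overlap_kernel(X_cat, X_cat)
lam = 1.0                                              # regularization weight (my choice)

def penalized_nll(alpha):
    """Penalized negative Poisson log-likelihood in the dual coefficients alpha, with f = K @ alpha."""
    f = np.clip(K @ alpha, -20, 20)                    # clip only for numerical safety
    mu = np.exp(f)
    value = np.sum(mu - y * f) + 0.5 * lam * alpha @ K @ alpha
    grad = K @ (mu - y) + lam * (K @ alpha)
    return value, grad

res = minimize(penalized_nll, np.zeros(n), jac=True, method="L-BFGS-B")
mu_hat = np.exp(K @ res.x)                             # fitted mean function
print("mean absolute error of fitted means:", round(float(np.abs(mu_hat - np.exp(eta)).mean()), 3))
```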

Kernel Machine for Poisson Regression

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society / Vol. 18, No. 3 / pp.767-772 / 2007
  • A kernel machine is proposed as an estimating procedure for linear and nonlinear Poisson regression, based on the penalized negative log-likelihood. The proposed kernel machine provides an estimate of the mean function of the response variable, where the canonical parameter is related to the input vector in a nonlinear form. A generalized cross validation (GCV) function of MSE type is introduced to determine the hyperparameters that affect the performance of the machine. Experimental results are then presented which indicate the performance of the proposed machine.

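As a hedged illustration rather than the paper's method, the snippet below fits an approximate kernel Poisson regression with scikit-learn (Nystroem RBF features feeding a penalized Poisson GLM) and selects the kernel width and penalty by cross-validation, which stands in for the MSE-type GCV criterion named in the abstract; the data and parameter grids are toy choices.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))                                  # toy inputs
y = rng.poisson(np.exp(0.5 * X[:, 0] - 0.3 * X[:, 1] ** 2))    # toy counts, nonlinear in X

# Kernel Poisson regression approximated by an RBF feature map plus a penalized Poisson GLM.
pipe = make_pipeline(Nystroem(kernel="rbf", n_components=100, random_state=0),
                     PoissonRegressor(max_iter=1000))
param_grid = {
    "nystroem__gamma": [0.1, 0.5, 1.0],
    "poissonregressor__alpha": [1e-3, 1e-2, 1e-1, 1.0],
}
search = GridSearchCV(pipe, param_grid, cv=5)                  # CV in place of the paper's GCV
search.fit(X, y)
print("selected hyperparameters:", search.best_params_)
print("cross-validated D^2 score:", round(search.best_score_, 3))
```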

Function Approximation Based on a Network with Kernel Functions of Bounds and Locality : an Approach of Non-Parametric Estimation

  • Kil, Rhee-M.
    • ETRI Journal / Vol. 15, No. 2 / pp.35-51 / 1993
  • This paper presents function approximation based on nonparametric estimation. As the estimation model, a three-layered network composed of input, hidden, and output layers is considered. The input and output layers have linear activation units, while the hidden layer has nonlinear activation units, or kernel functions, characterized by boundedness and locality. Using this type of network, a many-to-one function is synthesized over the domain of the input space by a number of kernel functions. In this network, both the necessary number of kernel functions and the parameters associated with them must be estimated. For this purpose, a new method of parameter estimation is considered in which a linear learning rule is applied between the hidden and output layers while a nonlinear (piecewise-linear) learning rule is applied between the input and hidden layers. The linear learning rule updates the output weights between the hidden and output layers in the linear minimization of mean square error (LMMSE) sense in the space of kernel functions, while the nonlinear learning rule updates the parameters of the kernel functions based on the gradient of the actual network output with respect to the parameters (especially the shape) of the kernel functions. This approach to parameter adaptation provides near-optimal values of the kernel-function parameters in the sense of minimizing mean square error. As a result, the suggested nonparametric estimation provides an efficient way of performing function approximation in terms of both the number of kernel functions and learning speed.

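A minimal sketch of the two-level scheme the abstract describes: Gaussian kernel units in the hidden layer, output weights solved linearly in the least-squares (LMMSE) sense at each step, and kernel centers and widths adjusted by gradient descent on the squared error. The Gaussian form, the number of units, and the learning rates are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * x[:, 0]) + 0.05 * rng.normal(size=200)    # noisy samples of the target function

M = 10                                                   # number of kernel (hidden) units
centers = np.linspace(-3, 3, M)                          # initial centers
widths = np.full(M, 0.6)                                 # initial widths

def hidden(x, centers, widths):
    """Bounded, local kernel activations (Gaussian units in the hidden layer)."""
    return np.exp(-((x - centers) ** 2) / (2 * widths ** 2))

for _ in range(200):
    H = hidden(x, centers, widths)                       # n x M design matrix
    w, *_ = np.linalg.lstsq(H, y, rcond=None)            # linear (LMMSE) output weights
    err = H @ w - y                                      # residual of the network output
    # Gradient of 0.5 * sum(err^2) with respect to the kernel centers and widths.
    dH = err[:, None] * w[None, :] * H
    g_c = (dH * (x - centers) / widths ** 2).sum(axis=0)
    g_w = (dH * (x - centers) ** 2 / widths ** 3).sum(axis=0)
    centers -= 1e-3 * g_c
    widths = np.clip(widths - 1e-3 * g_w, 0.1, None)     # keep widths positive

H = hidden(x, centers, widths)
w, *_ = np.linalg.lstsq(H, y, rcond=None)
print("final RMSE:", round(float(np.sqrt(np.mean((H @ w - y) ** 2))), 4))
```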

Selection of Kernels and its Parameters in Applying SVM to ASV

  • 판윈허;우영운;김성훈
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2015년도 추계학술대회 / pp.1045-1046 / 2015
  • When using a support vector machine (SVM) for online signature verification, a kernel function must be chosen for the non-linear SVM, and the constant parameters of the kernel function must be set to appropriate values to reduce the verification error rate. A non-linear SVM, built on a strong mathematical basis, shows better classification performance with higher discrimination power. However, choosing the kernel function and adjusting the constant parameter values depend on heuristics of the problem domain. This paper addresses the problem of selecting a suitable kernel function and constant parameter values for signature verification, and reports the kernel function and parameter values that yield the minimum error rate. Based on these results, we expect an average error rate of less than 1%.

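A hedged sketch of the selection problem the abstract describes, using scikit-learn's SVC and GridSearchCV to search over kernel functions and their constant parameters; the feature matrix is a synthetic placeholder, since the paper's actual signature features and error-rate measure are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix: one row of hypothetical dynamic signature features per sample;
# label 1 = genuine, 0 = forgery. Real ASV features would go here.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 12)),
               rng.normal(0.8, 1.2, size=(100, 12))])
y = np.array([1] * 100 + [0] * 100)

pipe = make_pipeline(StandardScaler(), SVC())
param_grid = [
    {"svc__kernel": ["rbf"],  "svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1]},
    {"svc__kernel": ["poly"], "svc__C": [0.1, 1, 10], "svc__degree": [2, 3], "svc__coef0": [0, 1]},
]
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print("best kernel and parameters:", search.best_params_)
print("cross-validated accuracy  :", round(search.best_score_, 3))
```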

Reducing Bias of the Minimum Hellinger Distance Estimator of a Location Parameter

  • Pak, Ro-Jin
    • Journal of the Korean Data and Information Science Society / Vol. 17, No. 1 / pp.213-220 / 2006
  • Since Beran (1977) developed minimum Hellinger distance estimation, the method has been a popular topic in the field of robust estimation. In defining the distance, a kernel density estimator is widely used to estimate the density. In this article, however, we show that combining a kernel density estimator with an empirical density can result in a smaller bias of the minimum Hellinger distance estimator of a location parameter than using a kernel density estimator alone.

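For context, here is a minimal sketch of plain minimum Hellinger distance estimation of a location parameter using only a kernel density estimator; the paper's bias-reducing combination of a kernel density estimator with an empirical density is not reproduced. The grid limits, default bandwidth, and contaminated sample are arbitrary choices.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
# 95 points from N(1, 1) plus five gross outliers.
data = np.concatenate([rng.normal(1.0, 1.0, 95), np.array([8.0, 9.0, 10.0, 11.0, 12.0])])

kde = gaussian_kde(data)                     # kernel density estimate of the data
grid = np.linspace(-5, 15, 2000)
f_hat = kde(grid)
dx = grid[1] - grid[0]

def hellinger_sq(theta):
    """Squared Hellinger distance between the KDE and a N(theta, 1) model density."""
    g = norm.pdf(grid, loc=theta, scale=1.0)
    return 2.0 - 2.0 * np.sum(np.sqrt(f_hat * g)) * dx

res = minimize_scalar(hellinger_sq, bounds=(-5, 15), method="bounded")
print("sample mean             :", round(float(data.mean()), 3))   # pulled toward the outliers
print("minimum Hellinger estim.:", round(float(res.x), 3))          # much less affected by them
```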

Kernel Regression with Correlation Coefficient Weighted Distance

  • 신호철;박문규;이재용;류석진
    • 대한전기학회:학술대회논문집 / 대한전기학회 2006년 학술대회 논문집 정보 및 제어부문 / pp.588-590 / 2006
  • Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. An on-line monitoring (OLM) method evaluates instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model giving an estimate of the true process parameter value against individual measurements. This model gives a process parameter estimate calculated as a function of other plant measurements, which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto-associative kernel regression obtained by introducing correlation coefficient weighting on the kernel distances. The prediction performance of the developed method is compared with that of conventional auto-associative kernel regression.

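A minimal sketch of auto-associative kernel regression with correlation-weighted distances, under the assumption that each channel's distance contribution is weighted by its average absolute correlation with the other channels; the paper's exact weighting may differ, and the memory matrix here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Memory matrix of fault-free historical observations: three correlated "sensor" channels
# plus one unrelated noise channel.
t = rng.normal(size=500)
X_mem = np.column_stack([t + 0.05 * rng.normal(size=500) for _ in range(3)]
                        + [rng.normal(size=500)])

# Per-variable weights from each channel's average absolute correlation with the others
# (an assumed weighting scheme, standing in for the paper's formulation).
C = np.abs(np.corrcoef(X_mem, rowvar=False))
np.fill_diagonal(C, 0.0)
var_w = C.mean(axis=0)
var_w /= var_w.sum()

def aakr_estimate(x_query, X_mem, var_w, h=0.5):
    """Auto-associative kernel regression estimate with correlation-weighted distances."""
    d2 = ((X_mem - x_query) ** 2 * var_w).sum(axis=1)   # weighted squared distances
    w = np.exp(-d2 / (2 * h ** 2))                      # Gaussian kernel weights
    return (w[:, None] * X_mem).sum(axis=0) / w.sum()

# A query where channel 0 has drifted by +0.5: the estimate tracks the consistent channels.
x_query = np.array([0.7, 0.2, 0.2, 0.0])
print("query   :", x_query)
print("estimate:", np.round(aakr_estimate(x_query, X_mem, var_w), 3))
```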

New Fuzzy Inference System Using a Kernel-based Method

  • Kim, Jong-Cheol;Won, Sang-Chul;Suga, Yasuo
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2003년도 ICCAS / pp.2393-2398 / 2003
  • In this paper, we propose a new fuzzy inference system for modeling nonlinear systems from given input and output data. In the suggested fuzzy inference system, the number of fuzzy rules and the parameter values of the membership functions are decided automatically by a kernel-based method. The kernel-based method performs a linear transformation and a kernel mapping separately: the linear transformation projects the input space into a linearly transformed input space, and the kernel mapping projects the linearly transformed input space into a high-dimensional feature space. The structure of the proposed fuzzy inference system is equivalent to a Takagi-Sugeno fuzzy model whose input variables are weighted linear combinations of the original input variables. In addition, the number of fuzzy rules can be reduced, while optimizing a given criterion, by adjusting the linear transformation matrix and the kernel-function parameters with the gradient descent method. Once a structure is selected, the coefficients of the consequent part are determined by the least squares method. Simulation results illustrate the effectiveness of the proposed technique.

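A minimal sketch of the model class the abstract describes: a Takagi-Sugeno system whose Gaussian (kernel-like) antecedents act on linearly transformed inputs and whose linear consequents are solved by least squares. The transformation matrix, rule centers, and kernel width are fixed, assumed values here; in the paper they are tuned by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])                   # toy nonlinear target

# Antecedents act on weighted linear combinations of the inputs (a fixed, assumed transformation).
A = np.array([[1.0, 0.5],
              [-0.5, 1.0]])
Z = X @ A.T

R = 6                                                   # number of fuzzy rules
centers = Z[rng.choice(len(Z), R, replace=False)]       # rule centers taken from the data
sigma = 1.0                                             # kernel width (assumed)

def firing(Z):
    """Gaussian rule firing strengths, normalized per sample (Takagi-Sugeno antecedents)."""
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    mu = np.exp(-d2 / (2 * sigma ** 2))
    return mu / mu.sum(axis=1, keepdims=True)

# Consequents are linear in the original inputs; solve their coefficients by least squares.
Phi = firing(Z)                                         # (n, R) normalized firings
Xa = np.hstack([X, np.ones((len(X), 1))])               # augmented inputs [x1, x2, 1]
D = (Phi[:, :, None] * Xa[:, None, :]).reshape(len(X), -1)
theta, *_ = np.linalg.lstsq(D, y, rcond=None)

print("training RMSE:", round(float(np.sqrt(np.mean((D @ theta - y) ** 2))), 4))
```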

NEW ALGORITHM FOR THE DETERMINATION OF AN UNKNOWN PARAMETER IN PARABOLIC EQUATIONS

  • Yue, Sufang;Cui, Minggen
    • 한국수학교육학회지시리즈B:순수및응용수학 / Vol. 15, No. 1 / pp.19-34 / 2008
  • A new algorithm is considered for solving the inverse problem of determining an unknown source parameter in a parabolic equation in a reproducing kernel space. Numerical experiments are presented to demonstrate the accuracy and efficiency of the proposed algorithm.

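The abstract does not state the governing equations, so the following is only a representative formulation of this class of inverse problems (an assumed form, not necessarily the one treated in the paper): recover the source parameter p(t), together with u(x, t), from an overspecified interior measurement.

```latex
\begin{aligned}
  & u_t = u_{xx} + p(t)\,u + f(x,t), && 0 < x < 1,\; 0 < t \le T,\\
  & u(x,0) = \varphi(x),             && 0 \le x \le 1,\\
  & u(0,t) = g_0(t),\quad u(1,t) = g_1(t), && 0 < t \le T,\\
  & u(x^{*},t) = E(t),               && 0 < t \le T \quad \text{(overspecified data at a fixed } x^{*}\text{)}.
\end{aligned}
```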

Automation of Model Selection through Neural Networks Learning

  • 류재흥
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2004년도 추계학술대회 학술발표 논문집 Vol. 14, No. 2 / pp.313-316 / 2004
  • Model selection is the process of setting the regularization parameter in a support vector machine or regularization network by means of external methods such as generalized cross-validation or the L-curve criterion. This paper suggests that the regularization parameter can instead be obtained within the learning process of the neural network itself, without resorting to a separate selection method. An extended kernel method is introduced, and the relationship between the regularization parameter and the bias term in the extended kernel is established. Experimental results show the effectiveness of the new model selection method.

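For contrast with the paper's in-learning approach, here is a minimal sketch of the conventional external selection the abstract mentions: a generalized cross-validation sweep of the regularization parameter for a kernel regularization network. The Gaussian kernel, its width, and the lambda grid are arbitrary choices; the paper's extended kernel method itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)
x = np.sort(rng.uniform(-3, 3, 80))
y = np.sinc(x) + 0.1 * rng.normal(size=80)              # noisy samples of a smooth target

# Gaussian kernel Gram matrix for a regularization network / kernel ridge model.
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.5 ** 2))

def gcv(lam):
    """Generalized cross-validation score for the smoothing matrix A = K (K + lam I)^-1."""
    n = len(y)
    A = K @ np.linalg.solve(K + lam * np.eye(n), np.eye(n))
    resid = y - A @ y
    return n * (resid @ resid) / (n - np.trace(A)) ** 2

lams = np.logspace(-6, 2, 50)
scores = [gcv(l) for l in lams]
best = lams[int(np.argmin(scores))]
print("GCV-selected regularization parameter:", best)
```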