• Title/Summary/Keyword: Kernel parameter


Kernel Analysis of Weighted Linear Interpolation Based on Even-Odd Decomposition (짝수 홀수 분해 기반의 가중 선형 보간법을 위한 커널 분석)

  • Oh, Eun-ju;Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.11 / pp.1455-1461 / 2018
  • This paper presents a kernel analysis of weighted linear interpolation (WLI) based on even-odd decomposition (EOD). The EOD method has the advantages of low complexity and better image quality than the CCI method. However, since the kernel of EOD has not been studied before and its analysis has not yet been addressed, this paper proposes the kernel function and its analysis. The kernel function is divided into odd and even terms, and the kernel is then obtained by summing the two terms. The proposed kernel is adjustable by a parameter, which influences the efficiency of the EOD-based WLI process. Kernel shapes obtained by adjusting the parameter are also presented, together with a discussion that aids understanding of the parameter. A preliminary experiment on the kernel shape illustrates the adjustable parameter and the corresponding kernel.
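The paper's EOD kernel itself is not reproduced here, but the even-odd split it builds on can be sketched: any kernel k(x) decomposes into even and odd terms whose sum recovers k. A minimal illustration with the standard linear-interpolation (triangle) kernel; nothing about the paper's specific parameterization is assumed:

```python
import numpy as np

def triangle_kernel(x):
    # standard linear-interpolation (triangle) kernel
    return np.maximum(1.0 - np.abs(x), 0.0)

def even_part(k, x):
    # even term: symmetric component of the kernel
    return 0.5 * (k(x) + k(-x))

def odd_part(k, x):
    # odd term: antisymmetric component of the kernel
    return 0.5 * (k(x) - k(-x))

x = np.linspace(-2.0, 2.0, 9)
# summing the even and odd terms reconstructs the original kernel
recon = even_part(triangle_kernel, x) + odd_part(triangle_kernel, x)
```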

Kernel Poisson regression for mixed input variables

  • Shim, Jooyong
    • Journal of the Korean Data and Information Science Society / v.23 no.6 / pp.1231-1239 / 2012
  • An estimating procedure is introduced for kernel Poisson regression when the input variables consist of numerical and categorical variables; it is based on the penalized negative log-likelihood and the component-wise product of two different types of kernel functions. The proposed procedure provides estimates of the mean function of the response variable, where the canonical parameter is linearly and/or nonlinearly related to the input variables. Experimental results indicating the performance of the proposed kernel Poisson regression are then presented.
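The component-wise product of two kernel types described above can be sketched as follows; the RBF/overlap pairing and the `gamma` value are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=1.0):
    # kernel for the numerical component
    return np.exp(-gamma * (x1 - x2) ** 2)

def overlap_kernel(c1, c2):
    # kernel for the categorical component: 1 if levels match, else 0
    return 1.0 if c1 == c2 else 0.0

def mixed_kernel(z1, z2, gamma=1.0):
    # component-wise product of the two kernel types
    (x1, c1), (x2, c2) = z1, z2
    return rbf_kernel(x1, x2, gamma) * overlap_kernel(c1, c2)

same = mixed_kernel((0.0, "a"), (0.0, "a"))  # identical inputs
diff = mixed_kernel((0.0, "a"), (0.0, "b"))  # categories differ
```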

Kernel Machine for Poisson Regression

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society / v.18 no.3 / pp.767-772 / 2007
  • A kernel machine is proposed as an estimating procedure for linear and nonlinear Poisson regression, based on the penalized negative log-likelihood. The proposed kernel machine provides an estimate of the mean function of the response variable, where the canonical parameter is related to the input vector in a nonlinear form. A generalized cross-validation (GCV) function of MSE type is introduced to determine the hyperparameters that affect the performance of the machine. Experimental results indicating the performance of the proposed machine are then presented.
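A minimal sketch of the penalized negative log-likelihood objective for kernel Poisson regression, fitted by plain gradient descent; the RBF Gram matrix, toy counts, step size, and penalty value are illustrative assumptions, and the paper's GCV-based hyperparameter selection is not reproduced:

```python
import numpy as np

def penalized_poisson_nll(alpha, K, y, lam):
    # canonical link: the log of the Poisson mean is f = K @ alpha
    f = K @ alpha
    return np.sum(np.exp(f) - y * f) + lam * alpha @ K @ alpha

def fit(K, y, lam=0.1, lr=0.01, steps=500):
    # plain gradient descent on the penalized objective
    alpha = np.zeros(len(y))
    for _ in range(steps):
        grad = K @ (np.exp(K @ alpha) - y) + 2.0 * lam * K @ alpha
        alpha -= lr * grad
    return alpha

x = np.linspace(0.0, 1.0, 5)
K = np.exp(-(x[:, None] - x[None, :]) ** 2)  # RBF Gram matrix
y = np.array([1.0, 2.0, 3.0, 2.0, 1.0])      # toy count responses
alpha = fit(K, y)
```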


Function Approximation Based on a Network with Kernel Functions of Bounds and Locality : an Approach of Non-Parametric Estimation

  • Kil, Rhee-M.
    • ETRI Journal / v.15 no.2 / pp.35-51 / 1993
  • This paper presents function approximation based on nonparametric estimation. As the estimation model, a three-layered network composed of input, hidden, and output layers is considered. The input and output layers have linear activation units, while the hidden layer has nonlinear activation units, or kernel functions, with the characteristics of bounds and locality. Using this type of network, a many-to-one function is synthesized over the domain of the input space by a number of kernel functions. In this network, the necessary number of kernel functions must be estimated, as well as the parameters associated with them. For this purpose, a new method of parameter estimation is considered in which a linear learning rule is applied between the hidden and output layers while a nonlinear (piecewise-linear) learning rule is applied between the input and hidden layers. The linear rule updates the output weights between the hidden and output layers in the sense of Linear Minimization of Mean Square Error (LMMSE) in the space of kernel functions, while the nonlinear rule updates the parameters of the kernel functions based on the gradient of the actual network output with respect to those parameters (especially, the shape). This approach of parameter adaptation provides near-optimal values of the kernel-function parameters in the sense of minimizing mean square error. As a result, the suggested nonparametric estimation provides an efficient way of function approximation from the viewpoint of both the number of kernel functions and the learning speed.
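The linear learning rule between the hidden and output layers can be sketched as a least-squares solve for the output weights given fixed Gaussian kernel units; the target function, number of centers, and width below are illustrative assumptions, and the paper's nonlinear rule for adapting kernel shapes is not reproduced:

```python
import numpy as np

def gaussian_units(x, centers, width):
    # hidden-layer kernel activations: bounded and local
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x)           # toy many-to-one target function
centers = np.linspace(0.0, 1.0, 8)
H = gaussian_units(x, centers, width=0.15)

# LMMSE step: output weights by linear least squares over the kernel units
w, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = np.mean((H @ w - y) ** 2)
```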


Selection of Kernels and its Parameters in Applying SVM to ASV (온라인 서명 검증을 위한 SVM의 커널 함수와 결정 계수 선택)

  • Fan, Yunhe;Woo, Young-Woon;Kim, Seong-Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.1045-1046 / 2015
  • When using a Support Vector Machine for online signature verification, a kernel function must be chosen to obtain a non-linear SVM, and the constant parameters of the kernel function must be adjusted to appropriate values to reduce the verification error rate. A non-linear SVM, built on a strong mathematical basis, shows better classification performance with higher discriminative power. However, choosing the kernel function and adjusting its constant parameter values depend on heuristics of the problem domain. This paper deals with selecting the proper kernel function and constant parameter values for signature verification, and reports the kernel function and coefficient values that yield the minimum error rate. As a result of this research, we expect the average error rate to be less than 1%.
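The selection procedure amounts to a grid search over candidate kernel functions and their constants, scored by verification error rate. The evaluator below is a toy stand-in with hypothetical numbers (a real system would train an SVM on signature features and measure its error), and the candidate values are illustrative assumptions:

```python
# toy stand-in for the verification error of an SVM trained with the
# given kernel and constant parameter (hypothetical numbers)
def error_rate(kernel, param):
    table = {("rbf", 0.5): 0.008, ("rbf", 1.0): 0.020,
             ("poly", 2): 0.030, ("poly", 3): 0.050}
    return table[(kernel, param)]

# grid of (kernel, parameter) candidates; keep the pair with minimum error
grid = [("rbf", 0.5), ("rbf", 1.0), ("poly", 2), ("poly", 3)]
best_kernel, best_param = min(grid, key=lambda kp: error_rate(*kp))
```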


Reducing Bias of the Minimum Hellinger Distance Estimator of a Location Parameter

  • Pak, Ro-Jin
    • Journal of the Korean Data and Information Science Society / v.17 no.1 / pp.213-220 / 2006
  • Since Beran (1977) developed minimum Hellinger distance estimation, this method has been a popular topic in the field of robust estimation. In the process of defining the distance, a kernel density estimator has been widely used. In this article, however, we show that combining a kernel density estimator with an empirical density can yield a smaller bias of the minimum Hellinger distance estimator of a location parameter than using a kernel density estimator alone.
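A minimal sketch of minimum Hellinger distance estimation of a location parameter using only a kernel density estimate (the paper's KDE/empirical-density blend is not reproduced); the bandwidth, grid, and N(theta, 1) model family are illustrative assumptions:

```python
import numpy as np

def kde(data, grid, h):
    # Gaussian kernel density estimate evaluated on a grid
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

def hellinger_sq(p, q, dx):
    # squared Hellinger distance between two densities on the grid
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * dx

def mhd_location(data, grid, h):
    dx = grid[1] - grid[0]
    fhat = kde(data, grid, h)
    normal = lambda th: np.exp(-0.5 * (grid - th) ** 2) / np.sqrt(2.0 * np.pi)
    # candidate N(theta, 1) models; pick theta minimizing the distance
    return min(grid, key=lambda th: hellinger_sq(fhat, normal(th), dx))

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 200)                       # true location is 0
theta = mhd_location(data, np.linspace(-4.0, 4.0, 161), h=0.5)
```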


Kernel Regression with Correlation Coefficient Weighted Distance (상관계수 가중법을 이용한 커널회귀 방법)

  • Shin, Ho-Cheol;Park, Moon-Ghu;Lee, Jae-Yong;You, Skin
    • Proceedings of the KIEE Conference / 2006.10c / pp.588-590 / 2006
  • Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. On-line monitoring (OLM) evaluates instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model giving an estimate of the true process parameter value against individual measurements. This model yields a process parameter estimate, calculated as a function of other plant measurements, which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto-associative kernel regression obtained by introducing a correlation-coefficient weighting on kernel distances. The prediction performance of the developed method is compared with conventional auto-associative kernel regression.
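The correlation-coefficient weighting on kernel distances can be sketched as below; the specific weighting scheme (absolute correlation with a reference channel) and the Gaussian kernel are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def corr_weights(X, j):
    # weight each channel by its |correlation| with reference channel j
    C = np.corrcoef(X, rowvar=False)
    w = np.abs(C[j])
    return w / w.sum()

def kernel_estimate(X, query, weights, h=1.0):
    # auto-associative Nadaraya-Watson estimate with weighted distance
    d2 = ((X - query) ** 2 * weights).sum(axis=1)
    k = np.exp(-d2 / (2.0 * h ** 2))
    return (k[:, None] * X).sum(axis=0) / k.sum()

# toy history: two perfectly correlated channels
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
w = corr_weights(X, 0)
est = kernel_estimate(X, np.array([1.0, 1.0]), w)
```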


New Fuzzy Inference System Using a Kernel-based Method

  • Kim, Jong-Cheol;Won, Sang-Chul;Suga, Yasuo
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.2393-2398 / 2003
  • In this paper, we propose a new fuzzy inference system for modeling nonlinear systems from input and output data. In the suggested system, the number of fuzzy rules and the parameter values of the membership functions are decided automatically by a kernel-based method, which separately performs a linear transformation and a kernel mapping. The linear transformation projects the input space into a linearly transformed input space; the kernel mapping then projects that space into a high-dimensional feature space. The structure of the proposed fuzzy inference system equals a Takagi-Sugeno fuzzy model whose input variables are weighted linear combinations of the original inputs. In addition, the number of fuzzy rules can be reduced, subject to optimizing a given criterion, by adjusting the linear transformation matrix and the kernel parameter values with the gradient descent method. Once a structure is selected, the coefficients of the consequent part are determined by the least-squares method. Simulation results illustrate the effectiveness of the proposed technique.
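The separate linear transformation and kernel mapping can be sketched as a kernel evaluated on linearly transformed inputs; the transformation matrices and `gamma` below are illustrative assumptions:

```python
import numpy as np

def transformed_rbf(x1, x2, A, gamma=1.0):
    # linear transformation A first, then kernel mapping via an RBF kernel
    z1, z2 = A @ x1, A @ x2
    return np.exp(-gamma * np.sum((z1 - z2) ** 2))

x1 = np.array([1.0, 0.0])
x2 = np.array([0.0, 1.0])
k_id = transformed_rbf(x1, x2, np.eye(2))           # identity: plain RBF
k_zero = transformed_rbf(x1, x2, np.zeros((2, 2)))  # zero map collapses all inputs
```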


NEW ALGORITHM FOR THE DETERMINATION OF AN UNKNOWN PARAMETER IN PARABOLIC EQUATIONS

  • Yue, Sufang;Cui, Minggen
    • The Pure and Applied Mathematics / v.15 no.1 / pp.19-34 / 2008
  • A new algorithm is considered for the solution of an inverse problem of determining an unknown source parameter in a parabolic equation in a reproducing kernel space. Numerical experiments demonstrate the accuracy and efficiency of the proposed algorithm.


Automation of Model Selection through Neural Networks Learning (신경 회로망 학습을 통한 모델 선택의 자동화)

  • 류재흥
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.10a / pp.313-316 / 2004
  • Model selection is the process of setting the regularization parameter in a support vector machine or regularization network using external methods such as generalized cross-validation or the L-curve criterion. This paper suggests that the regularization parameter can instead be obtained within the learning process of the neural network, without resort to separate selection methods. An extended kernel method is introduced, and the relationship between the regularization parameter and the bias term in the extended kernel is established. Experimental results show the effectiveness of the new model selection method.
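For reference, the standard regularization-network fit that the external selection methods tune can be sketched as a single linear solve, with coefficients a = (K + lam I)^(-1) y; the toy Gram matrix is an illustrative assumption, and the paper's extended-kernel mechanism for learning the regularization parameter itself is not reproduced here:

```python
import numpy as np

def regularization_network_fit(K, y, lam):
    # closed-form coefficients a = (K + lam*I)^-1 y of a regularization network
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

K = np.eye(2)                  # toy Gram matrix
y = np.array([2.0, 4.0])
a = regularization_network_fit(K, y, lam=1.0)   # (I + I)^-1 y = y / 2
```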
