• Title/Summary/Keyword: regression function


Kernel Machine for Poisson Regression

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society / v.18 no.3 / pp.767-772 / 2007
  • A kernel machine is proposed as an estimating procedure for linear and nonlinear Poisson regression, based on the penalized negative log-likelihood. The proposed kernel machine provides an estimate of the mean function of the response variable, where the canonical parameter is related to the input vector in a nonlinear form. A generalized cross-validation (GCV) function of MSE type is introduced to determine the hyperparameters which affect the performance of the machine. Experimental results are then presented which indicate the performance of the proposed machine.

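As an illustration of the approach in the abstract above, here is a minimal sketch of kernel Poisson regression fitted by IRLS-style Newton updates on the penalized negative log-likelihood. It is not the author's implementation: the Gaussian (RBF) kernel, the synthetic data, and the hand-picked hyperparameters (in place of GCV selection) are all assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """RBF (Gaussian) kernel matrix between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_poisson_fit(X, y, lam=1.0, gamma=1.0, n_iter=25):
    """Fit f = K @ alpha by minimizing the penalized negative Poisson
    log-likelihood  sum(exp(f) - y*f) + (lam/2) a'Ka  with IRLS updates."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(len(y))
    for _ in range(n_iter):
        f = K @ alpha
        mu = np.exp(f)                 # canonical link: mean = exp(f)
        z = f + (y - mu) / mu          # working response
        W_inv = np.diag(1.0 / mu)      # inverse of the IRLS weight matrix
        alpha = np.linalg.solve(K + lam * W_inv, z)
    return alpha, K

# usage: counts generated from a smooth rate function (illustrative data)
rng = np.random.default_rng(0)
X = np.linspace(0, 3, 60)[:, None]
y = rng.poisson(np.exp(1.0 + np.sin(X[:, 0])))
alpha, K = kernel_poisson_fit(X, y, lam=0.5, gamma=1.0)
mu_hat = np.exp(K @ alpha)             # fitted mean function
```

Each IRLS step is a weighted kernel ridge solve, which is the standard way to extend kernel machines from squared loss to a GLM likelihood.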
Variable selection in censored kernel regression

  • Choi, Kook-Lyeol;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society / v.24 no.1 / pp.201-209 / 2013
  • For censored regression, it is often the case that some input variables are not important, while others are more important. We propose a novel algorithm for selecting such important input variables in censored kernel regression, based on penalized regression with a weighted quadratic loss function for the censored data, where the weight is computed from the empirical survival function of the censoring variable. We employ a weighted version of ANOVA decomposition kernels to choose an optimal subset of important input variables. Experimental results are then presented which indicate the performance of the proposed variable selection method.

$L^1$ Bandwidth Selection in Kernel Regression Function Estimation

  • Jhun, Myong-Shic
    • Journal of the Korean Statistical Society / v.17 no.1 / pp.1-8 / 1988
  • Kernel estimates of an unknown regression function are studied. A bandwidth selection rule minimizing the integrated absolute error loss function is considered. Under some reasonable assumptions, it is shown that the optimal bandwidth is unique and can be computed by a bisection algorithm. An adaptive bandwidth selection rule is also proposed.

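The bandwidth-selection idea above can be sketched as follows, assuming (as the paper shows under its conditions) that the optimal bandwidth is unique. This is not the paper's procedure: here bisection is applied to a finite-difference derivative of the leave-one-out absolute-error criterion of a Nadaraya-Watson estimator with a Gaussian kernel, and all tuning values are illustrative.

```python
import numpy as np

def nw_loo_abs_error(x, y, h):
    """Leave-one-out mean absolute error of the Nadaraya-Watson
    estimator with a Gaussian kernel and bandwidth h."""
    d = x[:, None] - x[None, :]
    W = np.exp(-0.5 * (d / h) ** 2)
    np.fill_diagonal(W, 0.0)              # leave each point out of its own fit
    yhat = W @ y / W.sum(axis=1)
    return np.mean(np.abs(y - yhat))

def select_bandwidth(x, y, lo=0.05, hi=2.0, tol=1e-4):
    """Bisection on a numerical derivative of the L1 criterion,
    assuming a unique interior minimizer."""
    eps = 1e-5
    deriv = lambda h: (nw_loo_abs_error(x, y, h + eps)
                       - nw_loo_abs_error(x, y, h - eps)) / (2 * eps)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if deriv(mid) > 0:                # error increasing: oversmoothing
            hi = mid
        else:                             # error decreasing: undersmoothing
            lo = mid
    return 0.5 * (lo + hi)

# usage on a noisy sine curve (illustrative data)
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 80)
h_opt = select_bandwidth(x, y)
```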
Optimization of Regression Model Using Genetic Algorithm and Desirability Function (유전 알고리즘과 호감도 함수를 이용한 회귀모델의 최적화)

  • 안홍락;이세헌
    • Proceedings of the Korean Society of Precision Engineering Conference / 1997.10a / pp.450-453 / 1997
  • There have been many studies on optimization using genetic algorithms and desirability functions. Finding the optimal value of a response surface or regression model is an important problem. In this study we point out a problem with the conventional desirability function, suggest a new desirability function that fixes the problem, and simulate the model. We then propose a form of desirability function for finding the optimum of response surfaces built from the mean and standard deviation, using a genetic algorithm together with the new desirability function.

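A rough sketch of optimizing a composite desirability with a genetic algorithm, in the spirit of the abstract above. The desirability used here is the standard Derringer-Suich "target is best" form, not the new form the paper proposes, and the two fitted response models (mean and standard deviation) are hypothetical.

```python
import numpy as np

def desirability(y, low, target, high):
    """Derringer-Suich 'target is best' desirability in [0, 1]."""
    if y <= low or y >= high:
        return 0.0
    if y <= target:
        return (y - low) / (target - low)
    return (high - y) / (high - target)

def overall_D(x, models, specs):
    """Geometric mean of per-response desirabilities."""
    ds = [desirability(m(x), *s) for m, s in zip(models, specs)]
    return 0.0 if min(ds) == 0 else float(np.prod(ds)) ** (1 / len(ds))

def ga_optimize(models, specs, bounds, pop=40, gens=60, seed=0):
    """Tiny real-coded GA: truncation selection plus Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = np.array([overall_D(x, models, specs) for x in P])
        elite = P[np.argsort(fit)[-pop // 2:]]          # keep best half
        kids = elite + rng.normal(0, 0.05, elite.shape) * (hi - lo)
        P = np.clip(np.vstack([elite, kids]), lo, hi)
    fit = np.array([overall_D(x, models, specs) for x in P])
    return P[np.argmax(fit)]

# usage: two hypothetical regression models of one input variable
mean_model = lambda x: 2.0 + 3.0 * x[0]          # fitted mean response
sd_model = lambda x: 1.0 + (x[0] - 0.5) ** 2     # fitted std-dev response
best = ga_optimize([mean_model, sd_model],
                   specs=[(2.0, 3.5, 5.0), (0.9, 1.0, 1.3)],
                   bounds=[(0.0, 1.0)])
```

With these hypothetical models both desirabilities peak at x = 0.5, so the GA should converge near that point.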
Support Vector Quantile Regression with Weighted Quadratic Loss Function

  • Shim, Joo-Yong;Hwang, Chang-Ha
    • Communications for Statistical Applications and Methods / v.17 no.2 / pp.183-191 / 2010
  • Support vector quantile regression (SVQR) is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In this paper we propose an iteratively reweighted least squares (IRWLS) procedure to solve the problem of SVQR with a weighted quadratic loss function. Furthermore, we introduce the generalized approximate cross-validation function to select the hyperparameters which affect the performance of SVQR. Experimental results are then presented which illustrate the performance of the IRWLS procedure for SVQR.

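The IRWLS idea above, stripped of the support-vector machinery, can be sketched for plain linear quantile regression: the check loss is matched at the current residuals by a weighted quadratic loss, so each step reduces to a weighted least-squares solve. A minimal sketch, not the authors' SVQR algorithm.

```python
import numpy as np

def quantile_irwls(X, y, tau=0.5, n_iter=50, eps=1e-6):
    """Linear quantile regression by IRWLS: the check loss rho_tau(r)
    equals w * r^2 at the current residuals when
    w = tau/|r| for r > 0 and (1 - tau)/|r| otherwise."""
    Xb = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta = np.linalg.lstsq(Xb, y, rcond=None)[0]  # OLS start
    for _ in range(n_iter):
        r = y - Xb @ beta
        w = np.where(r > 0, tau, 1 - tau) / np.maximum(np.abs(r), eps)
        WX = Xb * w[:, None]
        beta = np.linalg.solve(Xb.T @ WX, WX.T @ y)  # weighted LS solve
    return beta

# usage: median regression on illustrative data with true line 1 + 2x
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 300)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, 300)
b_med = quantile_irwls(x[:, None], y, tau=0.5)
```

Changing `tau` to 0.1 or 0.9 fits the corresponding conditional quantile instead of the median.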
Support Vector Machine for Interval Regression

  • Hong Dug Hun;Hwang Changha
    • Proceedings of the Korean Statistical Society Conference / 2004.11a / pp.67-72 / 2004
  • Support vector machine (SVM) has been very successful in pattern recognition and function estimation problems for crisp data. This paper proposes a new method for evaluating interval linear and nonlinear regression models by combining the possibility and necessity estimation formulation with the principle of SVM. For data sets with crisp inputs and interval outputs, possibility and necessity models have recently been utilized; they are based on a quadratic programming approach, which gives more diverse spread coefficients than a linear programming one. SVM also uses a quadratic programming approach, whose further advantage in interval regression analysis is the ability to integrate both the property of central tendency in least squares and the possibilistic property in fuzzy regression. Moreover, this approach is not computationally expensive. SVM allows us to perform interval nonlinear regression analysis by constructing an interval linear regression function in a high-dimensional feature space; in particular, SVM is a very attractive approach to modeling nonlinear interval data. The proposed algorithm is model-free in the sense that we do not have to assume an underlying model function for the interval nonlinear regression model with crisp inputs and interval outputs. Experimental results are then presented which indicate the performance of this algorithm.

Test for Discontinuities in Nonparametric Regression

  • Park, Dong-Ryeon
    • Communications for Statistical Applications and Methods / v.15 no.5 / pp.709-717 / 2008
  • The difference of two one-sided kernel estimators is usually used to detect the location of discontinuity points of a regression function. A large absolute value of the statistic implies discontinuity of the regression function, so we may use the difference of two one-sided kernel estimators as the test statistic for testing the null hypothesis of a smooth regression function. The problem, however, is that we only know the asymptotic distribution of the test statistic under $H_0$, and we can hardly expect good performance of the test if we rely solely on the asymptotic distribution to determine the critical points. In this paper, we show that if the bias of the test statistic is adjusted properly, the asymptotic rules hold even in small-sample situations.

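The test statistic described above, the difference of two one-sided kernel estimators, can be sketched as follows. The Gaussian one-sided weights, window width, and synthetic jump data are illustrative assumptions, and no bias adjustment of the kind the paper studies is applied.

```python
import numpy as np

def one_sided_diff(x, y, t, h):
    """Difference of right- and left-sided Nadaraya-Watson estimates
    at point t; a large |difference| suggests a jump near t."""
    def side_mean(mask):
        d = (x[mask] - t) / h
        w = np.exp(-0.5 * d ** 2)
        return np.sum(w * y[mask]) / np.sum(w)
    right = side_mean((x > t) & (x <= t + h))   # points just right of t
    left = side_mean((x < t) & (x >= t - h))    # points just left of t
    return right - left

# usage: regression function with a unit jump at x = 0.5 (illustrative)
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 400))
y = np.where(x > 0.5, 1.0, 0.0) + rng.normal(0, 0.1, 400)
grid = np.linspace(0.1, 0.9, 81)
stats = [abs(one_sided_diff(x, y, t, h=0.05)) for t in grid]
t_hat = grid[int(np.argmax(stats))]   # location with the strongest evidence
```

Scanning the statistic over a grid locates the discontinuity; turning the maximum into a formal test requires the critical values the paper is concerned with.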
Variable selection in L1 penalized censored regression

  • Hwang, Chang-Ha;Kim, Mal-Suk;Shim, Joo-Yong
    • Journal of the Korean Data and Information Science Society / v.22 no.5 / pp.951-959 / 2011
  • The proposed method is based on a penalized censored regression model with an L1 penalty. We use the iteratively reweighted least squares procedure to solve the L1 penalized log-likelihood of the censored regression model. It provides efficient computation of the regression parameters, including variable selection, and leads to the generalized cross-validation function for model selection. Numerical results are then presented to indicate the performance of the proposed method.

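A minimal sketch of the L1-penalty-via-IRLS idea, with ordinary squared loss standing in for the censored log-likelihood of the paper: each |b_j| is majorized by a quadratic at the current iterate, so every step is a ridge-type solve that drives small coefficients toward zero (variable selection). The data and penalty level are illustrative assumptions.

```python
import numpy as np

def l1_irls(X, y, lam=1.0, n_iter=100, eps=1e-6):
    """L1-penalized least squares by IRLS: the penalty lam*|b_j| is
    majorized by lam * b_j^2 / (2|b_j_old|) + const, so each step
    solves (X'X + lam * diag(1/|b_old|)) b = X'y."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS start
    for _ in range(n_iter):
        D = np.diag(lam / np.maximum(np.abs(beta), eps))
        beta = np.linalg.solve(X.T @ X + D, X.T @ y)
    return beta

# usage: 6 candidate variables, 3 of which are truly irrelevant
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 6))
y = X @ np.array([3.0, 0.0, -2.0, 0.0, 0.0, 1.5]) + rng.normal(0, 0.5, 200)
beta = l1_irls(X, y, lam=20.0)   # irrelevant coefficients shrink to ~0
```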
Gaussian Process Regression and Its Application to Mathematical Finance (가우시언 과정의 회귀분석과 금융수학의 응용)

  • Lim, Hyuncheul
    • Journal for History of Mathematics / v.35 no.1 / pp.1-18 / 2022
  • This paper presents a statistical machine learning method that generates the implied volatility surface when market data are scarce. We apply the practitioner's Black-Scholes model and the Gaussian process regression method to construct a Bayesian inference system, with the observed volatilities as prior information, and estimate the posterior distribution of the unobserved volatilities. The variance, rather than the volatility, is the target of the estimation, and the radial basis function is applied to the mean and kernel functions of the Gaussian process regression. We present two types of Gaussian process regression methods and analyze them empirically.
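
A minimal sketch of Gaussian process regression with an RBF kernel in the spirit of the abstract above, assuming a zero-mean prior (the paper also models the mean with an RBF) and purely illustrative volatility-smile numbers:

```python
import numpy as np

def rbf(A, B, length=0.3, sigma=0.2):
    """RBF covariance between two sets of 1-D inputs."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return sigma ** 2 * np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-4, **kw):
    """Posterior mean and variance of a zero-mean GP with an RBF
    kernel, conditioned on the observed values."""
    K = rbf(x_obs, x_obs, **kw) + noise * np.eye(len(x_obs))
    Ks = rbf(x_new, x_obs, **kw)
    Kss = rbf(x_new, x_new, **kw)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = Ks @ alpha                       # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)   # posterior variance
    return mean, var

# usage: a few observed implied variances over strike (illustrative numbers)
strikes = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
variances = np.array([0.06, 0.05, 0.04, 0.045, 0.055])   # smile shape
grid = np.linspace(0.8, 1.2, 50)
mean, var = gp_posterior(strikes, variances, grid)
```

The posterior mean interpolates the sparse observations and the posterior variance quantifies the uncertainty between them, which is what makes the method attractive when market quotes are rare.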