• Title/Summary/Keyword: kernel smoothing method

A FRAMEWORK TO UNDERSTAND THE ASYMPTOTIC PROPERTIES OF KRIGING AND SPLINES

  • Furrer, Eva M.; Nychka, Douglas W.
    • Journal of the Korean Statistical Society / v.36 no.1 / pp.57-76 / 2007
  • Kriging is a nonparametric regression method used in geostatistics for estimating curves and surfaces for spatial data. It may come as a surprise that the Kriging estimator, normally derived as the best linear unbiased estimator, is also the solution of a particular variational problem. Thus, Kriging estimators can also be interpreted as generalized smoothing splines where the roughness penalty is determined by the covariance function of a spatial process. We build on the early work by Silverman (1982, 1984) and the analysis by Cox (1983, 1984), Messer (1991), Messer and Goldstein (1993) and others to develop an equivalent kernel interpretation of geostatistical estimators. Given this connection, we show how a given covariance function influences the bias and variance of the Kriging estimate as well as the mean squared prediction error. Some specific asymptotic results are given in one dimension for Matérn covariances that have cubic smoothing splines as their limit.
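
As background for the Kriging-smoother connection discussed in this abstract, here is a minimal zero-mean simple Kriging sketch in one dimension with a Matérn(3/2) covariance. The covariance parameters, the nugget value, and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def matern_32(h, rho=1.0, sigma2=1.0):
    """Matern covariance with smoothness 3/2 (illustrative choice)."""
    a = np.sqrt(3.0) * np.abs(h) / rho
    return sigma2 * (1.0 + a) * np.exp(-a)

def simple_kriging(x_obs, y_obs, x_new, nugget=1e-2):
    """Zero-mean simple Kriging: predict y at x_new from noisy observations."""
    K = matern_32(x_obs[:, None] - x_obs[None, :]) + nugget * np.eye(len(x_obs))
    k = matern_32(x_new[:, None] - x_obs[None, :])   # cross-covariances
    weights = np.linalg.solve(K, y_obs)              # K^{-1} y
    return k @ weights                               # BLUP at the new locations

# toy example: noisy sine curve on [0, 1]
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 40))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(40)
x_grid = np.linspace(0, 1, 200)
y_hat = simple_kriging(x, y, x_grid)
```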

Varying coefficient model with errors in variables (가변계수 측정오차 회귀모형)

  • Sohn, Insuk; Shim, Jooyong
    • Journal of the Korean Data and Information Science Society / v.28 no.5 / pp.971-980 / 2017
  • The varying coefficient regression model has received considerable attention because it can model dynamic changes of regression coefficients in many scientific regression problems. In this paper we propose a varying coefficient regression model that effectively accounts for errors in both the input and response variables; it uses the kernel method to estimate the varying coefficients, which are unknown nonlinear functions of the smoothing variable. We provide a generalized cross-validation method for choosing the hyper-parameters that affect the performance of the proposed model. The proposed method is evaluated through numerical studies.
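
For context, a minimal sketch of the standard kernel-weighted local least squares estimator of a varying coefficient model follows. It does not account for errors in the input and response variables, which is the paper's contribution; the kernel, bandwidth, and toy data are illustrative assumptions.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def varying_coef(t0, t, X, y, h):
    """Estimate beta(t0) in y = X @ beta(t) + eps by kernel-weighted least squares."""
    w = epanechnikov((t - t0) / h)          # weights from the smoothing variable
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# toy data: two covariates with coefficients that vary smoothly in t
rng = np.random.default_rng(1)
n = 300
t = np.sort(rng.uniform(0, 1, n))
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.column_stack([np.sin(2 * np.pi * t), 1 + t])
y = np.sum(X * beta_true, axis=1) + 0.1 * rng.standard_normal(n)
beta_hat = np.array([varying_coef(t0, t, X, y, h=0.1)
                     for t0 in np.linspace(0.1, 0.9, 9)])
```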

A Bootstrap Test for Linear Relationship by Kernel Smoothing (희귀모형의 선형성에 대한 커널붓스트랩검정)

  • Baek, Jang-Sun; Kim, Min-Soo
    • Journal of the Korean Data and Information Science Society / v.9 no.2 / pp.95-103 / 1998
  • Azzalini and Bowman proposed a pseudo-likelihood ratio test for checking a linear relationship using a kernel regression estimator when the error of the regression model follows a normal distribution. We modify their method with the bootstrap technique to construct a new test, and examine the power of our test through simulation. Our method can be applied to the case where the distribution of the error is not normal.
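
A rough sketch of a bootstrap linearity test in this spirit: fit the linear null model, compare its residual sum of squares with that of a Nadaraya-Watson fit, and calibrate the statistic with a residual bootstrap under the null. The statistic, kernel, and bandwidth below are stand-ins, not the exact construction of Azzalini and Bowman or of this paper.

```python
import numpy as np

def nw_smooth(x, y, grid, h):
    """Nadaraya-Watson estimate of E[y|x] on a grid with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def linearity_stat(x, y, h):
    """Compare residual sums of squares of the linear fit and the kernel fit."""
    b1, b0 = np.polyfit(x, y, 1)
    rss0 = np.sum((y - (b0 + b1 * x)) ** 2)          # under the linear null
    rss1 = np.sum((y - nw_smooth(x, y, x, h)) ** 2)  # under the kernel alternative
    return (rss0 - rss1) / rss1

def bootstrap_pvalue(x, y, h=0.2, B=200, seed=0):
    """Residual bootstrap of the statistic under the fitted linear null model."""
    rng = np.random.default_rng(seed)
    b1, b0 = np.polyfit(x, y, 1)
    fitted, resid = b0 + b1 * x, y - (b0 + b1 * x)
    t_obs = linearity_stat(x, y, h)
    t_boot = np.array([
        linearity_stat(x, fitted + rng.choice(resid, size=len(x), replace=True), h)
        for _ in range(B)
    ])
    return np.mean(t_boot >= t_obs)

# toy data with mild curvature
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = x + 0.3 * x**2 + 0.1 * rng.standard_normal(100)
p_value = bootstrap_pvalue(x, y)
```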

Nonparametric Estimation of Univariate Binary Regression Function

  • Jung, Shin Ae; Kang, Kee-Hoon
    • International Journal of Advanced Culture Technology / v.10 no.1 / pp.236-241 / 2022
  • We consider methods of estimating a binary regression function using nonparametric kernel estimation when there is only one covariate. For this, the Nadaraya-Watson estimator is used with single and double bandwidths. For choosing a proper amount of smoothing, the cross-validation and plug-in methods are compared. As case studies with real data, the German credit data and heart disease data are used. We examine whether the nonparametric estimation of the binary regression function is successful when the smoothing parameter is chosen by the above two approaches, and compare their performance.
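
A minimal sketch of the single-bandwidth Nadaraya-Watson estimator for a binary response, with the bandwidth chosen by leave-one-out cross-validation. The double-bandwidth variant and the plug-in selector compared in the paper are not reproduced, and the toy data are assumptions.

```python
import numpy as np

def nw_prob(x0, x, y, h):
    """Nadaraya-Watson estimate of P(Y=1 | X=x0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def loo_cv_score(x, y, h):
    """Leave-one-out cross-validation score (Brier-type) for bandwidth h."""
    n = len(x)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        p_i = nw_prob(x[i], x[mask], y[mask], h)
        errs.append((y[i] - p_i) ** 2)
    return np.mean(errs)

# choose the bandwidth from a grid by cross-validation
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 200)
y = (rng.uniform(size=200) < 1 / (1 + np.exp(-2 * x))).astype(float)  # logistic truth
grid = np.linspace(0.05, 1.0, 20)
h_cv = grid[np.argmin([loo_cv_score(x, y, h) for h in grid])]
p_hat = np.array([nw_prob(x0, x, y, h_cv) for x0 in np.linspace(-2, 2, 50)])
```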

Neutron Signal Denoising using Edge Preserving Kernel Regression Filter (끝점 신호 보존을 위한 적응 커널 필터를 이용한 중성자 신호 잡음 제거)

  • Park, Moon-Ghu; Shin, Ho-Cheol; Lee, Yong-Kwan; You, Skin
    • Proceedings of the KIEE Conference / 2005.10b / pp.439-441 / 2005
  • A kernel regression filter with adaptive bandwidth is developed and successfully applied to a digital reactivity meter for neutron signal measurement in nuclear reactors. The purpose of this work is not only to reduce the measurement noise but also to preserve the edges of the reactivity signal. The performance of the filtering algorithm is demonstrated by comparison with well-known smoothing methods, namely the conventional low-pass and bilateral filters. The developed method gives satisfactory filtering performance and edge preservation capability.
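
A rough sketch of an edge-preserving kernel regression filter in this spirit: the local bandwidth is shrunk where the signal changes rapidly so that sharp steps are not smeared. The specific bandwidth rule below is an illustrative assumption, not the adaptive scheme developed in the paper.

```python
import numpy as np

def adaptive_kernel_filter(t, y, h0=5.0, alpha=1.0):
    """Local-constant kernel smoother whose bandwidth shrinks near steep changes.

    The bandwidth rule (shrink h where the local gradient is large) is only an
    illustrative stand-in for the adaptive scheme described in the paper.
    """
    grad = np.abs(np.gradient(y, t))
    h = h0 / (1.0 + alpha * grad / (np.median(grad) + 1e-12))
    y_hat = np.empty_like(y)
    for i in range(len(t)):
        w = np.exp(-0.5 * ((t - t[i]) / h[i]) ** 2)   # Gaussian weights, local bandwidth
        y_hat[i] = np.sum(w * y) / np.sum(w)
    return y_hat

# toy step signal with noise: the edge should stay sharp after filtering
t = np.linspace(0, 100, 500)
signal = np.where(t < 50, 0.0, 1.0)
noisy = signal + 0.05 * np.random.default_rng(3).standard_normal(t.size)
smoothed = adaptive_kernel_filter(t, noisy)
```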

Smoothing Kaplan-Meier estimate using monotone support vector regression (단조 서포트벡터기계를 이용한 카플란-마이어 생존함수의 평활)

  • Hwang, Changha; Shim, Jooyong
    • Journal of the Korean Data and Information Science Society / v.23 no.6 / pp.1045-1054 / 2012
  • The support vector machine is known to be a very useful statistical method for classification and nonlinear function estimation. In this paper we propose a monotone support vector regression (SVR) for the estimation of a monotonically decreasing function. The proposed monotone SVR is applied to smooth the Kaplan-Meier estimate of the survival function. Experimental results are then presented which indicate the performance of the proposed monotone SVR using survival functions obtained from exponential distributions.
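
A minimal sketch of the overall idea under stated assumptions: compute the Kaplan-Meier estimate from censored exponential data, then smooth it while keeping it non-increasing. The monotone step below (kernel smoothing followed by a running minimum) is a crude stand-in for the monotone SVR proposed in the paper.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier estimate of the survival function from (time, event) pairs."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    at_risk = n - np.arange(n)
    factors = np.where(event == 1, 1.0 - 1.0 / at_risk, 1.0)
    return time, np.cumprod(factors)

def monotone_kernel_smooth(t, s, grid, h):
    """Kernel-smooth the KM curve, then enforce a non-increasing shape.

    A crude stand-in for the monotone SVR in the paper: smooth first,
    then clip with a running minimum so the result cannot increase.
    """
    w = np.exp(-0.5 * ((grid[:, None] - t[None, :]) / h) ** 2)
    smoothed = (w @ s) / w.sum(axis=1)
    return np.minimum.accumulate(smoothed)

# censored exponential survival data
rng = np.random.default_rng(4)
t_true = rng.exponential(scale=2.0, size=100)
censor = rng.exponential(scale=3.0, size=100)
time = np.minimum(t_true, censor)
event = (t_true <= censor).astype(int)
km_t, km_s = kaplan_meier(time, event)
grid = np.linspace(0, km_t.max(), 200)
s_hat = monotone_kernel_smooth(km_t, km_s, grid, h=0.3)
```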

Robust Nonparametric Regression Method using Rank Transformation

  • Park, Dongryeon
    • Communications for Statistical Applications and Methods / v.7 no.2 / pp.575-583 / 2000
  • Consider the problem of estimating a regression function from a set of data which is contaminated by a long-tailed error distribution. The linear smoother is a kind of locally weighted average of the response, so it is not robust against outliers. The kernel M-smoother and the lowess attain robustness against outliers by down-weighting them. However, the kernel M-smoother and the lowess require iteration to compute the robustness weights, and as Wang and Scott (1994) pointed out, the requirement of iteration is not a desirable property. In this article, we propose a robust nonparametric regression method which does not require iteration. Robustness can be achieved not only by down-weighting outliers but also by transforming them. The rank transformation is a simple procedure in which the data are replaced by their corresponding ranks. Iman and Conover (1979) showed that the rank transformation is a robust and powerful procedure in linear regression. In this paper, we show that the rank transformation can also be used in nonparametric regression to achieve robustness.
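
A rough sketch of rank-based robust smoothing: replace the responses by their ranks, smooth on the rank scale, and map the result back to the response scale. The Nadaraya-Watson smoother and the interpolation-based back-transformation are illustrative assumptions, not necessarily the construction used in the paper.

```python
import numpy as np
from scipy.stats import rankdata

def nw(x, y, grid, h):
    """Nadaraya-Watson smoother with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def rank_smooth(x, y, grid, h):
    """Smooth the ranks of the response instead of the raw values.

    Mapping the smoothed ranks back to the response scale by linear
    interpolation against the empirical quantiles is an illustrative choice.
    """
    r = rankdata(y)                     # replace responses by their ranks
    r_hat = nw(x, r, grid, h)           # smooth on the rank scale
    order = np.argsort(y)
    return np.interp(r_hat, rankdata(y)[order], y[order])

# contaminated data: smooth curve plus a few large outliers
rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 1, 150))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(150)
y[rng.choice(150, 5, replace=False)] += 10.0          # long-tailed contamination
grid = np.linspace(0, 1, 100)
fit_raw = nw(x, y, grid, h=0.05)                      # pulled toward the outliers
fit_rank = rank_smooth(x, y, grid, h=0.05)            # much less affected
```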

  • PDF

Barrier Option Pricing with Model Averaging Methods under Local Volatility Models

  • Kim, Nam-Hyoung; Jung, Kyu-Hwan; Lee, Jae-Wook; Han, Gyu-Sik
    • Industrial Engineering and Management Systems / v.10 no.1 / pp.84-94 / 2011
  • In this paper, we propose a method to provide the distribution of the option price under a local volatility model when market-provided implied volatility data are given. The local volatility model is one of the most widely used smile-consistent models; in it, the volatility is a deterministic function of the random stock price. Before estimating the local volatility surface (LVS), we need to estimate the implied volatility surface (IVS) from market data. To do this we use a local polynomial smoothing method, and then apply the Dupire formula to obtain the resulting LVS. However, the result depends on the bandwidth of the kernel function employed in the local polynomial smoothing method. To solve this problem, the proposed method makes use of a model averaging approach by means of bandwidth priors, and then produces a robust local volatility surface estimate with a confidence interval. After constructing the LVS, we price barrier options with the LVS estimate through Monte Carlo simulation. To show the merits of our proposed method, we have conducted experiments on simulated and market data relevant to KOSPI200 call equity linked warrants (ELWs). These experiments show that the results of the proposed method are quite reasonable and acceptable when compared to previous works.
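
A minimal sketch of the final pricing step only: Monte Carlo valuation of a down-and-out call under a given local volatility function. The IVS estimation by local polynomial smoothing, the Dupire formula, and the bandwidth-averaging scheme are not reproduced here, and the toy volatility function and contract parameters are assumptions.

```python
import numpy as np

def price_down_and_out_call(s0, strike, barrier, r, T, local_vol,
                            n_paths=20000, n_steps=250, seed=6):
    """Monte Carlo price of a down-and-out call under dS = r S dt + sigma(S, t) S dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(n_steps):
        sigma = local_vol(s, k * dt)
        z = rng.standard_normal(n_paths)
        s = s * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        alive &= s > barrier            # knocked out once the barrier is breached
    payoff = np.where(alive, np.maximum(s - strike, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()

# toy local volatility: higher volatility at low prices (a crude smile-like shape)
toy_local_vol = lambda s, t: 0.2 + 0.1 * np.exp(-(s / 100.0 - 1.0))
price = price_down_and_out_call(s0=100.0, strike=100.0, barrier=80.0,
                                r=0.03, T=1.0, local_vol=toy_local_vol)
```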

On Adaptation to Sparse Design in Bivariate Local Linear Regression

  • Hall, Peter; Seifert, Burkhardt; Turlach, Berwin A.
    • Journal of the Korean Statistical Society / v.30 no.2 / pp.231-246 / 2001
  • Local linear smoothing enjoys several excellent theoretical and numerical properties, and in a range of applications it is the method most frequently chosen for fitting curves to noisy data. Nevertheless, it suffers numerical problems in places where the distribution of design points (often called predictors, or explanatory variables) is sparse. In the case of univariate design, several remedies have been proposed for overcoming this problem, one of which involves adding additional "pseudo" design points in places where the original design points were too widely separated. This approach is particularly well suited to treating sparse bivariate design problems, and in fact attractive, elegant geometric analogues of the univariate imputation and interpolation rules are appropriate for that case. In the present paper we introduce and develop pseudo-data rules for bivariate design and apply them to real data.
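
A simplified univariate sketch of the pseudo design point idea: insert interpolated points where the design is sparse, then apply a local linear smoother to the augmented data. The gap threshold and midpoint rule are illustrative assumptions; the paper's bivariate geometric rules are not reproduced.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear fit at x0 with a Gaussian kernel (returns the intercept)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]

def add_pseudo_points(x, y, gap=0.2):
    """Insert interpolated pseudo design points where consecutive x's are far apart.

    A simplified univariate version of the idea described in the paper; the
    gap threshold and midpoint interpolation rule are illustrative choices.
    """
    order = np.argsort(x)
    x, y = x[order], y[order]
    new_x, new_y = [x[0]], [y[0]]
    for i in range(1, len(x)):
        if x[i] - x[i - 1] > gap:
            new_x.append(0.5 * (x[i] + x[i - 1]))   # pseudo point at the midpoint
            new_y.append(0.5 * (y[i] + y[i - 1]))   # linearly interpolated response
        new_x.append(x[i])
        new_y.append(y[i])
    return np.array(new_x), np.array(new_y)

# design with a sparse middle region
rng = np.random.default_rng(7)
x = np.concatenate([rng.uniform(0, 0.4, 40), rng.uniform(0.8, 1.0, 20)])
y = np.cos(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
x_aug, y_aug = add_pseudo_points(x, y)
fit = np.array([local_linear(x0, x_aug, y_aug, h=0.05)
                for x0 in np.linspace(0, 1, 100)])
```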
