• Title/Summary/Keyword: local linear smoothing

Search Results: 17

A SMOOTHING NEWTON METHOD FOR NCP BASED ON A NEW CLASS OF SMOOTHING FUNCTIONS

  • Zhu, Jianguang;Hao, Binbin
    • Journal of applied mathematics & informatics / v.32 no.1_2 / pp.211-225 / 2014
  • A new class of smoothing functions is introduced in this paper, which includes some important smoothing complementarity functions as special cases. Based on this new smoothing function, we propose a smoothing Newton method. Our algorithm needs to solve only one linear system of equations per iteration. Without requiring nonemptiness and boundedness of the solution set, the proposed algorithm is proved to be globally convergent. Numerical results indicate that the smoothing Newton method based on the newly proposed class of smoothing functions with $\theta \in (0,1)$ has better numerical performance than those based on some other important smoothing functions, which demonstrates that our algorithm is promising.
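
The paper's new class of smoothing functions is not given in the listing; as an illustration of the kind of smoothing complementarity function such methods build on, here is the classical Chen-Harker-Kanzow-Smale (CHKS) function in Python (the function name `chks` is our own; this is a generic sketch, not the paper's class):

```python
import math

def chks(a: float, b: float, mu: float) -> float:
    """Chen-Harker-Kanzow-Smale smoothing function: a smooth
    approximation of 2*min(a, b), exact when mu == 0."""
    return a + b - math.sqrt((a - b) ** 2 + 4.0 * mu ** 2)

# As mu -> 0 the smoothing function recovers the complementarity
# condition min(a, b) = 0 used to reformulate the NCP as a smooth
# system of equations solvable by Newton's method.
print(chks(3.0, 0.0, 0.0))  # 2*min(3, 0) = 0.0
print(chks(3.0, 1.0, 0.0))  # 2*min(3, 1) = 2.0
```

Replacing min(a, b) by such a smooth surrogate is what lets each Newton step reduce to one linear system, as the abstract notes.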

Fuzzy Local Linear Regression Analysis

  • Hong, Dug-Hun;Kim, Jong-Tae
    • Journal of the Korean Data and Information Science Society / v.18 no.2 / pp.515-524 / 2007
  • This paper deals with local linear estimation of fuzzy regression models based on Diamond (1998) as a new class of non-linear fuzzy regression. The purpose of this paper is to introduce the use of smoothing in testing for lack of fit of parametric fuzzy regression models.


A Local Linear Kernel Estimator for Sparse Multinomial Data

  • Baek, Jangsun
    • Journal of the Korean Statistical Society / v.27 no.4 / pp.515-529 / 1998
  • Burman (1987) and Hall and Titterington (1987) studied kernel smoothing for sparse multinomial data in detail. Both of their estimators for cell probabilities are sparse-asymptotically consistent under some restrictive conditions on the true cell probabilities. Dong and Simonoff (1994) adopted boundary kernels to relax these restrictive conditions. We propose a local linear kernel estimator, popular in nonparametric regression, to estimate cell probabilities. No boundary adjustment is necessary for this estimator since it adapts automatically to estimation at the boundaries. It is shown that our estimator attains the optimal rate of convergence in mean sum of squared error under sparseness. Some simulation results and a real data application are presented to illustrate the performance of the estimator.
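
A local linear estimator of cell probabilities can be sketched by treating the cell index as a design point on [0, 1] and fitting a weighted least-squares line at each cell; the intercept of the local fit is the estimate, and it adapts automatically at the boundaries. The Python sketch below is a generic illustration (the Gaussian kernel and the final renormalization are our assumptions, not the paper's exact estimator):

```python
import numpy as np

def local_linear_probs(counts, h):
    """Local linear kernel estimate of multinomial cell probabilities.
    counts: cell counts; h: bandwidth on the [0, 1] cell scale."""
    counts = np.asarray(counts, dtype=float)
    k, n = len(counts), counts.sum()
    x = (np.arange(k) + 0.5) / k      # cell midpoints on [0, 1]
    raw = counts / n                  # raw cell proportions
    est = np.empty(k)
    for j in range(k):
        u = (x - x[j]) / h
        w = np.exp(-0.5 * u ** 2)     # Gaussian kernel weights
        X = np.column_stack([np.ones(k), x - x[j]])
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * raw))
        est[j] = beta[0]              # intercept = fitted value at cell j
    est = np.clip(est, 0.0, None)     # clip any small negative fits
    return est / est.sum()            # renormalize to a probability vector

p_hat = local_linear_probs([1, 0, 2, 0, 3, 5, 4, 0, 1, 0], h=0.2)
```

Because the local line is refit at every cell, cells near the edges borrow strength only from one side without any explicit boundary kernel, which is the adaptation property the abstract refers to.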


ON MARGINAL INTEGRATION METHOD IN NONPARAMETRIC REGRESSION

  • Lee, Young-Kyung
    • Journal of the Korean Statistical Society / v.33 no.4 / pp.435-447 / 2004
  • In additive nonparametric regression, Linton and Nielsen (1995) showed that the marginal integration when applied to the local linear smoother produces a rate-optimal estimator of each univariate component function for the case where the dimension of the predictor is two. In this paper we give new formulas for the bias and variance of the marginal integration regression estimators which are valid for boundary areas as well as fixed interior points, and show the local linear marginal integration estimator is in fact rate-optimal when the dimension of the predictor is less than or equal to four. We extend the results to the case of the local polynomial smoother, too.
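
The marginal integration construction can be sketched in a few lines: fit a bivariate local linear smoother, then average it over the empirical distribution of the second predictor to recover the first component function (up to a constant). In the Python sketch below, the function names `ll2` and `marginal_integration`, the Gaussian product kernel, and the simulated data are our own illustrative choices:

```python
import numpy as np

def ll2(x1, x2, y, t1, t2, h):
    """Bivariate local linear fit at (t1, t2) with a Gaussian product kernel."""
    w = np.exp(-0.5 * (((x1 - t1) / h) ** 2 + ((x2 - t2) / h) ** 2))
    X = np.column_stack([np.ones_like(x1), x1 - t1, x2 - t2])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return float(beta[0])             # intercept = fitted surface value

def marginal_integration(x1, x2, y, t1, h):
    """Estimate the first additive component at t1 by averaging the
    bivariate fit over the observed values of the second predictor."""
    return float(np.mean([ll2(x1, x2, y, t1, s, h) for s in x2]))

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(size=200), rng.uniform(size=200)
y = np.sin(2 * x1) + x2 ** 2 + rng.normal(scale=0.1, size=200)
g1_hat = marginal_integration(x1, x2, y, t1=0.5, h=0.2)
```

Averaging out the second coordinate is what turns the two-dimensional fit back into a univariate estimate, and the paper's contribution is the bias/variance analysis of exactly this average, at boundaries as well as interior points.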

Selection of bandwidth for local linear composite quantile regression smoothing

  • Jhun, Myoungshic;Kang, Jongkyeong;Bang, Sungwan
    • The Korean Journal of Applied Statistics / v.30 no.5 / pp.733-745 / 2017
  • Local composite quantile regression is a useful nonparametric regression method widely used for its high efficiency. Kernel-based data smoothing methods are typically used in the estimation process, and their performance relies largely on the smoothing parameter rather than the kernel. The $L_2$-norm is generally used as the criterion to evaluate the performance of the regression function, and many studies have been conducted on the selection of smoothing parameters that minimize the mean square error (MSE) or mean integrated square error (MISE). In this paper, we explore the optimality of selecting smoothing parameters that determine the performance of nonparametric regression models using local linear composite quantile regression. As evaluation criteria for the choice of smoothing parameter, we use the mean absolute error (MAE) and mean integrated absolute error (MIAE), which have not been researched extensively due to mathematical difficulties. We prove the uniqueness of the optimal smoothing parameter based on MAE and MIAE, and compare it with the optimal parameters under the existing criteria (MSE and MISE). The properties of the proposed method are investigated through simulation studies in various situations.
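
As a simple illustration of bandwidth selection under an absolute-error criterion, the sketch below grid-searches a leave-one-out mean absolute prediction error for an ordinary local linear smoother. The grid, kernel, and function names are our own choices for illustration, not the paper's procedure (which works with composite quantile regression and MIAE):

```python
import numpy as np

def ll_fit(x, y, t, h):
    """Local linear estimate at point t with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - t) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - t])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return float(beta[0])

def loo_mae(x, y, h):
    """Leave-one-out mean absolute prediction error for bandwidth h."""
    idx = np.arange(len(x))
    errs = [abs(y[i] - ll_fit(x[idx != i], y[idx != i], x[i], h))
            for i in range(len(x))]
    return float(np.mean(errs))

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(size=80))
y = np.sin(4 * x) + rng.normal(scale=0.2, size=80)
grid = [0.02, 0.05, 0.1, 0.2, 0.4]
h_opt = min(grid, key=lambda h: loo_mae(x, y, h))  # bandwidth minimizing MAE
```

The point of an MAE-type criterion, as opposed to MSE, is that the absolute loss is less sensitive to outlying residuals; the paper's contribution is proving that the minimizer of such a criterion is unique.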

Nonlinear transformation methods for GMM to improve the over-smoothing effect

  • Chae, Yi Geun
    • Journal of Advanced Marine Engineering and Technology / v.38 no.2 / pp.182-187 / 2014
  • We propose nonlinear GMM-based transformation functions in an attempt to deal with the over-smoothing effects of linear transformation for voice processing. The proposed methods adopt RBF networks as a local transformation function to overcome the drawbacks of global nonlinear transformation functions. In order to obtain high-quality modifications of speech signals, our voice conversion is implemented using the Harmonic plus Noise Model analysis/synthesis framework. Experimental results are reported on the English corpus, MOCHA-TIMIT.
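
As a rough illustration of what a local RBF transformation function looks like, the sketch below maps source feature vectors through a weighted sum of Gaussian basis functions. The centers, weights, and `gamma` are made-up values for demonstration, not the paper's trained GMM-based converter:

```python
import numpy as np

def rbf_transform(src, centers, weights, gamma):
    """Map source feature vectors through an RBF network: each output is
    a weighted sum of Gaussian basis functions centered at `centers`."""
    # src: (n, d); centers: (m, d); weights: (m, d_out)
    d2 = ((src[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-gamma * d2)       # (n, m) basis activations
    return phi @ weights            # (n, d_out) converted features

src = np.array([[0.0, 0.0], [1.0, 1.0]])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
weights = np.array([[1.0], [2.0]])
out = rbf_transform(src, centers, weights, gamma=1.0)
```

Because each basis function is localized around its center, the mapping can differ from region to region of the source feature space, which is the sense in which an RBF network acts as a local (rather than global) transformation function.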

On Adaptation to Sparse Design in Bivariate Local Linear Regression

  • Hall, Peter;Seifert, Burkhardt;Turlach, Berwin A.
    • Journal of the Korean Statistical Society / v.30 no.2 / pp.231-246 / 2001
  • Local linear smoothing enjoys several excellent theoretical and numerical properties, and in a range of applications it is the method most frequently chosen for fitting curves to noisy data. Nevertheless, it suffers numerical problems in places where the distribution of design points (often called predictors, or explanatory variables) is sparse. In the case of univariate design, several remedies have been proposed for overcoming this problem, of which one involves adding additional "pseudo" design points in places where the original design points were too widely separated. This approach is particularly well suited to treating sparse bivariate design problems, and in fact attractive, elegant geometric analogues of univariate imputation and interpolation rules are appropriate for that case. In the present paper we introduce and develop pseudo data rules for bivariate design, and apply them to real data.
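
The univariate pseudo-data idea the abstract mentions can be illustrated in a few lines: scan the sorted design for gaps wider than a threshold and insert a midpoint with a linearly interpolated response. This is one simple imputation rule, sketched here as a hypothetical example rather than the authors' exact rule:

```python
import numpy as np

def add_pseudo_points(x, y, gap):
    """Insert interpolated pseudo design points into gaps wider than
    `gap` (a simplified univariate version of the imputation idea)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    order = np.argsort(x)
    x, y = x[order], y[order]
    xs, ys = [x[0]], [y[0]]
    for i in range(1, len(x)):
        if x[i] - x[i - 1] > gap:
            xs.append(0.5 * (x[i] + x[i - 1]))  # midpoint pseudo point
            ys.append(0.5 * (y[i] + y[i - 1]))  # interpolated response
        xs.append(x[i])
        ys.append(y[i])
    return np.array(xs), np.array(ys)

# A wide gap between 0.1 and 0.9 receives one pseudo point at 0.5.
px, py = add_pseudo_points([0.0, 0.1, 0.9, 1.0], [1.0, 1.2, 2.0, 2.1], gap=0.3)
```

Filling the gap keeps the local linear design matrix well conditioned inside the sparse region; the paper's bivariate version replaces this midpoint rule with geometric analogues of interpolation on the plane.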


Efficiency of Aggregate Data in Non-linear Regression

  • Huh, Jib
    • Communications for Statistical Applications and Methods / v.8 no.2 / pp.327-336 / 2001
  • This work concerns estimating a regression function that is not linear using aggregate data. In much empirical research, data are aggregated for various reasons before statistical analysis. In a traditional parametric approach, a linear estimation of the non-linear function with aggregate data can result in unstable estimators of the parameters. A more serious consequence is the bias in the estimation of the non-linear function. The approach we employ is kernel regression smoothing. We describe the conditions under which aggregate data can be used to estimate the regression function efficiently. Numerical examples illustrate our findings.


ANALYSIS OF SMOOTHING NEWTON-TYPE METHOD FOR NONLINEAR COMPLEMENTARITY PROBLEMS

  • Zheng, Xiuyun
    • Journal of applied mathematics & informatics / v.29 no.5_6 / pp.1511-1523 / 2011
  • In this paper, we consider the smoothing Newton method for nonlinear complementarity problems with a $P_0$-function. The proposed algorithm is based on a new smoothing function and needs only to solve one linear system of equations and perform one line search per iteration. Under the condition that the solution set is nonempty and bounded, the proposed algorithm is proved to be globally convergent. Furthermore, local superlinear (quadratic) convergence is established under suitable conditions. Preliminary numerical results show that the proposed algorithm is very promising.

Improvement of Boundary Bias in Nonparametric Regression via Twicing Technique

  • Jo, Jae-Keun
    • Communications for Statistical Applications and Methods / v.4 no.2 / pp.445-452 / 1997
  • In this paper, the twicing technique for the improvement of asymptotic boundary bias in nonparametric regression is considered. Asymptotic mean squared errors of the nonparametric regression estimators are derived at the boundary region by twicing the Nadaraya-Watson and local linear smoothers. Asymptotic biases of the resulting estimators are of order $h^2$ and $h^4$, respectively.
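
Twicing itself is simple to state: smooth the data once, smooth the residuals from that fit, and add the two fits together, which cancels the leading bias term of the smoother. A minimal Python sketch with a Nadaraya-Watson smoother follows (the abstract also applies twicing to local linear smoothing; simulated data and the bandwidth are our own choices):

```python
import numpy as np

def nw(x, y, t, h):
    """Nadaraya-Watson estimate at t with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - t) / h) ** 2)
    return float(np.sum(w * y) / np.sum(w))

def twicing(x, y, t, h):
    """Twicing: smooth once, then smooth the residuals of that fit and
    add the correction back, reducing the leading bias term."""
    fit = np.array([nw(x, y, xi, h) for xi in x])  # first-pass fit at data
    resid = y - fit                                # residuals capture the bias
    return nw(x, y, t, h) + nw(x, resid, t, h)     # base fit + correction

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(size=100))
y = x ** 2 + rng.normal(scale=0.05, size=100)
est = twicing(x, y, t=0.5, h=0.1)  # true value is 0.25
```

Because the residual smooth estimates (minus) the bias of the first pass, adding it back removes the $h^2$ term and leaves a bias of order $h^4$, matching the rates quoted in the abstract.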
