• Title/Summary/Keyword: Kernel method


Modelling Online Word-of-Mouth Effect on Korean Box-Office Sales Based on Kernel Regression Model

  • Park, Si-Yun;Kim, Jin-Gyo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.18 no.4
    • /
    • pp.995-1004
    • /
    • 2007
  • In this paper, we analyse online word-of-mouth and Korean box-office sales data using a kernel regression method. To do this, we consider a regression model with mixed data and apply the least squares cross-validation method proposed by Li and Racine (2004) to the model. We found that box-office sales can be explained by the volume of online word-of-mouth and the characteristics of the movies.

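
The Li-Racine approach combines kernels for mixed (continuous and discrete) regressors with least squares cross-validation. As a minimal sketch of the continuous-only case (the mixed-data product kernels are omitted, and all names below are illustrative), a Nadaraya-Watson estimator with a leave-one-out least squares cross-validated bandwidth might look like:

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at x0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def lscv_bandwidth(x, y, grid):
    """Pick h from a candidate grid by minimizing the leave-one-out
    least squares cross-validation error."""
    best_h, best_err = None, np.inf
    for h in grid:
        err = 0.0
        for i in range(len(x)):
            mask = np.arange(len(x)) != i          # leave observation i out
            err += (y[i] - nw_estimate(x[i], x[mask], y[mask], h)) ** 2
        if err < best_err:
            best_h, best_err = h, err
    return best_h
```

In practice the grid search would be replaced by a numerical optimizer over h, and discrete regressors would get their own kernels and smoothing parameters.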

Variable Bandwidth Selection for Kernel Regression

  • Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society
    • /
    • v.5 no.1
    • /
    • pp.11-20
    • /
    • 1994
  • In recent years, nonparametric kernel estimates of the regression function have become abundant and widely applicable in many areas of statistics. Most modern research is concerned with fixed global bandwidth selection, in which the same bandwidth value is used to estimate the regression function at every x. In this paper, we propose a method for selecting a locally varying bandwidth, based on the bootstrap, for kernel estimation in fixed-design regression. The finite-sample performance of the proposed bandwidth selection method is assessed via a Monte Carlo simulation study.

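
The abstract does not specify the exact resampling scheme, so the sketch below assumes a simple residual bootstrap around a pilot fit: at a single design point, each candidate bandwidth is scored by the mean squared error of its bootstrap estimates, and the local winner is kept.

```python
import numpy as np

def local_kernel_fit(x0, x, y, h):
    """Gaussian-kernel local constant (Nadaraya-Watson) fit at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def bootstrap_local_bandwidth(x, y, x0, grid, B=100, pilot_h=0.2, seed=0):
    """Choose a bandwidth at the single point x0: resample residuals from
    a pilot fit, refit under each candidate h, and keep the h whose
    bootstrap estimates have the smallest mean squared error at x0."""
    rng = np.random.default_rng(seed)
    pilot = np.array([local_kernel_fit(xi, x, y, pilot_h) for xi in x])
    resid = y - pilot
    resid -= resid.mean()                          # center the residuals
    target = local_kernel_fit(x0, x, y, pilot_h)
    best_h, best_mse = grid[0], np.inf
    for h in grid:
        boot = [local_kernel_fit(x0, x, pilot + rng.choice(resid, len(x)), h)
                for _ in range(B)]
        mse = np.mean((np.array(boot) - target) ** 2)
        if mse < best_mse:
            best_h, best_mse = h, mse
    return best_h
```

Repeating this at each design point yields the locally varying bandwidth; the pilot bandwidth and the use of a pilot-fit target are assumptions of this sketch.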

A NEW PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR OPTIMIZATION

  • Cho, Gyeong-Mi
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.13 no.1
    • /
    • pp.41-53
    • /
    • 2009
  • A primal-dual interior point method (IPM) is not only the most efficient method from a computational point of view but also has polynomial complexity. Most polynomial-time interior point methods (IPMs) are based on logarithmic barrier functions. Peng et al. ([14, 15]) and Roos et al. ([3]-[9]) proposed new variants of IPMs based on kernel functions, called self-regular and eligible functions, respectively. In this paper we define a new kernel function and propose a new IPM based on it, which has $O(n^{\frac{2}{3}}\log\frac{n}{\epsilon})$ and $O(\sqrt{n}\log\frac{n}{\epsilon})$ iteration bounds for large-update and small-update methods, respectively.

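
For orientation (the paper's specific new kernel function is not given in the abstract): in this framework a kernel function $\psi$ with $\psi(1)=\psi'(1)=0$ and $\psi''>0$ induces a barrier on the scaled variable $v$, and the classical logarithmic barrier is recovered as a special case:

```latex
\Phi(v)=\sum_{i=1}^{n}\psi(v_i),\qquad
v_i=\sqrt{\frac{x_i s_i}{\mu}},\qquad
\psi_{\log}(t)=\frac{t^{2}-1}{2}-\ln t .
```

The iteration bounds quoted above come from growth properties of the chosen $\psi$; the eligibility conditions are spelled out in the cited references.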

A Development of Nonparametric Kernel Function Suitable for Extreme Value (극치값 추정에 적합한 비매개변수적 핵함수 개발)

  • Cha Young-Il;Kim Soon-Bum;Moon Young-Il
    • Journal of Korea Water Resources Association
    • /
    • v.39 no.6 s.167
    • /
    • pp.495-502
    • /
    • 2006
  • In nonparametric frequency analysis, bandwidth selection has been emphasized more than kernel function selection, since interpolation is more reliable than extrapolation. However, when extrapolation is applied (i.e., for recurrence intervals longer than the length of the data, or for extreme probabilities such as 200~500 years), the selection of the kernel function is as important as the selection of the bandwidth. So far, existing kernel functions have had difficulties with extreme value estimation because the values they extrapolate are either too small or too large. This paper suggests a Modified Cauchy kernel function, suitable for both interpolation and extrapolation, as an improvement.
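
The Modified Cauchy kernel is the paper's contribution and its formula is not given in the abstract. As a reference point only, the plain Cauchy kernel below has the heavy polynomial tails that make Cauchy-type kernels attractive for extrapolating extremes, in contrast to the fast-decaying Gaussian:

```python
import numpy as np

def cauchy_kernel(u):
    """Standard Cauchy kernel: heavy polynomial tails, decaying like u**-2."""
    return 1.0 / (np.pi * (1.0 + u ** 2))

def kde(x0, data, h):
    """Kernel density estimate at x0 with bandwidth h."""
    return np.mean(cauchy_kernel((x0 - data) / h)) / h
```

Far beyond the data range, this density stays strictly positive instead of collapsing to zero, which is the behavior extreme-value extrapolation needs.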

Selecting the Optimal Hidden Layer of Extreme Learning Machine Using Multiple Kernel Learning

  • Zhao, Wentao;Li, Pan;Liu, Qiang;Liu, Dan;Liu, Xinwang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.12
    • /
    • pp.5765-5781
    • /
    • 2018
  • The extreme learning machine (ELM) is emerging as a powerful machine learning method in a variety of application scenarios due to its promising advantages of high accuracy, fast learning speed, and ease of implementation. However, how to select the optimal hidden layer of an ELM is still an open question in the ELM community. Basically, the number of hidden-layer nodes is a sensitive hyperparameter that significantly affects the performance of the ELM. To address this challenging problem, we propose to adopt multiple kernel learning (MKL) to design a multi-hidden-layer-kernel ELM (MHLK-ELM). Specifically, we first integrate kernel functions with the random feature mapping of the ELM to design a hidden-layer-kernel ELM (HLK-ELM), which serves as the base of the MHLK-ELM. Then, we use the MKL method to propose two versions of the MHLK-ELM, called sparse and non-sparse MHLK-ELMs. Both types of MHLK-ELM can effectively find the optimal linear combination of multiple HLK-ELMs for different classification and regression problems. Experimental results on seven data sets, of which three concern classification and four concern regression, demonstrate that the proposed MHLK-ELM achieves superior performance compared with the conventional ELM and the basic HLK-ELM.
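
The HLK-ELM construction builds on the basic ELM, whose defining trait is a random, untrained hidden layer with output weights fit by least squares. A minimal sketch of that base model (the kernelized hidden layers and MKL combination of the paper are not reproduced here):

```python
import numpy as np

def elm_train(X, y, n_hidden, seed=0):
    """Basic ELM: random input-to-hidden weights, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # random feature mapping
    beta = np.linalg.pinv(H) @ y                      # least-squares fit
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is fit, and in closed form, training is a single pseudo-inverse; the sensitivity to `n_hidden` noted in the abstract is what the MKL combination of HLK-ELMs is meant to relieve.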

Kernel Analysis of Weighted Linear Interpolation Based on Even-Odd Decomposition (짝수 홀수 분해 기반의 가중 선형 보간법을 위한 커널 분석)

  • Oh, Eun-ju;Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.11
    • /
    • pp.1455-1461
    • /
    • 2018
  • This paper presents a kernel analysis of weighted linear interpolation (WLI) based on even-odd decomposition (EOD). The EOD method has the advantages of low complexity and better image quality than the CCI method. However, since the kernel of EOD has not been studied before and its analysis has not yet been addressed, this paper proposes the kernel function and its analysis. The kernel function is divided into odd and even terms, and the kernel is then obtained by summing the two terms. The proposed kernel is adjustable by a parameter, which influences the efficiency of the EOD-based WLI process. Kernel shapes obtained by adjusting the parameter are also presented, together with a discussion of the parameter to aid understanding. A preliminary experiment on the kernel shape illustrates the adjustable parameter and the corresponding kernel.
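
The EOD kernel derived in the paper is not reproduced in the abstract. For reference, standard (unweighted) linear interpolation corresponds to the triangle kernel, which a parameterized kernel of this kind generalizes:

```python
import numpy as np

def triangle_kernel(x):
    """Kernel of standard linear interpolation: hat function on [-1, 1]."""
    return np.maximum(0.0, 1.0 - np.abs(x))

def interpolate(samples, t):
    """Value at continuous position t from unit-spaced samples,
    as a kernel-weighted sum over sample positions."""
    n = np.arange(len(samples))
    return float(np.sum(np.asarray(samples) * triangle_kernel(t - n)))
```

Any interpolation scheme expressible in this sum-over-samples form can be analyzed through its kernel shape, which is the style of analysis the paper applies to EOD-based WLI.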

The Paley-Wiener theorem by the heat kernel method

  • Lee, Sun-Mi;Chung, Soon-Yeong
    • Bulletin of the Korean Mathematical Society
    • /
    • v.35 no.3
    • /
    • pp.441-453
    • /
    • 1998
  • We use the heat kernel method to give a new proof of the Paley-Wiener theorem for distributions with compact support.

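
For orientation, the heat kernel method represents a distribution through smoothing with the Gaussian heat kernel: a compactly supported distribution $u$ is recovered from $U(\cdot,t)=u*E(\cdot,t)$ as $t\to 0^{+}$, where

```latex
E(x,t)=(4\pi t)^{-n/2}\,e^{-|x|^{2}/(4t)},\qquad
U(x,t)=(u*E(\cdot,t))(x),\qquad
(\partial_t-\Delta)\,U=0 .
```

The Paley-Wiener-type statement then characterizes compact support via growth estimates on $U(x,t)$ as $t\to 0^{+}$; the precise estimates are given in the paper itself.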

Kernel Regression with Correlation Coefficient Weighted Distance (상관계수 가중법을 이용한 커널회귀 방법)

  • Shin, Ho-Cheol;Park, Moon-Ghu;Lee, Jae-Yong;You, Skin
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.588-590
    • /
    • 2006
  • Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. An on-line monitoring (OLM) method evaluates instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model that estimates the true process parameter value from individual measurements. This model gives a process parameter estimate, calculated as a function of other plant measurements, which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto-associative kernel regression obtained by introducing correlation coefficient weighting on the kernel distances. The prediction performance of the developed method is compared with conventional auto-associative kernel regression.

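
A hedged sketch of the idea: in auto-associative kernel regression (AAKR), the estimate for a query vector is a kernel-weighted average of training vectors, and the distance calculation is weighted by correlation coefficients. The particular per-channel weighting below (mean absolute correlation) is an assumption for illustration, not the paper's exact formula:

```python
import numpy as np

def aakr_correlation_weighted(X_train, x_query, h):
    """Estimate the 'true' signal vector for x_query as a Gaussian-kernel
    weighted average of training vectors, with each channel's contribution
    to the distance weighted by its correlation with the other channels."""
    R = np.corrcoef(X_train, rowvar=False)        # channel correlation matrix
    w_ch = np.mean(np.abs(R), axis=0)             # per-channel weight (assumed form)
    d2 = np.sum(w_ch * (X_train - x_query) ** 2, axis=1)   # weighted distances
    k = np.exp(-d2 / (2.0 * h ** 2))              # Gaussian kernel weights
    return (k[:, None] * X_train).sum(axis=0) / k.sum()
```

The auto-associative part is that inputs and outputs are the same channels: comparing the estimate with the raw measurement flags a drifting sensor.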

A Study on Kernel Size Adaptation for Correntropy-based Learning Algorithms (코렌트로피 기반 학습 알고리듬의 커널 사이즈에 관한 연구)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.2
    • /
    • pp.714-720
    • /
    • 2021
  • Information theoretic learning (ITL) based on kernel density estimation, which has been successfully applied in machine learning and signal processing, has the drawback of severe sensitivity to the choice of kernel size. For the maximization of the correntropy criterion (MCC), one of the ITL-type criteria, several methods have been studied for adapting the kernel size that remains after removing a certain term. In this paper, it is shown that the main cause of the sensitivity to kernel size derives from that term, and that adaptively adjusting the kernel size in the remaining terms drives it toward the absolute value of the error, which prevents the weight adjustment from continuing. It is therefore proposed that choosing an appropriate constant as the kernel size for the remaining terms is more effective. In addition, experimental results show that, compared with the conventional algorithm, the proposed method improves learning performance by about 2 dB of steady-state MSE at the same convergence rate. In an experiment with channel models, the proposed method improves performance by 4 dB, so it is more suitable for more complex or inferior conditions.
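
An MCC weight update with a constant kernel size, in line with the abstract's recommendation, can be sketched as a stochastic-gradient adaptive linear filter (the filter structure and step size are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def mcc_filter(X, d, sigma=1.0, mu=0.05):
    """Train a linear filter by stochastic gradient ascent on the
    correntropy of the error, exp(-e**2 / (2*sigma**2)), with the
    kernel size sigma held constant rather than adapted."""
    w = np.zeros(X.shape[1])
    for x, dk in zip(X, d):
        e = dk - w @ x
        # gradient of the Gaussian correntropy term w.r.t. w
        w += mu * np.exp(-e ** 2 / (2 * sigma ** 2)) * (e / sigma ** 2) * x
    return w
```

The exponential factor shrinks the step for large (outlier) errors, which is the robustness that distinguishes MCC from plain LMS.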

A study on semi-supervised kernel ridge regression estimation (준지도 커널능형회귀모형에 관한 연구)

  • Seok, Kyungha
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.2
    • /
    • pp.341-353
    • /
    • 2013
  • In many practical machine learning and data mining applications, unlabeled data are inexpensive and easy to obtain. Semi-supervised learning tries to use such data to improve prediction performance. In this paper, a semi-supervised regression method, semi-supervised kernel ridge regression estimation, is proposed on the basis of the kernel ridge regression model. The proposed method does not require a pilot estimate of the labels of the unlabeled data. This gives the proposed method several advantages, including fewer parameters, easy computation, and good generalization ability. Experiments show that the proposed method can effectively use unlabeled data to improve regression estimation.