• Title/Summary/Keyword: Divergence problem

Search results: 110

A NEW EXPONENTIAL DIRECTED DIVERGENCE INFORMATION MEASURE

  • JAIN, K.C.; CHHABRA, PRAPHULL
    • Journal of applied mathematics & informatics / v.34 no.3_4 / pp.295-308 / 2016
  • Depending upon the nature of the problem, different divergence measures are suitable, so it is always desirable to develop new divergence measures. In the present work, a new information divergence measure, which is exponential in nature, is introduced and characterized. Bounds of this new measure are obtained in terms of various symmetric and non-symmetric measures, together with numerical verification using two discrete distributions: Binomial and Poisson. Fuzzy and useful information measures corresponding to the new exponential divergence measure are also introduced.
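
The abstract does not reproduce the exponential measure itself, so as a stand-in, here is a minimal sketch of the kind of numerical verification it describes, computing the ordinary Kullback-Leibler divergence between the Binomial and Poisson distributions mentioned (the paper's own measure would replace `kl_divergence`):

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def kl_divergence(ps, qs):
    """Kullback-Leibler divergence: sum_i p_i * log(p_i / q_i)."""
    return sum(p * math.log(p / q) for p, q in zip(ps, qs) if p > 0)

# Classical approximation pair: Binomial(20, 0.1) vs Poisson(2).
n, p = 20, 0.1
support = range(n + 1)
P = [binom_pmf(k, n, p) for k in support]
Q = [poisson_pmf(k, n * p) for k in support]
kl_bp = kl_divergence(P, Q)
print(kl_bp)  # small positive value: the two pmfs are close
```

Any candidate divergence measure can be sanity-checked the same way: it should be non-negative and small for such a close pair of distributions.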

FULLY DISCRETE MIXED FINITE ELEMENT METHOD FOR A QUASILINEAR STEFAN PROBLEM WITH A FORCING TERM IN NON-DIVERGENCE FORM

  • Lee, H.Y.; Ohm, M.R.; Shin, J.Y.
    • Journal of applied mathematics & informatics / v.24 no.1_2 / pp.191-207 / 2007
  • Based on a mixed Galerkin approximation, we construct fully discrete approximations of $U_y$ as well as $U$ for a single-phase quasilinear Stefan problem with a forcing term in non-divergence form. We prove the optimal convergence of the approximation to the solution $\{U, S\}$ and the superconvergence of the approximation to $U_y$.

Bayesian Model Selection in the Unbalanced Random Effect Model

  • Kim, Dal-Ho; Kang, Sang-Gil; Lee, Woo-Dong
    • Journal of the Korean Data and Information Science Society / v.15 no.4 / pp.743-752 / 2004
  • In this paper, we develop a Bayesian model selection procedure using the reference prior for comparing two nested models, namely the independent and intraclass models, using the distance or divergence between the two as the basis of comparison. A suitable criterion for this is the power divergence measure introduced by Cressie and Read (1984). Such a measure includes the Kullback-Leibler and Hellinger divergence measures as special cases. For this problem, the power divergence measure turns out to be a function solely of $\rho$, the intraclass correlation coefficient. Moreover, this function is convex, and the minimum is attained at $\rho=0$. We use the reference prior for $\rho$. Due to the duality between hypothesis tests and set estimation, the hypothesis testing problem can also be solved by solving a corresponding set estimation problem. The present paper develops a Bayesian method based on the Kullback-Leibler and Hellinger divergence measures, rejecting $H_0:\rho=0$ when the specified divergence measure exceeds some number d. This number d is chosen so that the resulting credible interval for the divergence measure has specified coverage probability $1-{\alpha}$. The length of such an interval is compared with the equal two-tailed credible interval and the HPD credible interval for $\rho$ with the same coverage probability, which can also be inverted into acceptance regions of $H_0:\rho=0$. An example is considered in which the HPD interval based on the one-at-a-time reference prior turns out to be the shortest credible interval having the same coverage probability.
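
The Cressie-Read power divergence family and its special cases can be sketched numerically (a minimal illustration; note that normalization conventions differ by constant factors across sources — here we use the one in which $\lambda \to 0$ recovers the Kullback-Leibler divergence exactly):

```python
import math

def power_divergence(ps, qs, lam):
    """Cressie-Read family: (1 / (lam*(lam+1))) * sum_i p_i * ((p_i/q_i)^lam - 1)."""
    return sum(p * ((p / q) ** lam - 1) for p, q in zip(ps, qs) if p > 0) / (lam * (lam + 1))

def kl(ps, qs):
    return sum(p * math.log(p / q) for p, q in zip(ps, qs) if p > 0)

def hellinger_sq(ps, qs):
    """Squared Hellinger distance: 1 - sum_i sqrt(p_i * q_i)."""
    return 1 - sum(math.sqrt(p * q) for p, q in zip(ps, qs))

P = [0.2, 0.5, 0.3]
Q = [0.3, 0.3, 0.4]
cr_near_zero = power_divergence(P, Q, 1e-8)   # lam -> 0 recovers KL
cr_half = power_divergence(P, Q, -0.5)        # lam = -1/2 gives 4 * squared Hellinger
print(cr_near_zero, kl(P, Q))
print(cr_half, 4 * hellinger_sq(P, Q))
```

The two printed pairs agree, which is the "special cases" claim of the abstract in executable form.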


AN ITERATIVE DISTRIBUTED SOURCE METHOD FOR THE DIVERGENCE OF SOURCE CURRENT IN EEG INVERSE PROBLEM

  • Choi, Jong-Ho; Lee, Chang-Ock; Jung, Hyun-Kyo
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.12 no.3 / pp.191-199 / 2008
  • This paper proposes a new method for the inverse problem of three-dimensional reconstruction of the electrical activity of the brain from electroencephalography (EEG). Compared to conventional direct methods that require additional parameters, the proposed approach solves the EEG inverse problem iteratively without any parameter. We describe the Lagrangian corresponding to the minimization problem and suggest a numerical inversion algorithm. Restricting the influence space and using the lead field matrix reduce the computational cost of this approach. The reconstructed divergence of the primary current converges to a reasonable distribution for a three-dimensional spherical head model.
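
For context on the role of the lead field matrix, here is a toy sketch of the classical Tikhonov-regularized minimum-norm inversion that the paper's parameter-free iterative scheme is positioned against (all sizes and the `lam` value are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Toy sizes: 32 electrodes, 500 candidate source locations.
rng = np.random.default_rng(0)
L = rng.standard_normal((32, 500))  # lead field matrix (maps sources to electrodes)
s_true = np.zeros(500)
s_true[[40, 210]] = 1.0             # two active sources
b = L @ s_true                      # simulated scalp measurements

# Regularized minimum-norm solution; lam is exactly the kind of extra
# parameter the paper's iterative method is designed to avoid.
lam = 1e-2
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(32), b)
print(float(s_hat @ s_true))        # positive: estimate aligns with the true sources
```

Because the problem is heavily underdetermined (32 measurements, 500 unknowns), the minimum-norm estimate smears the sources; that ill-posedness is what makes the choice of regularization, or its avoidance, central.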


UNDERSTANDING NON-NEGATIVE MATRIX FACTORIZATION IN THE FRAMEWORK OF BREGMAN DIVERGENCE

  • KIM, KYUNGSUP
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.25 no.3 / pp.107-116 / 2021
  • We introduce optimization algorithms using the Bregman divergence for solving non-negative matrix factorization (NMF) problems. The Bregman divergence is known to generalize several divergences, such as the Frobenius norm and the KL divergence. Some algorithms are applicable not only to NMF with the Frobenius norm but also to NMF with more general Bregman divergences. Matrix factorization is a popular non-convex optimization problem, for which alternating minimization schemes are mostly used. We develop a Bregman proximal gradient method applicable to NMF formulated with any Bregman divergence. In deriving the NMF algorithm for a Bregman divergence, we use majorization-minimization (MM) with a proper auxiliary function. We present algorithmic aspects of NMF for the Bregman divergence via MM with an auxiliary function.
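
As a minimal concrete instance of the simplest Bregman case (the Frobenius norm), here is the classic Lee-Seung multiplicative-update NMF; this is a standard baseline sketch, not the paper's Bregman proximal gradient method:

```python
import numpy as np

def nmf_frobenius(V, r, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||V - W @ H||_F^2 with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Each update multiplies by a non-negative ratio, so W, H stay non-negative.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((20, 15))   # non-negative data matrix
W, H = nmf_frobenius(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)
```

For other Bregman divergences (e.g. KL), the ratio inside the update changes but the same multiplicative, non-negativity-preserving structure carries over, which is what makes the MM/auxiliary-function viewpoint attractive.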

Test for Parameter Change based on the Estimator Minimizing Density-based Divergence Measures

  • Na, Ok-Young; Lee, Sang-Yeol; Park, Si-Yun
    • Proceedings of the Korean Statistical Society Conference / 2003.05a / pp.287-293 / 2003
  • In this paper we consider the problem of testing for parameter change based on the cusum test proposed by Lee et al. (2003). The cusum test statistic is constructed using the estimator minimizing density-based divergence measures. It is shown that, under regularity conditions, the test statistic has the limiting distribution of the supremum of a standard Brownian bridge. Simulation results demonstrate that the cusum test is robust when there are outliers.
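
A simplified cusum statistic for a mean change illustrates the Brownian-bridge limit described above (this toy version plugs in the sample mean and standard deviation rather than the minimum density power divergence estimator the paper uses):

```python
import numpy as np

def cusum_stat(x):
    """max_k |S_k - (k/n) S_n| / (sigma_hat * sqrt(n)); under H0 this converges
    in distribution to the supremum of a standard Brownian bridge."""
    n = len(x)
    s = np.cumsum(x)
    sigma = x.std(ddof=1)
    k = np.arange(1, n + 1)
    return float(np.max(np.abs(s - k / n * s[-1])) / (sigma * np.sqrt(n)))

rng = np.random.default_rng(42)
no_change = rng.standard_normal(500)
with_change = np.concatenate([rng.standard_normal(250),
                              rng.standard_normal(250) + 1.5])
print(cusum_stat(no_change), cusum_stat(with_change))
```

The statistic stays small under the null and spikes when the mean shifts mid-sample; the paper's contribution is that building it from a divergence-based estimator keeps this behavior even when outliers are present.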


Automatic Selection of the Tuning Parameter in the Minimum Density Power Divergence Estimation

  • Hong, Changkon; Kim, Youngseok
    • Journal of the Korean Statistical Society / v.30 no.3 / pp.453-465 / 2001
  • It is often the case that one wants to estimate the parameters of a distribution that follows a certain parametric model while the data are contaminated. It is well known that maximum likelihood estimators are not robust to contamination. Basu et al. (1998) proposed a robust method called minimum density power divergence estimation. In this paper, we investigate data-driven selection of the tuning parameter $\alpha$ in minimum density power divergence estimation. A criterion is proposed and its performance is studied through simulation, which includes three estimation problems.
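
A minimal sketch of the Basu et al. (1998) estimator for a normal location parameter (known unit variance, a fixed illustrative $\alpha=0.5$, and a simple grid search; the paper's topic is precisely how to choose $\alpha$ automatically instead of fixing it):

```python
import math
import numpy as np

def dpd_objective(mu, x, alpha, sigma=1.0):
    """Density power divergence criterion for N(mu, sigma^2):
    integral of f^(1+alpha) minus (1+alpha)/alpha * mean(f(x_i)^alpha)."""
    phi = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    # Closed form: int phi^(1+alpha) dx = (2*pi)^(-alpha/2) * sigma^(-alpha) / sqrt(1+alpha)
    integral = (2 * math.pi) ** (-alpha / 2) * sigma ** (-alpha) / math.sqrt(1 + alpha)
    return integral - (1 + alpha) / alpha * np.mean(phi ** alpha)

rng = np.random.default_rng(0)
x = np.concatenate([rng.standard_normal(100), [50.0]])   # one gross outlier

grid = np.linspace(-2, 2, 2001)
mdpde = grid[np.argmin([dpd_objective(m, x, alpha=0.5) for m in grid])]
print(x.mean(), mdpde)   # the mean is dragged toward the outlier; the MDPDE is not
```

The outlier at 50 contributes almost nothing to the downweighted term `phi ** alpha` near the true location, which is the mechanism behind the robustness claim; larger $\alpha$ buys more robustness at some efficiency cost, hence the need for a data-driven choice.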


Convergence study of traditional 2D/1D coupling method for k-eigenvalue neutron transport problems with Fourier analysis

  • Boran Kong; Kaijie Zhu; Han Zhang; Chen Hao; Jiong Guo; Fu Li
    • Nuclear Engineering and Technology / v.55 no.4 / pp.1350-1364 / 2023
  • The 2D/1D coupling method is an important neutron transport calculation method due to its high accuracy and relatively low computational cost. However, the 2D/1D coupling method may diverge, especially for small axial mesh sizes. To analyze its convergence behavior, a Fourier analysis for k-eigenvalue neutron transport problems is implemented. The analysis results reveal the divergence problem of the 2D/1D coupling method for small axial mesh sizes. Several common attempts are made to resolve the divergence problem: increasing the number of inner iterations of the 2D or 1D calculation, and performing the 1D calculation twice per outer iteration. However, these attempts can only improve the convergence rate; they cannot thoroughly resolve the divergence problem of the 2D/1D coupling method. Moreover, the choice of axial solver, such as DGFEM SN or traditional SN, and its effect on the convergence behavior are also discussed. The results show that the choice of axial solver is a key point for the convergence of the 2D/1D method. The DGFEM SN based 2D/1D method converges within a wide range of optical thicknesses, which is superior to the traditional SN method.
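
The core idea behind such a Fourier (spectral) convergence analysis can be shown on a generic linear iteration: a scheme of the form x_{k+1} = M x_k + b converges for every starting point if and only if the spectral radius of M is below one. The matrices here are toy stand-ins, not transport operators:

```python
import numpy as np

def spectral_radius(M):
    """Largest eigenvalue magnitude; the iteration x_{k+1} = M x_k + b
    converges for any start iff this is strictly below 1."""
    return float(max(abs(np.linalg.eigvals(M))))

M_good = np.array([[0.5, 0.2],
                   [0.1, 0.4]])   # rho < 1: iteration converges
M_bad = np.array([[1.1, 0.0],
                  [0.3, 0.9]])    # rho > 1: iteration diverges
print(spectral_radius(M_good), spectral_radius(M_bad))
```

In the paper's setting, the Fourier analysis derives how the spectral radius of the 2D/1D iteration operator depends on the axial mesh size and the choice of axial solver, which is how the divergence regime is identified.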

Robust Non-negative Matrix Factorization with β-Divergence for Speech Separation

  • Li, Yinan; Zhang, Xiongwei; Sun, Meng
    • ETRI Journal / v.39 no.1 / pp.21-29 / 2017
  • This paper addresses the problem of unsupervised speech separation based on robust non-negative matrix factorization (RNMF) with ${\beta}$-divergence, when neither speech nor noise training data is available beforehand. We propose a robust version of non-negative matrix factorization, inspired by the recently developed sparse and low-rank decomposition, in which the data matrix is decomposed into the sum of a low-rank matrix and a sparse matrix. Efficient multiplicative update rules to minimize the ${\beta}$-divergence-based cost function are derived. A convolutional extension of the proposed algorithm is also presented, which considers the time dependency of the non-negative noise bases. Experimental speech separation results show that the proposed convolutional RNMF successfully separates the repeating time-varying spectral structures from the magnitude spectrum of the mixture, and does so without any prior training.
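
The ${\beta}$-divergence underlying this cost function has a standard elementwise form, sketched here with its three well-known special cases (Itakura-Saito at ${\beta}=0$, generalized KL at ${\beta}=1$, half the squared Euclidean distance at ${\beta}=2$):

```python
import math

def beta_divergence(x, y, beta):
    """Elementwise beta-divergence d_beta(x | y) for x, y > 0."""
    if beta == 0:   # Itakura-Saito
        return x / y - math.log(x / y) - 1
    if beta == 1:   # generalized Kullback-Leibler
        return x * math.log(x / y) - x + y
    # General case (beta != 0, 1)
    return (x**beta + (beta - 1) * y**beta - beta * x * y ** (beta - 1)) / (beta * (beta - 1))

print(beta_divergence(3.0, 2.0, 2))   # (3 - 2)^2 / 2 = 0.5
```

In NMF for audio, ${\beta}$ interpolates between scale-sensitive (Euclidean) and scale-invariant (Itakura-Saito) fits of the magnitude spectrum, which is why it is the natural cost family for speech separation.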