• Title/Summary/Keyword: Divergence Measure

A NEW EXPONENTIAL DIRECTED DIVERGENCE INFORMATION MEASURE

  • JAIN, K.C.;CHHABRA, PRAPHULL
    • Journal of applied mathematics & informatics
    • /
    • v.34 no.3_4
    • /
    • pp.295-308
    • /
    • 2016
  • Depending upon the nature of the problem, different divergence measures are suitable, so it is always desirable to develop new divergence measures. In the present work, a new information divergence measure, which is exponential in nature, is introduced and characterized. Bounds of this new measure are obtained in terms of various symmetric and non-symmetric measures, together with numerical verification using two discrete distributions: Binomial and Poisson. A fuzzy information measure and a useful information measure corresponding to the new exponential divergence measure are also introduced.
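
A rough illustration of the kind of numerical verification described above, comparing a divergence evaluated on a Binomial and a Poisson distribution over a truncated common support. The Kullback-Leibler divergence is used only as a stand-in; the paper's exponential measure is not reproduced here, and the distribution parameters below are arbitrary.

```python
import numpy as np
from scipy.stats import binom, poisson

n, p = 10, 0.3
support = np.arange(0, n + 1)

P = binom.pmf(support, n, p)
Q = poisson.pmf(support, n * p)          # Poisson with matching mean
P, Q = P / P.sum(), Q / Q.sum()          # renormalize after truncating the support

def kl(a, b, eps=1e-12):
    """Kullback-Leibler divergence, a stand-in for the paper's exponential measure."""
    return float(np.sum(a * np.log((a + eps) / (b + eps))))

print(kl(P, Q), kl(Q, P))
```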

Bayesian Model Selection in the Unbalanced Random Effect Model

  • Kim, Dal-Ho;Kang, Sang-Gil;Lee, Woo-Dong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.15 no.4
    • /
    • pp.743-752
    • /
    • 2004
  • In this paper, we develop a Bayesian model selection procedure using the reference prior for comparing two nested models, such as the independent and intraclass models, using the distance or divergence between the two as the basis of comparison. A suitable criterion for this is the power divergence measure introduced by Cressie and Read (1984). Such a measure includes the Kullback-Leibler divergence measure and the Hellinger divergence measure as special cases. For this problem, the power divergence measure turns out to be a function solely of $\rho$, the intraclass correlation coefficient. Also, this function is convex, and the minimum is attained at $\rho=0$. We use the reference prior for $\rho$. Due to the duality between hypothesis tests and set estimation, the hypothesis testing problem can also be solved by solving a corresponding set estimation problem. The present paper develops a Bayesian method based on the Kullback-Leibler and Hellinger divergence measures, rejecting $H_0:\rho=0$ when the specified divergence measure exceeds some number d. This number d is chosen so that the resulting credible interval for the divergence measure has specified coverage probability $1-{\alpha}$. The length of such an interval is compared with the equal two-tailed credible interval and the HPD credible interval for $\rho$ with the same coverage probability, which can also be inverted into acceptance regions of $H_0:\rho=0$. An example is considered where the HPD interval based on the one-at-a-time reference prior turns out to be the shortest credible interval having the same coverage probability.
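
For reference, the power divergence family of Cressie and Read (1984) mentioned above can be written, for discrete probability vectors $p=(p_1,\dots,p_k)$ and $q=(q_1,\dots,q_k)$, as

$$ I_{\lambda}(p\,;q) \;=\; \frac{1}{\lambda(\lambda+1)} \sum_{i=1}^{k} p_i\left[\left(\frac{p_i}{q_i}\right)^{\lambda} - 1\right], \qquad \lambda \neq 0,\,-1, $$

with the remaining cases defined by continuity: $\lambda \to 0$ recovers the Kullback-Leibler divergence $\sum_i p_i \log(p_i/q_i)$, and $\lambda = -1/2$ gives $4\bigl(1-\sum_i \sqrt{p_i q_i}\bigr)$, four times the squared Hellinger distance, which is how the two special cases named in the abstract arise.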

Improving the Performance of Document Clustering with Distributional Similarities (분포유사도를 이용한 문헌클러스터링의 성능향상에 대한 연구)

  • Lee, Jae-Yun
    • Journal of the Korean Society for Information Management
    • /
    • v.24 no.4
    • /
    • pp.267-283
    • /
    • 2007
  • In this study, measures of distributional similarity such as KL-divergence are applied to cluster documents instead of the traditional cosine measure, which is the most prevalent vector similarity measure for document clustering. Three variations of KL-divergence are investigated: Jensen-Shannon divergence, symmetric skew divergence, and minimum skew divergence. In order to verify the contribution of distributional similarities to document clustering, two experiments are designed and carried out on three test collections. In the first experiment, the clustering performances of the three divergence measures are compared to that of the cosine measure. The result showed that minimum skew divergence outperformed the other divergence measures as well as the cosine measure. In the second experiment, second-order distributional similarities are calculated with the Pearson correlation coefficient from the first-order similarity matrices. The result of the second experiment showed that second-order distributional similarities improve the overall performance of document clustering. These results suggest that minimum skew divergence should be selected as the document vector similarity measure when both time and accuracy are considered, and that second-order similarity is a good choice when only clustering accuracy matters.
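
A minimal Python sketch of the divergence variants named above, applied to term-probability vectors. The skew parameter value and the way the two skewed directions are combined into "symmetric" and "minimum" skew divergence are assumptions made for illustration; the paper's exact definitions may differ.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two term-probability vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: symmetric, smoothed variant of KL."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def skew(p, q, alpha=0.99):
    """Skew divergence: KL against a mixture, avoiding zero-probability terms."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return kl(p, alpha * q + (1 - alpha) * p)

def symmetric_skew(p, q, alpha=0.99):
    # assumed symmetrization: average of the two skewed directions
    return 0.5 * (skew(p, q, alpha) + skew(q, p, alpha))

def minimum_skew(p, q, alpha=0.99):
    # assumed variant: the smaller of the two skewed directions
    return min(skew(p, q, alpha), skew(q, p, alpha))
```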

PATTERSON-SULLIVAN MEASURE AND GROUPS OF DIVERGENCE TYPE

  • Hong, Sungbok
    • Bulletin of the Korean Mathematical Society
    • /
    • v.30 no.2
    • /
    • pp.223-228
    • /
    • 1993
  • In this paper, we use the Patterson-Sullivan measure and results of [H] to show that for a nonelementary discrete group of divergence type, the conical limit set $\Lambda_c$ has positive Patterson-Sullivan measure. The definition of the Patterson-Sullivan measure for groups of divergence type is reviewed in section 2. The Patterson-Sullivan measure can also be defined for groups of convergence type, and the details for that case can be found in [N]. Necessary definitions and results from [H] are given in section 3, and in section 4 we prove our main result.
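
For context, a discrete group $\Gamma$ of isometries of hyperbolic space with critical exponent $\delta$ is said to be of divergence type when its Poincaré series diverges at the critical exponent,

$$ \sum_{\gamma \in \Gamma} e^{-s\, d(o,\gamma o)} = \infty \quad \text{at } s = \delta, $$

and of convergence type when the series converges there; the Patterson-Sullivan construction uses this series to build a $\delta$-conformal measure supported on the limit set.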

Modeling and Classification of MPEG VBR Video Data using Gradient-based Fuzzy c-means with Divergence Measure (분산 기반의 Gradient Based Fuzzy c-means 에 의한 MPEG VBR 비디오 데이터의 모델링과 분류)

  • 박동철;김봉주
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.7C
    • /
    • pp.931-936
    • /
    • 2004
  • GBFCM(DM), Gradient-based Fuzzy c-means with Divergence Measure, is proposed in this paper for efficient clustering of GPDFs (Gaussian Probability Density Functions) in MPEG VBR video data modeling. The proposed GBFCM(DM) is based on GBFCM (Gradient-based Fuzzy c-means) with the divergence as its distance measure. Sets of real-time MPEG VBR video traffic data are considered. Each group of 12 frames of MPEG VBR video data is first transformed into 12-dimensional data for modeling, and the transformed 12-dimensional data are passed through the proposed GBFCM(DM) for classification. The GBFCM(DM) is compared with conventional FCM and GBFCM algorithms. The results show that the GBFCM(DM) gives a 5∼15% improvement in false alarm rate over conventional algorithms such as FCM and GBFCM.
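
As an illustration of a divergence-type distance between Gaussian probability density functions of the kind GBFCM(DM) clusters, the sketch below uses the symmetric Kullback-Leibler divergence between two diagonal-covariance Gaussians; treating this as the paper's exact divergence measure is an assumption.

```python
import numpy as np

def gaussian_divergence(mu1, var1, mu2, var2):
    """Symmetric KL divergence between two diagonal-covariance Gaussians.

    mu*, var* are 1-D arrays, e.g. 12-dimensional (one component per frame position).
    """
    mu1, var1 = np.asarray(mu1, float), np.asarray(var1, float)
    mu2, var2 = np.asarray(mu2, float), np.asarray(var2, float)
    diff2 = (mu1 - mu2) ** 2
    per_dim = 0.5 * (var1 / var2 + var2 / var1 - 2.0
                     + diff2 * (1.0 / var1 + 1.0 / var2))
    return float(per_dim.sum())
```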

Deriving a New Divergence Measure from Extended Cross-Entropy Error Function

  • Oh, Sang-Hoon;Wakuya, Hiroshi;Park, Sun-Gyu;Noh, Hwang-Woo;Yoo, Jae-Soo;Min, Byung-Won;Oh, Yong-Sun
    • International Journal of Contents
    • /
    • v.11 no.2
    • /
    • pp.57-62
    • /
    • 2015
  • Relative entropy is a divergence measure between two probability density functions of a random variable. Assuming that the random variable has an alphabet of only two symbols, the relative entropy becomes the cross-entropy error function, which can accelerate the training convergence of multi-layer perceptron neural networks. Also, the n-th order extension of the cross-entropy (nCE) error function exhibits improved performance in terms of learning convergence and generalization capability. In this paper, we derive a new divergence measure between two probability density functions from the nCE error function. The new divergence measure is then compared with the relative entropy using three-dimensional plots.
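
To make the two-alphabet reduction above concrete: for a target value $t$ and a network output $y$, viewed as the binary distributions $\{t, 1-t\}$ and $\{y, 1-y\}$, the relative entropy differs from the cross-entropy error only by a term independent of $y$, so minimizing one minimizes the other:

$$ D\bigl(\{t,1-t\}\,\|\,\{y,1-y\}\bigr) = t\log\frac{t}{y} + (1-t)\log\frac{1-t}{1-y} = \bigl[-t\log y - (1-t)\log(1-y)\bigr] - H(t), $$

where $-t\log y - (1-t)\log(1-y)$ is the cross-entropy error and $H(t) = -t\log t - (1-t)\log(1-t)$. The n-th order extension (nCE) and the divergence derived from it are not reproduced here.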

ON THE GOODNESS OF FIT TEST FOR DISCRETELY OBSERVED SAMPLE FROM DIFFUSION PROCESSES: DIVERGENCE MEASURE APPROACH

  • Lee, Sang-Yeol
    • Journal of the Korean Mathematical Society
    • /
    • v.47 no.6
    • /
    • pp.1137-1146
    • /
    • 2010
  • In this paper, we study the divergence-based goodness of fit test for a partially observed sample from diffusion processes. In order to derive the limiting distribution of the test, we study the asymptotic behavior of the residual empirical process based on the observed sample. It is shown that the residual empirical process converges weakly to a Brownian bridge and that the associated phi-divergence test has a chi-square limiting null distribution.
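
For reference, the phi-divergence family underlying the test has the standard Csiszár form: for probability measures $P$ and $Q$ and a convex function $\phi$ with $\phi(1)=0$,

$$ D_{\phi}(P,Q) = \int \phi\!\left(\frac{dP}{dQ}\right) dQ, $$

with $\phi(t) = t\log t$ recovering the Kullback-Leibler divergence. The specific statistic built from the residual empirical process, and its chi-square null limit, are as stated in the abstract.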

NEW INFORMATION INEQUALITIES ON ABSOLUTE VALUE OF THE FUNCTIONS AND ITS APPLICATION

  • CHHABRA, PRAPHULL
    • Journal of applied mathematics & informatics
    • /
    • v.35 no.3_4
    • /
    • pp.371-385
    • /
    • 2017
  • Jain and Saraswat (2012) introduced a new generalized f-information divergence measure, from which many well-known and new information divergences can be obtained. In this work, we introduce new information inequalities in absolute form on this new generalized divergence by considering convex normalized functions. Further, we apply these inequalities to obtain new relations among well-known divergences, together with numerical verification. An application to mutual information is also presented. An asymptotic approximation in terms of the Chi-square divergence is given as well.
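
For background, the classical Csiszár f-divergence, the prototype of such f-information measures, is, for discrete distributions $P=(p_1,\dots,p_n)$ and $Q=(q_1,\dots,q_n)$ and a convex normalized function $f$ with $f(1)=0$,

$$ C_f(P,Q) = \sum_{i=1}^{n} q_i\, f\!\left(\frac{p_i}{q_i}\right), $$

and particular choices of $f$ recover the Kullback-Leibler, Chi-square, Hellinger, and other well-known divergences. The specific generalized form introduced by Jain and Saraswat (2012) is not reproduced here.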

Void Formation Induced by the Divergence of the Diffusive Ionic Fluxes in Metal Oxides Under Chemical Potential Gradients

  • Maruyama, Toshio;Ueda, Mitsutoshi
    • Journal of the Korean Ceramic Society
    • /
    • v.47 no.1
    • /
    • pp.8-18
    • /
    • 2010
  • When metal oxides are exposed to chemical potential gradients, ions undergo diffusive mass transport. During this transport process, the divergence of the ionic fluxes gives rise to the formation/annihilation of oxide, so the divergence of the ionic flux may play an important role in void formation in oxides. Kinetic equations were derived for describing the chemical potential distribution, the ionic fluxes, and their divergence in oxides. The divergence was found to be a measure of void formation. Defect chemistry in scales is directly related to the sign of the divergence and gives an indication of the void formation behavior. The quantitative estimation of void formation was successfully applied to a growing magnetite scale during the high-temperature oxidation of iron at 823 K.
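
The link between flux divergence and local growth or consumption of oxide can be written with the continuity equation; here $c_i$ is the concentration of ionic species $i$ and $J_i$ its flux, with a generic sign convention that is not necessarily the one used in the paper:

$$ \frac{\partial c_i}{\partial t} = -\,\nabla\cdot J_i , $$

so regions where $\nabla\cdot J_i > 0$ lose material locally, favoring void formation, while regions where $\nabla\cdot J_i < 0$ accumulate material, corresponding to oxide formation.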

Analysis of Large Tables (대규모 분할표 분석)

  • Choi, Hyun-Jip
    • The Korean Journal of Applied Statistics
    • /
    • v.18 no.2
    • /
    • pp.395-410
    • /
    • 2005
  • For the analysis of large tables formed by many categorical variables, we suggest a method to group the variables into several disjoint groups within which the variables are completely associated. We use a simple function of the Kullback-Leibler divergence as a similarity measure to find the groups. Since the groups are complete hierarchical sets, we can identify the association structure of the large tables by marginal log-linear models. Examples are introduced to illustrate the suggested method.
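
The abstract does not reproduce the "simple function of Kullback-Leibler divergence" used as the similarity measure. One natural KL-based association measure between two categorical variables is the mutual information of their two-way margin, i.e. the KL divergence between the joint distribution and the product of its marginals; the sketch below is this assumed stand-in, not the paper's exact formula.

```python
import numpy as np

def kl_association(table, eps=1e-12):
    """Mutual information of a two-way contingency table: the KL divergence
    between the joint distribution and the product of its marginals."""
    joint = np.asarray(table, float)
    joint = joint / joint.sum()
    row = joint.sum(axis=1, keepdims=True)
    col = joint.sum(axis=0, keepdims=True)
    indep = row @ col                      # joint under complete independence
    return float(np.sum(joint * np.log((joint + eps) / (indep + eps))))

# Example: a 2x3 margin taken from a larger contingency table
print(kl_association([[20, 5, 1], [3, 10, 15]]))
```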