Title/Summary/Keyword: Kullback-Leibler


Kullback-Leibler Information of Consecutive Order Statistics

  • Kim, Ilmun; Park, Sangun
    • Communications for Statistical Applications and Methods / v.22 no.5 / pp.487-494 / 2015
  • A calculation of the Kullback-Leibler information of consecutive order statistics is complicated because it depends on a multi-dimensional integral. Park (2014) expressed the Kullback-Leibler information of the first r order statistics in terms of the hazard function, simplifying the r-fold integral to a single integral. In this paper, we first express the Kullback-Leibler information in terms of the reversed hazard function. We then generalize the result of Park (2014) to an arbitrary block of consecutive order statistics and derive a single-integral form of its Kullback-Leibler information; in addition, its relation to the Fisher information of order statistics is discussed, with numerical examples provided.
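As background for this abstract, once the Kullback-Leibler information is reduced to a single integral, KL(f ‖ g) = ∫ f(x) log(f(x)/g(x)) dx can be approximated by ordinary quadrature. A minimal sketch (function names and the quadrature grid are illustrative, not from the paper):

```python
import numpy as np

def kl_information(f, g, grid):
    """Approximate KL(f || g) = int f(x) log(f(x)/g(x)) dx by the
    trapezoidal rule on a fixed grid; the integrand is taken as 0
    where f vanishes."""
    fx, gx = f(grid), g(grid)
    integrand = np.zeros_like(fx)
    mask = fx > 0
    integrand[mask] = fx[mask] * np.log(fx[mask] / gx[mask])
    return float(np.sum((integrand[:-1] + integrand[1:]) / 2 * np.diff(grid)))

# Example: KL between Exp(1) and Exp(2) densities; the closed form is
# log(lambda1/lambda2) + lambda2/lambda1 - 1 = 1 - log(2) ~ 0.3069.
grid = np.linspace(0.0, 30.0, 30001)
f = lambda x: np.exp(-x)
g = lambda x: 2.0 * np.exp(-2.0 * x)
kl = kl_information(f, g, grid)
```

The same routine applies to any pair of densities supported on the grid; only the tail truncation point needs adjusting.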

On Estimating of Kullback-Leibler Information Function using Three Step Stress Accelerated Life Test

  • Park, Byung-Gu; Yoon, Sang-Chul; Cho, Ji-Young
    • International Journal of Reliability and Applications / v.1 no.2 / pp.155-165 / 2000
  • In this paper, we propose estimators of Kullback-Leibler information functions using data from three-step stress accelerated life tests. The acceleration model is assumed to be a tampered random variable model. Some asymptotic properties of the proposed estimators are proved, and simulations compare the small-sample properties of the proposed estimators under the use condition of the accelerated life test.

Exponentiality Test of the Three Step-Stress Accelerated Life Testing Model based on Kullback-Leibler Information

  • Park, Byung-Gu; Yoon, Sang-Chul; Lee, Jeong-Eun
    • Journal of the Korean Data and Information Science Society / v.14 no.4 / pp.951-963 / 2003
  • In this paper, we propose goodness-of-fit test statistics based on estimated Kullback-Leibler information functions using data from a three-step stress accelerated life test. The acceleration model is assumed to be a tampered random variable model. The power of the proposed test under various alternatives is compared with the Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling statistics.

Kullback-Leibler Information in View of an Extended Version of κ-Records

  • Ahmadi, Mosayeba; Mohtashami Borzadaran, G.R.
    • Communications for Statistical Applications and Methods / v.20 no.1 / pp.1-13 / 2013
  • This paper introduces an extended version of κ-records. The Kullback-Leibler (K-L) information between two generalized distributions arising from κ-records is derived; subsequently, it is shown that the K-L information does not depend on the baseline distribution. The behavior of the K-L information for order statistics and κ-records is studied. Exact expressions for the K-L information between the distributions of order statistics and upper (lower) κ-records are obtained, and some special cases are provided.

Analysis of Large Tables (대규모 분할표 분석)

  • Choi, Hyun-Jip
    • The Korean Journal of Applied Statistics / v.18 no.2 / pp.395-410 / 2005
  • For the analysis of large contingency tables formed by many categorical variables, we suggest a method that partitions the variables into several disjoint groups within which the variables are completely associated. We use a simple function of the Kullback-Leibler divergence as a similarity measure to find the groups. Since the groups are complete hierarchical sets, the association structure of a large table can be identified by marginal log-linear models. Examples illustrate the suggested method.
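One concrete KL-based association measure between two categorical variables is the mutual information, i.e. the KL divergence between the joint distribution of a two-way table and the product of its marginals; whether this matches the paper's exact "simple function" of the KL divergence is an assumption of this sketch:

```python
import numpy as np

def kl_association(table):
    """KL divergence between the joint distribution of a two-way
    contingency table and the product of its marginals (the mutual
    information); 0 means independence, larger values mean stronger
    association."""
    p = np.asarray(table, dtype=float)
    p = p / p.sum()
    # Outer product of the row and column marginals
    outer = p.sum(axis=1, keepdims=True) @ p.sum(axis=0, keepdims=True)
    mask = p > 0  # terms with p = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / outer[mask])))
```

A diagonal (completely associated) k-by-k table attains log k, while any independent table gives 0, so the measure orders variable pairs by strength of association.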

Video Content Indexing using Kullback-Leibler Distance

  • Kim, Sang-Hyun
    • International Journal of Contents / v.5 no.4 / pp.51-54 / 2009
  • In huge video databases, an effective video content indexing method is required. Manual indexing is the most effective approach to this goal, but it is slow and expensive; automatic indexing is therefore desirable, and various indexing tools for video databases have recently been developed. For efficient video content indexing, the similarity measure is an important factor. This paper presents new similarity measures between frames and proposes a new algorithm to index video content using the Kullback-Leibler distance defined between two histograms. Experimental results show that the proposed algorithm gives remarkably high accuracy ratios compared with several conventional video content indexing algorithms.
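The frame-similarity idea can be sketched as a symmetrized KL distance between normalized intensity histograms; the exact measure in the paper may differ, and the smoothing constant here is an assumption to keep log(0) out of the computation:

```python
import numpy as np

def kl_distance(h1, h2, eps=1e-9):
    """Symmetrized Kullback-Leibler distance between two histograms.
    Raw counts are smoothed by eps and normalized to probabilities."""
    p = np.asarray(h1, dtype=float) + eps
    q = np.asarray(h2, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# A large distance between the histograms of consecutive frames can be
# used to flag an abrupt content change (e.g. a shot boundary).
```

Symmetrizing matters here because plain KL is asymmetric, while a frame-to-frame similarity should not depend on which frame comes first.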

Performance Improvement of Ensemble Speciated Neural Networks using Kullback-Leibler Entropy (Kullback-Leibler 엔트로피를 이용한 종분화 신경망 결합의 성능향상)

  • Kim, Kyung-Joong; Cho, Sung-Bae
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.4 / pp.152-159 / 2002
  • Fitness sharing, which shares fitness whenever the distance between two individuals is smaller than a sharing radius, is a representative speciation method and can complement evolutionary algorithms that converge to a single solution. Recently there has been much research on designing neural network architectures with evolutionary algorithms, but most of it uses only the fittest solution of the last generation. In this paper, we generate diverse neural networks using fitness sharing and combine them to compute outputs; we then propose computing the distance between individuals with a modified Kullback-Leibler entropy to improve fitness-sharing performance. In experiments on the Australian credit card assessment, breast cancer, and diabetes datasets from the UCI database, the proposed method performs better not only than simple output averaging or Pearson correlation but also than previously published methods.
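A minimal sketch of the two ingredients the abstract combines, fitness sharing driven by a symmetrized KL distance between network output distributions (the triangular sharing kernel and all names here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def shared_fitness(raw_fitness, outputs, sigma_share):
    """Divide each individual's raw fitness by a niche count built from
    pairwise symmetrized KL distances between the individuals' output
    distributions (rows of `outputs`, assumed nonnegative)."""
    eps = 1e-12
    P = np.asarray(outputs, dtype=float) + eps
    P = P / P.sum(axis=1, keepdims=True)  # normalize rows to probabilities
    n = P.shape[0]
    niche = np.zeros(n)
    for i in range(n):
        for j in range(n):
            # symmetrized KL distance between output distributions i and j
            d = float(np.sum(P[i] * np.log(P[i] / P[j]))
                      + np.sum(P[j] * np.log(P[j] / P[i])))
            if d < sigma_share:
                niche[i] += 1.0 - d / sigma_share  # triangular sharing kernel
    return np.asarray(raw_fitness, dtype=float) / niche
```

Individuals whose outputs crowd the same niche see their fitness deflated, which preserves the behavioral diversity the ensemble later exploits.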

A study on bandwidth selection based on ASE for nonparametric density estimators

  • Kim, Tae-Yoon
    • Journal of the Korean Statistical Society / v.29 no.3 / pp.307-313 / 2000
  • Suppose we have a set of data X1, ···, Xn and employ a kernel density estimator to estimate the marginal density of X. In this article, the bandwidth selection problem for kernel density estimators is examined closely. In particular, the Kullback-Leibler method, a bandwidth selection method based on the average square error (ASE), is considered.
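One standard Kullback-Leibler flavored bandwidth selector is likelihood (KL) cross-validation for a Gaussian kernel estimator; whether this is exactly the ASE-based variant studied in the paper is not assumed here:

```python
import numpy as np

def lcv_bandwidth(x, bandwidths):
    """Pick the bandwidth maximizing the leave-one-out log-likelihood
    of a Gaussian kernel density estimator (likelihood, i.e.
    Kullback-Leibler, cross-validation)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    best_h, best_ll = None, -np.inf
    for h in bandwidths:
        # n x n kernel matrix: K((x_i - x_j) / h) / h for a Gaussian kernel
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        np.fill_diagonal(K, 0.0)          # leave x_i out of its own estimate
        loo = K.sum(axis=1) / (n - 1)     # density estimate at x_i without x_i
        ll = float(np.sum(np.log(loo + 1e-300)))
        if ll > best_ll:
            best_h, best_ll = h, ll
    return best_h
```

Maximizing the leave-one-out log-likelihood is equivalent to minimizing an estimate of the KL divergence between the true density and the kernel estimate, which is what makes this a "Kullback-Leibler method".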

An Estimation of Cumulative Exposure Model based on Kullback-Leibler Information Function (쿨백-라이블러 정보함수를 이용한 누적노출모형 추정)

  • Ahn, Jung-Hyang; Yoon, Sang-Chul
    • Journal of Korea Society of Industrial Information Systems / v.9 no.2 / pp.1-8 / 2004
  • In this paper, we propose three estimators of Kullback-Leibler information functions using data from accelerated life tests. The acceleration model is assumed to be a cumulative exposure model. Some asymptotic properties of the proposed estimators are proved, and simulations compare the small-sample properties of the proposed estimators under the use condition of the accelerated life test.

Test of Exponentiality in the Step Stress Accelerated Life Test Model based on Kullback-Leibler Information Function (쿨백-라이블러 정보함수를 이용한 단계 스트레스 가속수명모형의 지수성 검정)

  • Park, Byung-Gu; Yoon, Sang-Chul
    • Journal of Korean Society for Quality Management / v.31 no.4 / pp.194-202 / 2003
  • In this paper, we propose goodness-of-fit test statistics for exponentiality based on Kullback-Leibler information functions, using data from accelerated life tests. The acceleration model is assumed to be a tampered random variable model. The procedure is applicable whether or not the exponential parameter is specified under the null hypothesis. We compare the small-sample power of the proposed test statistics with the Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling statistics.
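For context, a classical KL-based exponentiality statistic (in the spirit of Ebrahimi, Habibullah and Soofi, 1992) estimates KL(f ‖ Exp(x̄)) as −H_mn + log x̄ + 1, where H_mn is Vasicek's m-spacing entropy estimator; this sketch shows that generic statistic, not the paper's step-stress version:

```python
import numpy as np

def kl_exponentiality_stat(x, m=None):
    """Estimated KL(f || Exp(mean)) = -H_mn + log(xbar) + 1, with H_mn
    Vasicek's m-spacing entropy estimator. Values near 0 are consistent
    with exponentiality; large values indicate departure from it."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if m is None:
        m = max(1, int(round(np.sqrt(n))))  # common window-size heuristic
    # Vasicek's entropy estimator from m-spacings, truncated at the edges
    idx = np.arange(n)
    lo = np.clip(idx - m, 0, n - 1)
    hi = np.clip(idx + m, 0, n - 1)
    H = np.mean(np.log(n / (2.0 * m) * (x[hi] - x[lo])))
    return float(-H + np.log(x.mean()) + 1.0)
```

Because the exponential maximizes entropy among nonnegative distributions with a fixed mean, the statistic is asymptotically nonnegative and its rejection region lies in the right tail.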