• Title/Summary/Keyword: Kullback-Leibler method

Search Results: 38

Analysis of Large Tables (대규모 분할표 분석)

  • Choi, Hyun-Jip
    • The Korean Journal of Applied Statistics / v.18 no.2 / pp.395-410 / 2005
  • For the analysis of large tables formed by many categorical variables, we suggest a method to group the variables into several disjoint groups within which the variables are completely associated. We use a simple function of the Kullback-Leibler divergence as a similarity measure to find the groups. Since the groups are complete hierarchical sets, we can identify the association structure of the large tables by marginal log-linear models. Examples are introduced to illustrate the suggested method.
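
The entry above uses a simple function of the Kullback-Leibler divergence as a similarity measure between categorical variables. A minimal sketch of one such function is shown below: the KL divergence between a two-way marginal table and the independence model built from its margins (i.e., the mutual information of the pair). The function name and the toy table are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def kl_similarity(counts):
    """KL divergence between the joint distribution of a two-way table
    and the product of its marginals; larger values indicate a stronger
    association between the two categorical variables."""
    p = counts / counts.sum()              # joint distribution
    px = p.sum(axis=1, keepdims=True)      # row marginal
    py = p.sum(axis=0, keepdims=True)      # column marginal
    q = px * py                            # independence model
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Toy 3x2 contingency table for two variables of a larger table.
table = np.array([[30, 5], [10, 25], [5, 25]], dtype=float)
print(kl_similarity(table))
```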

Video Content Indexing using Kullback-Leibler Distance

  • Kim, Sang-Hyun
    • International Journal of Contents / v.5 no.4 / pp.51-54 / 2009
  • In huge video databases, an effective video content indexing method is required. While manual indexing is the most effective approach to this goal, it is slow and expensive, so automatic indexing is desirable, and various indexing tools for video databases have recently been developed. For efficient video content indexing, the similarity measure is an important factor. This paper presents new similarity measures between frames and proposes a new algorithm to index video content using the Kullback-Leibler distance defined between two histograms. Experimental results show that the proposed algorithm using the Kullback-Leibler distance gives remarkably high accuracy ratios compared with several conventional video content indexing algorithms.
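
The similarity measure here is a Kullback-Leibler distance between frame histograms. Below is a minimal sketch of a symmetrized KL distance between two normalized histograms; the smoothing constant and the symmetrization are assumptions, since the abstract does not give the exact definition used in the paper.

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def histogram_kl_distance(h1, h2, eps=1e-8):
    """Symmetrized KL distance between two frame histograms.
    The small `eps` avoids division by zero in empty bins (assumption)."""
    p = (h1 + eps) / (h1 + eps).sum()
    q = (h2 + eps) / (h2 + eps).sum()
    return kl(p, q) + kl(q, p)

# Two toy 8-bin intensity histograms from consecutive video frames.
frame_a = np.array([12, 30, 25, 10, 8, 7, 5, 3], dtype=float)
frame_b = np.array([10, 28, 27, 12, 9, 6, 5, 3], dtype=float)
print(histogram_kl_distance(frame_a, frame_b))  # small value -> similar frames
```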

A study on bandwidth selection based on ASE for nonparametric density estimators

  • Kim, Tae-Yoon
    • Journal of the Korean Statistical Society / v.29 no.3 / pp.307-313 / 2000
  • Suppose we have a set of data X1, ..., Xn and employ a kernel density estimator to estimate the marginal density of X. In this article, the bandwidth selection problem for the kernel density estimator is examined closely. In particular, the Kullback-Leibler method (a bandwidth selection method based on the average square error (ASE)) is considered.
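
As a rough illustration of KL-flavored bandwidth selection for a kernel density estimator, the sketch below picks the bandwidth that maximizes the leave-one-out log likelihood (likelihood cross-validation), a generic textbook criterion. It is not necessarily the exact criterion studied in the paper; the Gaussian kernel and the search grid are assumptions.

```python
import numpy as np

def loo_log_likelihood(x, h):
    """Leave-one-out log likelihood of a Gaussian-kernel density estimator
    with bandwidth h; maximizing it over h approximately minimizes the
    Kullback-Leibler loss of the estimator."""
    total = 0.0
    for i in range(len(x)):
        rest = np.delete(x, i)
        k = np.exp(-0.5 * ((x[i] - rest) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        total += np.log(k.mean())
    return total

rng = np.random.default_rng(0)
x = rng.normal(size=200)
grid = np.linspace(0.05, 1.0, 40)
best_h = max(grid, key=lambda h: loo_log_likelihood(x, h))
print("selected bandwidth:", best_h)
```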

Performance Improvement of Ensemble Speciated Neural Networks using Kullback-Leibler Entropy (Kullback-Leibler 엔트로피를 이용한 종분화 신경망 결합의 성능향상)

  • Kim, Kyung-Joong;Cho, Sung-Bae
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.4 / pp.152-159 / 2002
  • Fitness sharing, which shares fitness among individuals whose pairwise distance is smaller than a sharing radius, is one of the representative speciation methods and can complement evolutionary algorithms that converge to a single solution. Recently, there has been much research on designing neural network architectures with evolutionary algorithms, but most of it uses only the fittest solution of the last generation. In this paper, we generate diverse neural networks using fitness sharing and combine them to compute outputs; we then propose calculating the distance between individuals with a modified Kullback-Leibler entropy to improve fitness sharing performance. In experiments on the Australian credit card assessment, breast cancer, and diabetes datasets from the UCI database, the proposed method performs better than not only simple output averaging and Pearson correlation but also previously published methods.
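
Fitness sharing needs a distance between individuals; the paper computes it from network outputs with a modified Kullback-Leibler entropy. The sketch below uses a plain symmetric KL divergence averaged over the class-probability outputs of two networks on shared validation examples, plus the classic sharing formula; the exact modification used by the authors is not given in the abstract, so these choices are illustrative assumptions.

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-8):
    """Symmetric KL divergence between two class-probability vectors."""
    p = np.clip(p, eps, None); p = p / p.sum()
    q = np.clip(q, eps, None); q = q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def network_distance(outputs_a, outputs_b):
    """Average symmetric KL between two networks' outputs on the same
    validation examples (one row per example)."""
    return float(np.mean([symmetric_kl(a, b) for a, b in zip(outputs_a, outputs_b)]))

def shared_fitness(raw_fitness, distances_to_others, radius):
    """Classic fitness sharing: divide raw fitness by the niche count.
    The individual itself contributes 1; each other individual contributes
    1 - d/radius when its distance d is below the sharing radius."""
    niche = 1.0 + sum(1.0 - d / radius for d in distances_to_others if d < radius)
    return raw_fitness / niche

# Toy outputs of two networks on three validation examples (2-class problem).
net_a = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
net_b = np.array([[0.8, 0.2], [0.3, 0.7], [0.5, 0.5]])
d = network_distance(net_a, net_b)
print(d, shared_fitness(0.95, [d], radius=0.5))
```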

Class Determination Based on Kullback-Leibler Distance in Heart Sound Classification

  • Chung, Yong-Joo;Kwak, Sung-Woo
    • The Journal of the Acoustical Society of Korea / v.27 no.2E / pp.57-63 / 2008
  • Stethoscopic auscultation is still one of the primary tools for the diagnosis of heart diseases due to its easy accessibility and relatively low cost. It is, however, a difficult skill to acquire. Many research efforts have been made on the automatic classification of heart sound signals to support clinicians in heart sound diagnosis. Recently, hidden Markov models (HMMs) have been used quite successfully in the automatic classification of the heart sound signal. However, in classification using HMMs, there are so many heart sound signal types that it is not reasonable to assign a separate class to each of them. In this paper, rather than constructing an HMM for each signal type, we propose to build an HMM for a set of acoustically similar signal types. To define the classes, we use the KL (Kullback-Leibler) distance between different signal types to determine whether they should belong to the same class. In classification experiments on heart sound data consisting of 25 different types of signals, the proposed method proved to be quite efficient in determining the optimal set of classes. We also found that the class determination approach produced better results than the heuristic class assignment method.
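
The class-determination step groups acoustically similar signal types when their KL distance is small. The sketch below illustrates the grouping idea with a symmetrized KL distance between simple univariate Gaussian summaries of each signal type and threshold-based merging; the Gaussian modelling and the threshold are assumptions standing in for the HMM-level distance the paper actually uses.

```python
import numpy as np

def gaussian_kl(m0, v0, m1, v1):
    """KL divergence between univariate Gaussians N(m0, v0) and N(m1, v1)."""
    return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)

def merge_types(models, threshold):
    """Place two signal types in the same class when their symmetric KL
    distance falls below `threshold`. `models` is a list of (mean, var)."""
    classes = []
    for i, (m, v) in enumerate(models):
        for cls in classes:
            m2, v2 = models[cls[0]]
            if gaussian_kl(m, v, m2, v2) + gaussian_kl(m2, v2, m, v) < threshold:
                cls.append(i)
                break
        else:
            classes.append([i])
    return classes

# Four toy signal types summarized by (mean, variance) of an acoustic feature.
types = [(0.0, 1.0), (0.1, 1.1), (3.0, 0.5), (3.2, 0.6)]
print(merge_types(types, threshold=0.5))  # expected grouping: [[0, 1], [2, 3]]
```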

An Information-theoretic Approach for Value-Based Weighting in Naive Bayesian Learning (나이브 베이시안 학습에서 정보이론 기반의 속성값 가중치 계산방법)

  • Lee, Chang-Hwan
    • Journal of KIISE:Databases / v.37 no.6 / pp.285-291 / 2010
  • In this paper, we propose a new paradigm of weighting methods for naive Bayesian learning. We propose a more fine-grained weighting method, called the value weighting method, in the context of naive Bayesian learning. While current weighting methods assign a weight to each attribute, we assign a weight to each attribute value. We develop new methods, using the Kullback-Leibler function, for both value weighting and feature weighting in the context of naive Bayesian learning. The performance of the proposed methods has been compared with the attribute weighting method and general naive Bayesian learning. The proposed method shows better performance in most cases.
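
Value weighting assigns a weight to each attribute value rather than to each attribute. One natural information-theoretic choice, sketched below, is to weight a value v by the KL divergence between the class posterior given v and the class prior; the abstract does not spell out the exact Kullback-Leibler function used, so this particular form is an assumption.

```python
import numpy as np

def value_weight(class_prior, class_given_value):
    """Weight of an attribute value: KL( P(C | value) || P(C) ).
    A large weight means observing the value is highly informative about
    the class; the weight can then scale that value's log-likelihood term
    in the naive Bayesian score."""
    p = np.asarray(class_given_value, dtype=float)
    q = np.asarray(class_prior, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Two-class problem: prior and posteriors for two values of one attribute.
prior = [0.5, 0.5]
print(value_weight(prior, [0.9, 0.1]))    # informative value -> larger weight
print(value_weight(prior, [0.55, 0.45]))  # weakly informative value -> small weight
```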

CONDITIONAL LARGE DEVIATIONS FOR 1-LATTICE DISTRIBUTIONS

  • Kim, Gie-Whan
    • The Pure and Applied Mathematics / v.4 no.1 / pp.97-104 / 1997
  • The large deviations theorem of Cramér is extended to conditional probabilities in the following sense. Consider a random sample of pairs of random vectors and the sample means of each of the pairs. The probability that the first falls outside a certain convex set, given that the second is fixed, is shown to decrease with the sample size at an exponential rate which depends on the Kullback-Leibler distance between two distributions in an associated exponential family of distributions. Examples are given which include a method of computing the Bahadur exact slope for tests of certain composite hypotheses in exponential families.
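
For context, the exponential rate in Cramér-type large deviation results can be written through a Kullback-Leibler distance. A standard unconditional statement, reproduced here only to fix notation (the conditional refinement in the paper, where the second sample mean is held fixed, is more delicate), reads roughly as follows for i.i.d. observations with common law $P$ and a suitably regular convex set $A$ not containing $\mathbb{E}_P[X]$:

```latex
\lim_{n\to\infty} \frac{1}{n}\,\log \Pr\!\big(\bar{X}_n \in A\big)
  \;=\; -\inf_{Q \,:\, \mathbb{E}_Q[X] \in A} D(Q \,\|\, P),
\qquad
D(Q \,\|\, P) \;=\; \int \log\frac{dQ}{dP}\, dQ .
```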

On the Bias of Bootstrap Model Selection Criteria

  • Kee-Won Lee;Songyong Sim
    • Journal of the Korean Statistical Society / v.25 no.2 / pp.195-203 / 1996
  • A bootstrap method is used to correct the apparent downward bias of a naive plug-in bootstrap model selection criterion, and the corrected criterion is shown to enjoy a high degree of accuracy. A comparison of the bootstrap method with the asymptotic method is made through an illustrative example.
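
As a generic illustration of the bootstrap bias correction idea behind this entry, the sketch below corrects a plug-in statistic by subtracting its estimated bootstrap bias; the statistic (a plug-in variance) and the number of resamples are illustrative assumptions and not the model selection criterion studied in the paper.

```python
import numpy as np

def bootstrap_bias_corrected(statistic, data, n_boot=500, seed=0):
    """Generic bootstrap bias correction of a plug-in statistic T:
    corrected = T(data) - estimated_bias = 2 * T(data) - mean_b T(data*_b)."""
    rng = np.random.default_rng(seed)
    t_hat = statistic(data)
    n = len(data)
    boot = [statistic(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return 2.0 * t_hat - float(np.mean(boot))

# Toy example: the plug-in variance is biased downward; the bootstrap
# correction pushes it back toward the unbiased value.
rng = np.random.default_rng(1)
x = rng.normal(size=30)
print(np.var(x), bootstrap_bias_corrected(np.var, x))
```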

A DoS Detection Method Based on Composition Self-Similarity

  • Jian-Qi, Zhu;Feng, Fu;Kim, Chong-Kwon;Ke-Xin, Yin;Yan-Heng, Liu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.5 / pp.1463-1478 / 2012
  • Based on the theory of local-world networks, the composition self-similarity (CSS) of network traffic is presented for the first time in this paper for the study of DoS detection. We propose the concept of the composition distribution graph and design the related operations. The $(R/S)^d$ algorithm is designed for calculating the Hurst parameter. Based on the composition distribution graph and the Kullback-Leibler (KL) divergence, we propose the composition self-similarity anomaly detection (CSSD) method for the detection of DoS attacks. We evaluate the effectiveness of the proposed method: compared to other entropy-based anomaly detection methods, our method is more accurate and has higher sensitivity in the detection of DoS attacks.
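
The detection step compares the composition distribution of the current traffic window with a learned baseline via KL divergence and flags an anomaly when the divergence is too large. The sketch below shows only that thresholding step; the window construction, the baseline profile, and the threshold value are assumptions, and the paper's composition distribution graph and $(R/S)^d$ Hurst estimation are not reproduced.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL divergence D(p || q) with light smoothing to avoid log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def is_dos_window(window_counts, baseline_counts, threshold=0.2):
    """Flag a traffic window whose composition (e.g., packet-type counts)
    diverges from the baseline by more than `threshold`."""
    return kl_divergence(window_counts, baseline_counts) > threshold

# Baseline composition vs. a flood-like window dominated by a single type.
baseline = [500, 300, 150, 50]
normal   = [480, 320, 140, 60]
attack   = [3000, 100, 50, 10]
print(is_dos_window(normal, baseline), is_dos_window(attack, baseline))
```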

Image Restoration Algorithms by using Fisher Information (피셔 인포메이션을 이용한 영상 복원 알고리즘)

  • 오춘석;이현민;신승중;유영기
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.89-97 / 2004
  • An object that reflects or emits light is captured by an imaging system as a distorted image due to various distortions. Image restoration is the process of estimating the original object by removing the distortion. There are two categories of image restoration methods: deterministic methods and stochastic methods. In this paper, image restoration using Minimum Fisher Information (MFI), derived from B. Roy Frieden, is proposed. In MFI restoration, experimental results obtained for different values of the noise control parameter were investigated, and cross entropy (Kullback-Leibler entropy) was used as a standard measure of restoration accuracy. It is confirmed that restoration results using MFI show varying roughness according to the noise control parameter.
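
Cross entropy (Kullback-Leibler entropy) serves here as the measure of restoration accuracy. A minimal sketch is given below, treating the normalized pixel intensities of the reference object and of a restored image as discrete distributions and computing the KL divergence between them; this normalization is an assumption, since the abstract does not define the measure precisely.

```python
import numpy as np

def image_kl_entropy(restored, original, eps=1e-12):
    """KL divergence between two images, treating their normalized
    intensities as distributions over pixels; lower is a better restoration."""
    p = original.astype(float).ravel() + eps
    q = restored.astype(float).ravel() + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy 4x4 object and two candidate restorations: near-perfect vs. flat.
rng = np.random.default_rng(0)
obj = rng.random((4, 4))
good = obj + 0.01 * rng.random((4, 4))
flat = np.full((4, 4), obj.mean())
print(image_kl_entropy(good, obj), image_kl_entropy(flat, obj))
```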