• Title/Summary/Keyword: Kullback-Leibler divergence


An improved fuzzy c-means method based on multivariate skew-normal distribution for brain MR image segmentation

  • Guiyuan Zhu;Shengyang Liao;Tianming Zhan;Yunjie Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.8
    • /
    • pp.2082-2102
    • /
    • 2024
  • Accurate segmentation of magnetic resonance (MR) images is crucial for providing doctors with effective quantitative information for diagnosis. However, the presence of weak boundaries, intensity inhomogeneity, and noise in the images poses challenges for segmentation models to achieve optimal results. While deep learning models can offer relatively accurate results, the scarcity of labeled medical imaging data increases the risk of overfitting. To tackle this issue, this paper proposes a novel fuzzy c-means (FCM) model that integrates a deep learning approach. To address the limited accuracy of traditional FCM models, which employ Euclidean distance as a distance measure, we introduce a measurement function based on the skewed normal distribution. This function enables us to capture more precise information about the distribution of the image. Additionally, we construct a regularization term based on the Kullback-Leibler (KL) divergence of high-confidence deep learning results. This regularization term helps enhance the final segmentation accuracy of the model. Moreover, we incorporate orthogonal basis functions to estimate the bias field and integrate it into the improved FCM method. This integration allows our method to simultaneously segment the image and estimate the bias field. The experimental results on both simulated and real brain MR images demonstrate the robustness of our method, highlighting its superiority over other advanced segmentation algorithms.
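
The KL regularization term described in this abstract penalizes disagreement between the fuzzy membership matrix and high-confidence network predictions. Below is a minimal sketch of such a term, assuming both are stored as per-pixel class-probability arrays; the function name and array shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def kl_regularizer(memberships, reference, eps=1e-12):
    """Kullback-Leibler divergence between FCM membership rows and
    high-confidence reference probabilities (e.g., network outputs).

    memberships, reference : arrays of shape (n_pixels, n_classes),
    each row summing to one.
    """
    u = np.clip(memberships, eps, 1.0)
    p = np.clip(reference, eps, 1.0)
    return float(np.sum(u * np.log(u / p)))
```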

Variable Selection with Log-Density in Logistic Regression Model (로지스틱회귀모형에서 로그-밀도비를 이용한 변수의 선택)

  • Kahng, Myung-Wook;Shin, Eun-Young
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.1
    • /
    • pp.1-11
    • /
    • 2012
  • We present methods to study the log-density ratio of the conditional densities of the predictors given the response variable in the logistic regression model. This allows us to select which predictors are needed and how they should be included in the model. If the conditional distributions are skewed, the distributions can be considered as gamma distributions. A simulation study shows that the linear and log terms are required in general. If the conditional distributions of x|y for the two groups overlap significantly, we need both the linear and log terms; however, only the linear or log term is needed in the model if they are well separated.
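
The claim that linear and log terms suffice follows directly from the gamma assumption: writing the two conditional densities as gamma distributions with shape and rate parameters (α₁, β₁) and (α₀, β₀) for the two response groups, the log-density ratio that enters the logistic model reduces to

```latex
\log\frac{f_1(x)}{f_0(x)}
  = (\alpha_1-\alpha_0)\log x - (\beta_1-\beta_0)\,x
    + \log\frac{\beta_1^{\alpha_1}\,\Gamma(\alpha_0)}{\beta_0^{\alpha_0}\,\Gamma(\alpha_1)},
```

so only x and log x appear as predictors, with the last term absorbed into the intercept. This standard derivation is included only to illustrate the point made in the abstract.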

A Bayesian cure rate model with dispersion induced by discrete frailty

  • Cancho, Vicente G.;Zavaleta, Katherine E.C.;Macera, Marcia A.C.;Suzuki, Adriano K.;Louzada, Francisco
    • Communications for Statistical Applications and Methods
    • /
    • v.25 no.5
    • /
    • pp.471-488
    • /
    • 2018
  • In this paper, we propose extending proportional hazards frailty models to allow a discrete distribution for the frailty variable. Having zero frailty can be interpreted as being immune or cured. Thus, we develop a new survival model induced by discrete frailty with a zero-inflated power series distribution, which can account for overdispersion. This proposal also allows for a realistic description of non-risk individuals, since individuals cured due to intrinsic factors (immunes) are modeled by a deterministic fraction of zero-risk while those cured due to an intervention are modeled by a random fraction. We put the proposed model in a Bayesian framework and use a Markov chain Monte Carlo algorithm for the computation of the posterior distribution. A simulation study is conducted to assess the proposed model and the computation algorithm. We also discuss model selection based on pseudo-Bayes factors and develop case influence diagnostics for the joint posterior distribution through $\psi$-divergence measures. The motivating cutaneous melanoma data is analyzed for illustration purposes.

Factor Graph-based Multipath-assisted Indoor Passive Localization with Inaccurate Receiver

  • Hao, Ganlin;Wu, Nan;Xiong, Yifeng;Wang, Hua;Kuang, Jingming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.2
    • /
    • pp.703-722
    • /
    • 2016
  • Passive wireless devices have increasing civilian and military applications, especially in scenarios with wearable devices and the Internet of Things. In this paper, we study indoor localization of a target equipped with a radio-frequency identification (RFID) device in ultra-wideband (UWB) wireless networks. With a known room layout, deterministic multipath components, including the line-of-sight (LOS) signal and the signals reflected via multipath propagation, are employed to locate the target with one transmitter and a single inaccurate receiver. A factor graph corresponding to the joint posterior distribution of the target and receiver positions is constructed. However, due to the mixed distribution in the factor node of the likelihood function, the expressions of the messages are intractable when belief propagation is applied directly on the factor graph. To this end, we approximate the messages by Gaussian distributions by minimizing the Kullback-Leibler divergence (KLD) between the true and approximate messages. Accordingly, a parametric message passing algorithm for indoor passive localization is derived, in which only the means and variances of the Gaussian distributions have to be updated. The performance of the proposed algorithm and the impact of critical parameters are evaluated by Monte Carlo simulations, which demonstrate superior localization accuracy and robustness to the statistics of multipath channels.
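
The Gaussian approximation step has a simple characterization: among all Gaussians q, the minimizer of KL(p‖q) is the one that matches the mean and variance of p. A minimal sketch of that projection for a scalar message represented by weighted samples follows; the sample representation is an assumption made here for illustration, whereas the paper derives parametric message updates.

```python
import numpy as np

def project_to_gaussian(samples, weights=None):
    """Moment matching: returns the (mean, variance) of the Gaussian q
    that minimizes KL(p || q), where p is represented by weighted samples."""
    samples = np.asarray(samples, dtype=float)
    mean = np.average(samples, weights=weights)
    var = np.average((samples - mean) ** 2, weights=weights)
    return mean, var
```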

Distributed Target Localization with Inaccurate Collaborative Sensors in Multipath Environments

  • Feng, Yuan;Yan, Qinsiwei;Tseng, Po-Hsuan;Hao, Ganlin;Wu, Nan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.5
    • /
    • pp.2299-2318
    • /
    • 2019
  • Location-aware networks are of great importance for both civilian life and military applications. Methods based on line-of-sight (LOS) measurements suffer severe performance loss in harsh environments such as indoor scenarios, where sensors can receive both LOS and non-line-of-sight (NLOS) measurements. In this paper, we propose a data association (DA) process based on the expectation maximization (EM) algorithm, which enables us to exploit multipath components (MPCs). By treating the mapping between measurements and scatterers as a latent variable, the coefficients of the Gaussian mixture model are estimated. Moreover, considering the misalignment of sensor positions, we propose a space-alternating generalized expectation maximization (SAGE)-based algorithm to jointly update the target location and the sensor position information. A two-dimensional (2-D) circularly symmetric Gaussian distribution is employed to approximate the probability density function of the sensor's position uncertainty via minimization of the Kullback-Leibler divergence (KLD), which enables us to calculate the expectation step with low computational complexity. Furthermore, a distributed implementation is derived based on the average consensus method to improve the scalability of the proposed algorithm. Simulation results demonstrate that the proposed centralized and distributed algorithms perform close to the Monte Carlo-based method with much lower communication overhead and computational complexity.
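
The distributed implementation mentioned at the end relies on average consensus, in which each sensor repeatedly mixes its local statistic with those of its neighbors so that every node converges to the network-wide average without a fusion center. A generic sketch of that primitive follows; the step size, graph format, and function name are illustrative and not taken from the paper.

```python
import numpy as np

def average_consensus(local_values, neighbors, step=0.2, iters=50):
    """Synchronous average-consensus iteration.

    local_values : initial local statistics, shape (n_nodes,)
    neighbors    : dict mapping node index -> list of neighbor indices
    Each node moves toward its neighbors' values; with a connected graph and
    a small enough step, all entries converge to the average of local_values.
    """
    x = np.asarray(local_values, dtype=float).copy()
    for _ in range(iters):
        x_next = x.copy()
        for i, nbrs in neighbors.items():
            x_next[i] = x[i] + step * sum(x[j] - x[i] for j in nbrs)
        x = x_next
    return x
```

For example, `average_consensus([1.0, 2.0, 6.0], {0: [1], 1: [0, 2], 2: [1]})` drives all three nodes toward 3.0.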

Satellite Building Segmentation using Deformable Convolution and Knowledge Distillation (변형 가능한 컨볼루션 네트워크와 지식증류 기반 위성 영상 빌딩 분할)

  • Choi, Keunhoon;Lee, Eungbean;Choi, Byungin;Lee, Tae-Young;Ahn, JongSik;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.7
    • /
    • pp.895-902
    • /
    • 2022
  • Building segmentation using satellite imagery such as EO (Electro-Optical) and SAR (Synthetic-Aperture Radar) images is widely used because of its many applications. EO images have the advantage of color information, and they are noise-free. In contrast, SAR images can identify physical characteristics and geometrical information that EO images cannot capture. This paper proposes a learning framework for efficient building segmentation that consists of teacher-student-based privileged knowledge distillation and a deformable convolution block. The teacher network utilizes EO and SAR images simultaneously to produce richer features and provides them to the student network, while the student network uses only EO images. To do this, we present objective functions that consist of a Kullback-Leibler divergence loss and a knowledge distillation loss. Furthermore, we introduce deformable convolution to avoid pixel-level noise and to efficiently capture hard samples such as small and thin buildings at the global level. Experimental results show that our method outperforms other methods and efficiently captures complex samples such as small or narrow buildings. Moreover, our method can be applied to various other methods.
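
As a rough illustration of a KL-based distillation loss of the kind described here, the sketch below softens teacher and student logits with a temperature and averages the per-pixel divergence. The temperature value, divergence direction, and function names are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_kl_loss(student_logits, teacher_logits, T=2.0, eps=1e-12):
    """KL(teacher || student) over per-pixel class probabilities,
    softened by temperature T and averaged over all pixels."""
    p = softmax(teacher_logits, T)   # teacher trained on EO + SAR
    q = softmax(student_logits, T)   # student sees EO only
    return float(np.mean(np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)))
```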

Image Processing of Pseudo-rate-distortion Function Based on MSSSIM and KL-Divergence, Using Multiple Video Processing Filters for Video Compression (MSSSIM 및 쿨백-라이블러 발산 기반 의사 율-왜곡 평가 함수와 복수개의 영상처리 필터를 이용한 동영상 전처리 방법)

  • Seok, Jinwuk;Cho, Seunghyun;Kim, Hui Yong;Choi, Jin Soo
    • Journal of Broadcast Engineering
    • /
    • v.23 no.6
    • /
    • pp.768-779
    • /
    • 2018
  • In this paper, we propose a novel video quality function based on MSSSIM that selects an appropriate video processing filter for each pixel block in a picture frame according to a mathematical selection law, so as to maintain video quality while reducing the bitrate of the compressed video. From the viewpoint of video compression, since the video quality and bitrate characteristics differ from frame to frame and from area to area within a frame, a single filter cannot satisfy the goal of increasing video quality while decreasing bitrate. Consequently, to maintain subjective video quality despite a decreasing bitrate, we propose a methodology that uses MSSSIM as the measure of subjective video quality, KL-divergence as the measure of bitrate, and a method for combining the two measurements. Moreover, using the proposed combined measurement with multiple image filters of mutually different properties as video pre-processing filters, we verify that it is possible to compress video while maintaining video quality and decreasing the bitrate.
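
To make the per-block selection law concrete, the sketch below scores each candidate pre-processing filter by combining an MSSSIM quality term with a KL term and picks the highest-scoring filter for the block. The histogram-based KL proxy, the sign of the combination, and the weight lam are assumptions made only for illustration; they are not the paper's actual pseudo rate-distortion function.

```python
import numpy as np

def histogram_kl(original_block, filtered_block, bins=64, eps=1e-12):
    """KL divergence between intensity histograms of a block before and
    after filtering, used here as a rough stand-in for a bitrate-related term."""
    p, _ = np.histogram(original_block, bins=bins, range=(0, 255))
    q, _ = np.histogram(filtered_block, bins=bins, range=(0, 255))
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

def select_filter(block, filters, msssim_fn, lam=0.1):
    """Pick the filter that maximizes quality (MSSSIM) minus a weighted KL term."""
    scores = [msssim_fn(block, f(block)) - lam * histogram_kl(block, f(block))
              for f in filters]
    return int(np.argmax(scores))
```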

The Strength of the Relationship between Semantic Similarity and the Subcategorization Frames of the English Verbs: a Stochastic Test based on the ICE-GB and WordNet (영어 동사의 의미적 유사도와 논항 선택 사이의 연관성 : ICE-GB와 WordNet을 이용한 통계적 검증)

  • Song, Sang-Houn;Choe, Jae-Woong
    • Language and Information
    • /
    • v.14 no.1
    • /
    • pp.113-144
    • /
    • 2010
  • The primary goal of this paper is to find a feasible way to answer the question: Does the similarity in meaning between verbs relate to the similarity in their subcategorization? In order to answer this question in a rather concrete way on the basis of a large set of English verbs, this study made use of various language resources, tools, and statistical methodologies. We first compiled a list of 678 verbs that were selected from the most and second most frequent word lists of the Collins COBUILD English Dictionary and that also appeared in WordNet 3.0. We calculated similarity measures between all pairs of the words based on the 'jcn' algorithm (Jiang and Conrath, 1997) implemented in the WordNet::Similarity module (Pedersen, Patwardhan, and Michelizzi, 2004). The clustering process followed: first building similarity matrices out of the similarity measure values, next drawing dendrograms on the basis of the matrices, and finally obtaining 177 meaningful clusters (covering 437 verbs) that passed a threshold set by z-score. The subcategorization frames and their frequency values were taken from the ICE-GB. In order to calculate the Selectional Preference Strength (SPS) of the relationship between a verb and its subcategorizations, we relied on the Kullback-Leibler divergence model (Resnik, 1996). The SPS values of the verbs in the same cluster were compared with each other, which served to give statistical values indicating how much the SPS values overlap between the subcategorization frames of the verbs. Our final analysis shows that the degree of overlap, or the relationship between semantic similarity and the subcategorization frames of English verbs, is spread evenly from 'very strongly related' to 'very weakly related'. Some semantically similar verbs share a lot in terms of their subcategorization frames, some indicate an average degree of strength in the relationship, while others, though still semantically similar, tend to share little in their subcategorization frames.
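
Resnik's selectional preference strength is itself a Kullback-Leibler divergence: the divergence between the frame distribution conditioned on a verb and the corpus-wide prior over frames. A minimal sketch computed from raw frequency counts follows (no smoothing, which the authors may well have applied to the ICE-GB counts):

```python
import math

def selectional_preference_strength(verb_frame_counts, corpus_frame_counts):
    """Resnik-style SPS: KL divergence between P(frame | verb) and the
    corpus-wide prior P(frame), both estimated from raw counts."""
    v_total = sum(verb_frame_counts.values())
    c_total = sum(corpus_frame_counts.values())
    sps = 0.0
    for frame, count in verb_frame_counts.items():
        p_frame_given_verb = count / v_total
        p_frame = corpus_frame_counts[frame] / c_total
        sps += p_frame_given_verb * math.log(p_frame_given_verb / p_frame)
    return sps
```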

Multi Agents-Multi Tasks Assignment Problem using Hybrid Cross-Entropy Algorithm (혼합 교차-엔트로피 알고리즘을 활용한 다수 에이전트-다수 작업 할당 문제)

  • Kim, Gwang
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.27 no.4
    • /
    • pp.37-45
    • /
    • 2022
  • In this paper, a multi-agent, multi-task assignment problem, a representative combinatorial optimization problem, is presented. The objective of the problem is to determine the coordinated agent-task assignment that maximizes the sum of the achievement rates of the tasks. The achievement rate is represented as a concave, increasing function of the number of agents assigned to the task. The problem is an NP-hard problem with a non-linear objective function. To solve the assignment problem, we propose a hybrid cross-entropy algorithm as an effective and efficient solution methodology. The general cross-entropy algorithm can have drawbacks (e.g., slow parameter updates and premature convergence) depending on the problem situation. Compared to the general cross-entropy algorithm, the proposed method is designed to be less prone to these two drawbacks. We show through numerical experiments that the performance of the proposed method is better than that of the general cross-entropy algorithm.
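
For reference, the general (non-hybrid) cross-entropy method that the paper improves on can be sketched as below. The sample size, elite fraction, and smoothing parameter are illustrative defaults, and the authors' hybrid variant modifies this baseline to counter slow parameter updates and premature convergence.

```python
import numpy as np

def cross_entropy_assignment(score, n_agents, n_tasks, n_samples=200,
                             elite_frac=0.1, alpha=0.7, iters=50, seed=0):
    """Generic cross-entropy search over agent-to-task assignments.

    score(assignment) returns the sum of task achievement rates for an
    assignment vector (one task index per agent).  The sampling distribution
    is one categorical over tasks per agent, updated toward the elite samples
    with smoothing parameter alpha."""
    rng = np.random.default_rng(seed)
    probs = np.full((n_agents, n_tasks), 1.0 / n_tasks)
    best, best_val = None, -np.inf
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        samples = np.array([[rng.choice(n_tasks, p=probs[a])
                             for a in range(n_agents)]
                            for _ in range(n_samples)])
        vals = np.array([score(s) for s in samples])
        if vals.max() > best_val:
            best_val, best = vals.max(), samples[vals.argmax()].copy()
        elite = samples[np.argsort(vals)[-n_elite:]]
        freq = np.stack([(elite == t).mean(axis=0)
                         for t in range(n_tasks)], axis=1)
        probs = alpha * freq + (1 - alpha) * probs  # smoothed parameter update
    return best, best_val
```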

Experimental performance analysis on the non-negative matrix factorization-based continuous wave reverberation suppression according to hyperparameters (비음수행렬분해 기반 연속파 잔향 제거 기법의 초매개변숫값에 따른 실험적 성능 분석)

  • Yongon Lee; Seokjin Lee;Kiman Kim;Geunhwan Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.1
    • /
    • pp.32-41
    • /
    • 2023
  • Recently, studies on reverberation suppression using Non-negative Matrix Factorization (NMF) have been actively conducted. The NMF method uses a cost function based on the Kullback-Leibler divergence for optimization, to which constraints such as temporal continuity, pulse length, and the energy ratio between reverberation and target are added. The strength of these constraints is controlled by hyperparameters. Therefore, in order to suppress reverberation effectively, the hyperparameters need to be optimized; however, related studies have so far been insufficient. In this paper, the reverberation suppression performance according to the three hyperparameters of the NMF was analyzed using sea experimental data. The analysis showed that performance was better when the hyperparameter values for temporal continuity and pulse length were high and the value for the energy ratio between reverberation and target was less than 0.4, but it was confirmed that there was variability depending on the ocean environment. The analysis results in this paper are expected to serve as a useful guideline for planning precise experiments to optimize the hyperparameters of NMF in the future.
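
The KL-based cost function referred to here is the one minimized by standard multiplicative-update NMF; a baseline sketch without the paper's constraint terms (temporal continuity, pulse length, and the reverberation-to-target energy ratio, whose weights are the analyzed hyperparameters) is given below for orientation.

```python
import numpy as np

def kl_nmf(V, rank, iters=200, eps=1e-12, seed=0):
    """Baseline NMF with multiplicative updates that minimize the
    (generalized) Kullback-Leibler divergence D(V || WH)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    ones = np.ones_like(V)
    for _ in range(iters):
        W *= ((V / (W @ H + eps)) @ H.T) / (ones @ H.T + eps)
        H *= (W.T @ (V / (W @ H + eps))) / (W.T @ ones + eps)
    return W, H
```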