• Title/Summary/Keyword: Kullback-Leibler Information

Search results: 60

Region-based Multi-level Thresholding for Color Image Segmentation

  • 오준택;김욱현
    • 대한전자공학회논문지SP / Vol. 43, No. 6 / pp.20-27 / 2006
  • Multi-level thresholding is widely used as an image segmentation technique, but most existing methods are either not suitable for direct use in application domains or do not extend to the full segmentation stage. In this paper, we propose region-based multi-level thresholding as a thresholding scheme for image segmentation. First, the EWFCM (Entropy-based Weighted Fuzzy C-Means) algorithm is applied to each color component of the image to classify the pixels into two clusters and generate a code image. The EWFCM algorithm is an improved FCM algorithm that incorporates spatial information about the pixels and removes the noise present in the image. A better segmentation result can then be obtained by reducing the number of clusters in the code image; this reduction is performed by reclassifying the regions belonging to one cluster based on their similarity to the remaining clusters. However, because many regions still remain in the image, region merging by a Bayesian algorithm based on the Kullback-Leibler distance between regions is performed as a post-processing step. Experimental results show that the proposed region-based multi-level thresholding gives better segmentation results than existing methods and than pixel- or cluster-based multi-level thresholding, and that the Bayesian post-processing step further improves the results.
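
For orientation, here is a minimal sketch of the kind of Kullback-Leibler distance computation that such region merging relies on: a symmetrized KL distance between the color histograms of two regions, plus a greedy merge loop. The merge threshold and the greedy strategy are assumptions of this sketch, not the paper's Bayesian merging rule.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for two discrete (histogram) distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def symmetric_kl(p, q):
    """Symmetrized KL distance used here as a region dissimilarity."""
    return kl_divergence(p, q) + kl_divergence(q, p)

def merge_regions(histograms, threshold=0.5):
    """Greedily merge the pair of regions with the smallest KL distance
    until no pair falls below the threshold (illustrative only)."""
    regions = [np.asarray(h, dtype=float) for h in histograms]
    merged = True
    while merged and len(regions) > 1:
        merged = False
        best = (0, 1, np.inf)
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                d = symmetric_kl(regions[i], regions[j])
                if d < best[2]:
                    best = (i, j, d)
        i, j, d = best
        if d < threshold:
            regions[i] = regions[i] + regions[j]  # pool the two histograms
            del regions[j]
            merged = True
    return regions
```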

Factor Graph-based Multipath-assisted Indoor Passive Localization with Inaccurate Receiver

  • Hao, Ganlin;Wu, Nan;Xiong, Yifeng;Wang, Hua;Kuang, Jingming
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 2 / pp.703-722 / 2016
  • Passive wireless devices have increasing civilian and military applications, especially in scenarios involving wearable devices and the Internet of Things. In this paper, we study indoor localization of a target equipped with a radio-frequency identification (RFID) device in ultra-wideband (UWB) wireless networks. With a known room layout, deterministic multipath components, including the line-of-sight (LOS) signal and the signals reflected via multipath propagation, are employed to locate the target with one transmitter and a single inaccurate receiver. A factor graph corresponding to the joint posterior distribution of the target and receiver positions is constructed. However, due to the mixed distribution in the factor node of the likelihood function, the message expressions are intractable when belief propagation is applied directly on the factor graph. To this end, we approximate the messages by Gaussian distributions via minimizing the Kullback-Leibler divergence (KLD) between them. Accordingly, a parametric message passing algorithm for indoor passive localization is derived, in which only the means and variances of the Gaussian distributions have to be updated. The performance of the proposed algorithm and the impact of critical parameters are evaluated by Monte Carlo simulations, which demonstrate superior localization accuracy and robustness to the statistics of multipath channels.
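
The Gaussian approximation step described above amounts to moment matching: the Gaussian q that minimizes KL(p || q) has the same mean and variance as p. Below is a minimal sketch of that projection for a message represented by weighted samples; the sampled representation is an assumption for illustration, not the paper's message parameterization.

```python
import numpy as np

def gaussian_projection(samples, weights=None):
    """Project an arbitrary (sampled) message onto a Gaussian by moment
    matching: the N(mu, var) minimizing KL(p || q) matches p's mean and
    variance."""
    samples = np.asarray(samples, dtype=float)
    if weights is None:
        weights = np.ones_like(samples)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mu = np.sum(w * samples)                 # weighted mean
    var = np.sum(w * (samples - mu) ** 2)    # weighted variance
    return mu, var
```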

Distributed Target Localization with Inaccurate Collaborative Sensors in Multipath Environments

  • Feng, Yuan;Yan, Qinsiwei;Tseng, Po-Hsuan;Hao, Ganlin;Wu, Nan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 5 / pp.2299-2318 / 2019
  • Location-aware networks are of great importance for both civilian life and military applications. Methods based on line-of-sight (LOS) measurements suffer severe performance loss in harsh environments such as indoor scenarios, where sensors can receive both LOS and non-line-of-sight (NLOS) measurements. In this paper, we propose a data association (DA) process based on the expectation maximization (EM) algorithm, which enables us to exploit multipath components (MPCs). By treating the mapping relationship between the measurements and scatterers as a latent variable, the coefficients of a Gaussian mixture model are estimated. Moreover, considering the misalignment of sensor positions, we propose a space-alternating generalized expectation maximization (SAGE)-based algorithm to jointly update the target location and the sensor position information. A two-dimensional (2-D) circularly symmetric Gaussian distribution is employed to approximate the probability density function of the sensor's position uncertainty via minimization of the Kullback-Leibler divergence (KLD), which enables us to compute the expectation step with low computational complexity. Furthermore, a distributed implementation is derived based on the average consensus method to improve the scalability of the proposed algorithm. Simulation results demonstrate that the proposed centralized and distributed algorithms perform close to the Monte Carlo-based method with much lower communication overhead and computational complexity.
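
A minimal sketch of fitting a 2-D circularly symmetric Gaussian to a weighted sample representation of the sensor-position uncertainty, using the standard moment-matching consequence of KL(p || q) minimization over this family; the sampled representation is an assumption of this sketch, not the paper's derivation.

```python
import numpy as np

def isotropic_gaussian_fit(points, weights=None):
    """Fit N(mu, sigma^2 * I) in 2-D to weighted position samples:
    the weighted mean and the average of the two coordinate variances."""
    pts = np.asarray(points, dtype=float)          # shape (n, 2)
    if weights is None:
        weights = np.ones(len(pts))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mu = w @ pts                                   # weighted mean, shape (2,)
    diff = pts - mu
    var_xy = w @ (diff ** 2)                       # per-coordinate variances
    sigma2 = var_xy.mean()                         # isotropic variance
    return mu, sigma2
```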

Discrimination of Out-of-Control Condition Using AIC in (x, s) Control Chart

  • Takemoto, Yasuhiko;Arizono, Ikuo;Satoh, Takanori
    • Industrial Engineering and Management Systems / Vol. 12, No. 2 / pp.112-117 / 2013
  • The $\overline{x}$ control chart for the process mean and either the R or s control chart for the process dispersion have traditionally been used together to monitor manufacturing processes. However, it has been pointed out that this procedure makes it difficult to visually capture the behavior of the process condition, that is, the relationship between a shift in the process mean and a change in the process dispersion, because each characteristic is monitored on a separate control chart in parallel. The ($\overline{x}$, s) control chart has therefore been proposed to enable process managers to monitor changes in the process mean, the process dispersion, or both. At the same time, identifying which process parameters are responsible for an out-of-control condition is one of the important issues in process management, and it is especially important in the ($\overline{x}$, s) control chart, where both parameters are monitored on a single plane. The previous literature has proposed a multiple decision method based on statistical hypothesis tests to identify the parameters responsible for an out-of-control condition. In this paper, we propose a way to identify the parameters responsible for an out-of-control condition using an information criterion. The effectiveness of the proposed method is then shown through numerical experiments.
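
As a rough illustration of information-criterion-based discrimination, the sketch below scores four candidate explanations of an out-of-control signal with AIC under a normal model. The candidate set, the estimators, and the function names are assumptions of this sketch, not the authors' exact procedure.

```python
import numpy as np

def aic_normal(x, mu, sigma2, n_params):
    """AIC of a normal model with the given mean and variance;
    n_params counts the parameters estimated from the data."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    loglik = -0.5 * n * np.log(2.0 * np.pi * sigma2) \
             - np.sum((x - mu) ** 2) / (2.0 * sigma2)
    return -2.0 * loglik + 2.0 * n_params

def diagnose_out_of_control(sample, mu0, sigma0_sq):
    """Compare candidate explanations of an out-of-control signal by AIC:
    no change, a mean shift, a dispersion change, or both (illustrative)."""
    x = np.asarray(sample, dtype=float)
    mu_hat = x.mean()
    var_given_mu0 = np.mean((x - mu0) ** 2)  # ML variance with mean fixed at mu0
    var_hat = x.var()                        # ML variance with mean also estimated
    candidates = {
        "in control":        aic_normal(x, mu0,    sigma0_sq,     0),
        "mean shift":        aic_normal(x, mu_hat, sigma0_sq,     1),
        "dispersion change": aic_normal(x, mu0,    var_given_mu0, 1),
        "both changed":      aic_normal(x, mu_hat, var_hat,       2),
    }
    return min(candidates, key=candidates.get), candidates
```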

Automatic Detection of Texture-defects using Texture-periodicity and Jensen-Shannon Divergence

  • Asha, V.;Bhajantri, N.U.;Nagabhushan, P.
    • Journal of Information Processing Systems / Vol. 8, No. 2 / pp.359-374 / 2012
  • In this paper, we propose a new machine vision algorithm for automatic defect detection on patterned textures with the help of texture periodicity and the Jensen-Shannon Divergence, which is a symmetrized and smoothed version of the Kullback-Leibler Divergence. Input defective images are split into several blocks of the same size as the periodic unit of the image. Based on the histograms of the periodic blocks, Jensen-Shannon Divergence measures are calculated for each periodic block with respect to itself and all other periodic blocks, and a dissimilarity matrix is obtained. This dissimilarity matrix is used to obtain a matrix of true metrics, which is then subjected to Ward's hierarchical clustering to automatically identify defective and defect-free blocks. Results from experiments on real fabric images with defects, belonging to three major wallpaper groups (pmm, p2, and p4m), show that the proposed method is robust in finding fabric defects with very high success rates and without any human intervention.
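
A minimal sketch of the Jensen-Shannon Divergence between block histograms and the resulting dissimilarity matrix; the smoothing constant is an assumption. The matrix could then be handed to an off-the-shelf hierarchical clustering routine such as Ward's method, as the abstract describes.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence: a symmetrized, smoothed Kullback-Leibler
    divergence between two histograms."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def dissimilarity_matrix(block_histograms):
    """Pairwise JSD between the histograms of all periodic blocks."""
    n = len(block_histograms)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d[i, j] = js_divergence(block_histograms[i], block_histograms[j])
    return d
```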

Secure and Robust Clustering for Quantized Target Tracking in Wireless Sensor Networks

  • Mansouri, Majdi;Khoukhi, Lyes;Nounou, Hazem;Nounou, Mohamed
    • Journal of Communications and Networks / Vol. 15, No. 2 / pp.164-172 / 2013
  • We consider the problem of secure and robust clustering for quantized target tracking in wireless sensor networks (WSN), where the observed system is assumed to evolve according to a probabilistic state-space model. We propose a new method for jointly activating the best group of candidate sensors that participate in data aggregation, detecting malicious sensors, and estimating the target position. First, we select the appropriate group in order to balance the energy dissipation and to provide the required data about the target in the WSN. This selection is also based on the transmission power between a sensor node and a cluster head. Second, we detect malicious sensor nodes based on the information relevance of their measurements. Then, we estimate the target position using a quantized variational filtering (QVF) algorithm. The selection of the candidate sensor group is based on a multi-criteria function, which is computed using the predicted target position provided by the QVF algorithm, while the detection of malicious sensor nodes is based on the Kullback-Leibler distance between the current target position distribution and the predicted sensor observation. The performance of the proposed method is validated by simulation results for target tracking in a WSN.
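
The detection step can be illustrated with the closed-form KL divergence between two univariate Gaussians and a simple threshold test. The Gaussian modelling, the dictionary field names, and the threshold are assumptions of this sketch, not the paper's QVF-based rule.

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """Closed-form KL divergence between two univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def flag_suspect_sensors(sensors, threshold=2.0):
    """Flag a sensor when the KL distance between its reported observation
    distribution and the filter's prediction exceeds a threshold
    (hypothetical field names; threshold is illustrative)."""
    flagged = []
    for s in sensors:
        d = kl_gaussian(s["obs_mean"], s["obs_var"], s["pred_mean"], s["pred_var"])
        if d > threshold:
            flagged.append(s["id"])
    return flagged
```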

Satellite Building Segmentation using Deformable Convolution and Knowledge Distillation

  • 최근훈;이응빈;최병인;이태영;안종식;손광훈
    • 한국멀티미디어학회논문지 / Vol. 25, No. 7 / pp.895-902 / 2022
  • Building segmentation using satellite imagery such as EO (Electro-Optical) and SAR (Synthetic-Aperture Radar) images is widely used because of its many applications. EO images have the advantage of containing color information and being noise-free. In contrast, SAR images can capture the physical characteristics and geometric information that EO images cannot. This paper proposes a learning framework for efficient building segmentation that consists of teacher-student privileged knowledge distillation and a deformable convolution block. The teacher network uses EO and SAR images simultaneously to produce richer features and provides them to the student network, while the student network uses only EO images. To do this, we present objective functions consisting of a Kullback-Leibler divergence loss and a knowledge distillation loss. Furthermore, we introduce deformable convolution to avoid pixel-level noise and to efficiently capture hard samples such as small and thin buildings at the global level. Experimental results show that our method outperforms other methods and efficiently captures complex samples such as small or narrow buildings. Moreover, our method can be applied to various other methods.
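
A minimal sketch of a temperature-softened Kullback-Leibler distillation term of the kind such teacher-student training typically uses. The temperature, the T² scaling, and the NumPy formulation are assumptions of this sketch, not the authors' exact objective, which also includes the usual segmentation loss.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0, eps=1e-12):
    """Mean per-pixel KL divergence between the softened teacher and student
    class distributions, scaled by T^2 as is common in distillation."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps)), axis=-1)
    return (temperature ** 2) * kl.mean()
```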

A Study on Particle Filter based on KLD-Resampling for Wireless Patient Tracking

  • Ly-Tu, Nga;Le-Tien, Thuong;Mai, Linh
    • Industrial Engineering and Management Systems / Vol. 16, No. 1 / pp.92-102 / 2017
  • In this paper, we consider a typical health care system that uses a Wireless Sensor Network (WSN) for wireless patient tracking. The wireless patient tracking module of this system performs localization from samples of Received Signal Strength (RSS) variations and tracking through a Particle Filter (PF), assisted by multiple transmit-power information. We propose a modified PF, the Kullback-Leibler Distance (KLD)-resampling PF, to mitigate the effect of RSS variations by generating a sample set near the high-likelihood region, thereby improving wireless patient tracking. The key idea of this method is to approximate a discrete distribution with an upper bound on the KLD error so as to reduce both the location error and the number of particles used. To determine this error bound, an optimal algorithm is proposed based on the maximum gap error between the proposal and Sampling Importance Resampling (SIR) algorithms. With these values set, a number of simulations are run on the health care system's data sets, which contain real RSSI measurements, to evaluate the location error of all methods for various power levels and node densities. Finally, we point out the effect of different power levels versus different node densities on wireless patient tracking.
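
For context, KLD-sampling-style approaches bound the number of particles needed so that the KL distance between the particle approximation and the true posterior stays below a chosen error with a chosen confidence. The sketch below shows that standard bound (Fox's formula); the default epsilon and quantile are illustrative, and the paper's KLD-resampling variant adapts this idea rather than using it verbatim.

```python
import numpy as np

def kld_sample_size(k, epsilon=0.05, z_quantile=1.645):
    """Minimum number of particles so that, with the confidence implied by
    the standard-normal quantile z, the KL distance between the particle
    estimate and the true posterior stays below epsilon; k is the number of
    histogram bins currently occupied by particles."""
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    n = (k - 1) / (2.0 * epsilon) * (1.0 - a + np.sqrt(a) * z_quantile) ** 3
    return int(np.ceil(n))
```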

The Strength of the Relationship between Semantic Similarity and the Subcategorization Frames of the English Verbs: a Stochastic Test based on the ICE-GB and WordNet

  • 송상헌;최재웅
    • 한국언어정보학회지:언어와정보 / Vol. 14, No. 1 / pp.113-144 / 2010
  • The primary goal of this paper is to find a feasible way to answer the question: does the similarity in meaning between verbs relate to the similarity in their subcategorization? In order to answer this question in a rather concrete way on the basis of a large set of English verbs, this study made use of various language resources, tools, and statistical methodologies. We first compiled a list of 678 verbs that were selected from the most and second most frequent word lists of the Collins COBUILD English Dictionary and that also appear in WordNet 3.0. We calculated similarity measures between all pairs of these words based on the 'jcn' algorithm (Jiang and Conrath, 1997) implemented in the WordNet::Similarity module (Pedersen, Patwardhan, and Michelizzi, 2004). A clustering process followed: first building similarity matrices out of the similarity measure values, next drawing dendrograms on the basis of the matrices, then finally obtaining 177 meaningful clusters (covering 437 verbs) that passed a certain level set by z-score. The subcategorization frames and their frequency values were taken from the ICE-GB. In order to calculate the Selectional Preference Strength (SPS) of the relationship between a verb and its subcategorizations, we relied on the Kullback-Leibler Divergence model (Resnik, 1996). The SPS values of the verbs in the same cluster were compared with each other, which served to give statistical values indicating how much the SPS values overlap between the subcategorization frames of the verbs. Our final analysis shows that the degree of overlap, or the relationship between semantic similarity and the subcategorization frames of the verbs in English, is spread evenly from 'very strongly related' to 'very weakly related'. Some semantically similar verbs share a lot in terms of their subcategorization frames, some indicate an average degree of strength in the relationship, while others, though still semantically similar, tend to share little in their subcategorization frames.
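
A minimal sketch of a Resnik-style selectional preference strength: the KL divergence between the frame distribution conditioned on a verb and the overall frame distribution. Treating subcategorization frames as the classes is an assumption made here to mirror the paper's setup; Resnik's original formulation uses semantic classes of arguments.

```python
import numpy as np

def selectional_preference_strength(p_frame_given_verb, p_frame_prior, eps=1e-12):
    """SPS(v) = D_KL( P(frame | v) || P(frame) ): how far a verb's
    subcategorization-frame distribution departs from the overall prior."""
    p = np.asarray(p_frame_given_verb, dtype=float) + eps
    q = np.asarray(p_frame_prior, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```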


Multi Agents-Multi Tasks Assignment Problem using Hybrid Cross-Entropy Algorithm

  • 김광
    • 한국산업정보학회논문지 / Vol. 27, No. 4 / pp.37-45 / 2022
    • 2022
  • 본 논문에서는 대표적인 조합 최적화(combinatorial optimization) 문제인 다수 에이전트-다수 작업 할당 문제를 제시한다. 할당 문제의 목적은 각 작업의 달성률(achievement rate)의 합을 최대로 하는 에이전트-작업 할당을 결정하는 것이다. 달성률은 각 작업의 할당된 에이전트의 수에 따라 아래 오목 증가(concave down increasing)형태로 다루어지며, 본 할당 문제는 비선형(non-linearity)의 목적함수를 갖는 NP-난해(NP-hard) 문제로 표현된다. 본 논문에서는 할당 문제를 해결하기 위한 효과적이면서 효율적인 문제 해결 방법론으로 혼합 교차-엔트로피 알고리즘(hybrid cross-entropy algorithm)을 제안한다. 일반적인 교차-엔트로피 알고리즘은 문제 상황에 따라 느린 매개변수 업데이트 속도와 조기수렴(premature convergence)이 발생할 수 있다. 본 연구에서 제안하는 문제 해결 방법론은 이러한 단점의 발생 확률을 낮추도록 설계되었으며, 실험적으로도 우수한 성능을 보이는 알고리즘임을 수치실험을 통해 제시한다.