• Title/Summary/Keyword: Gaussian measure


Centroid Neural Network with Bhattacharyya Kernel (Bhattacharyya 커널을 적용한 Centroid Neural Network)

  • Lee, Song-Jae;Park, Dong-Chul
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.9C
    • /
    • pp.861-866
    • /
    • 2007
  • A clustering algorithm for Gaussian Probability Distribution Function (GPDF) data, called Centroid Neural Network with a Bhattacharyya Kernel (BK-CNN), is proposed in this paper. The proposed BK-CNN is based on the unsupervised competitive Centroid Neural Network (CNN) and employs a kernel method for data projection. The kernel method adopted in BK-CNN projects data from the low-dimensional input feature space into a higher-dimensional feature space, so that nonlinear problems in the input space can be solved linearly in the feature space. In order to cluster the GPDF data, the Bhattacharyya kernel is used to measure the distance between two probability distributions for data projection. With the incorporation of the kernel method, the proposed BK-CNN can deal with nonlinear separation boundaries and successfully allocates more code vectors in regions where GPDF data are densely distributed. When applied to GPDF data in an image classification problem, the experimental results show that the proposed BK-CNN algorithm gives 1.7%-4.3% improvements in average classification accuracy over conventional algorithms such as k-means, Self-Organizing Map (SOM), and CNN combined with the Bhattacharyya distance, denoted B-k-Means, B-SOM, and B-CNN.
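
The abstract gives no code, but the closed-form Bhattacharyya distance between two Gaussians that BK-CNN builds on is compact enough to sketch. Below is a minimal NumPy illustration; the function names and the exp(-distance) kernel form are our assumptions, not the authors' implementation:

```python
# Hedged sketch: closed-form Bhattacharyya distance between two multivariate
# Gaussians, and the induced kernel exp(-D_B) for comparing GPDF data.
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """D_B = 1/8 (mu1-mu2)^T S^-1 (mu1-mu2) + 1/2 ln(det S / sqrt(det S1 det S2)), S = (S1+S2)/2."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov)
                         / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def bhattacharyya_kernel(g1, g2):
    """Similarity of two Gaussians: 1 when identical, decaying with distance."""
    return np.exp(-bhattacharyya_distance(*g1, *g2))

g_a = (np.zeros(2), np.eye(2))
g_b = (np.ones(2), 2.0 * np.eye(2))
print(bhattacharyya_kernel(g_a, g_b))
```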

Depth From Defocus using Wavelet Transform (웨이블릿 변환을 이용한 Depth From Defocus)

  • Choi, Chang-Min;Choi, Tae-Sun
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.42 no.5 s.305
    • /
    • pp.19-26
    • /
    • 2005
  • In this paper, a new method for obtaining the three-dimensional shape of an object by measuring the relative blur between images using wavelet analysis is described. Most of the previous methods use inverse filtering to determine the measure of defocus. These methods suffer from some fundamental problems like inaccuracies in finding the frequency domain representation, windowing effects, and border effects. Besides these deficiencies, a filter such as the Laplacian of Gaussian, which produces an aggregate estimate of defocus for an unknown texture, cannot lead to accurate depth estimates because of the non-stationary nature of images. We propose a new depth from defocus (DFD) method using wavelet analysis that is capable of performing both the local analysis and the windowing technique with variable-sized regions for non-stationary images with complex textural properties. We show that the normalized image ratio of wavelet power, by Parseval's theorem, is closely related to the blur parameter and depth. Experimental results demonstrate that our DFD method is faster and gives more precise shape estimates than previous DFD techniques for both synthetic and real scenes.
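
As a rough illustration of the central quantity here, the sketch below (ours, using the PyWavelets package; the averaging "blur" and all names are stand-ins) compares high-frequency wavelet energy between two views of the same scene, the ratio the authors relate to the blur parameter via Parseval's theorem:

```python
# Our sketch (not the authors' code): ratio of high-frequency wavelet energy
# between two differently defocused images as a relative blur measure.
import numpy as np
import pywt

def detail_energy(img, wavelet="db2", level=2):
    """Sum of squared detail coefficients: the image's high-frequency power."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    return sum(float(np.sum(band ** 2)) for bands in coeffs[1:] for band in bands)

def relative_blur(img_a, img_b):
    """Wavelet-power ratio; by Parseval's theorem it tracks relative defocus."""
    return detail_energy(img_a) / detail_energy(img_b)

rng = np.random.default_rng(0)
sharp = rng.standard_normal((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3.0  # crude blur stand-in
print(relative_blur(sharp, blurred))  # > 1: the first image is the sharper one
```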

Maximum Canopy Height Estimation Using ICESat GLAS Laser Altimetry

  • Park, Tae-Jin;Lee, Woo-Kyun;Lee, Jong-Yeol;Hayashi, Masato;Tang, Yanhong;Kwak, Doo-Ahn;Kwak, Han-Bin;Kim, Moon-Il;Cui, Guishan;Nam, Ki-Jun
    • Korean Journal of Remote Sensing
    • /
    • v.28 no.3
    • /
    • pp.307-318
    • /
    • 2012
  • To understand forest structures, the Geoscience Laser Altimeter System (GLAS) instrument has been employed to measure and monitor forest canopy, with the feasibility of acquiring three-dimensional canopy structure information. This study examined the potential of a GLAS dataset for measuring forest canopy structures, particularly maximum canopy height estimation. To estimate maximum canopy height from the GLAS dataset, we simply used the difference between the signal start and the ground peak derived from a Gaussian decomposition method. For evaluation, maximum canopy height was also derived from airborne Light Detection and Ranging (LiDAR) data and used to assess the accuracy of the GLAS estimates. In addition, several influences, such as topographical and biophysical factors, were analyzed and discussed to explain the error sources of direct maximum canopy height estimation using GLAS data. With the direct method, the root mean square error (RMSE) was 8.15 m, and the estimates tended to be higher than the airborne LiDAR derivations. According to the error analysis, these error sources, particularly terrain slope within the GLAS footprint, need to be considered, and a statistical regression approach based on various parameters from the Gaussian decomposition should be applied for accurate and reliable maximum canopy height estimation.
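
A heavily simplified sketch of the direct method described here, on a synthetic two-component waveform (all values and names are illustrative, not GLAS processing code): fit a Gaussian mixture to the return, take the lowest mode as the ground peak, and subtract it from the signal start.

```python
# Toy Gaussian decomposition of a LiDAR return: canopy + ground components,
# with maximum canopy height taken as (signal start - ground peak).
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-0.5 * ((x - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - m2) / s2) ** 2))

elev = np.linspace(0.0, 40.0, 400)                          # elevation bins (m)
wave = two_gaussians(elev, 0.6, 25.0, 2.0, 1.0, 3.0, 1.5)   # canopy at 25 m, ground at 3 m

params, _ = curve_fit(two_gaussians, elev, wave,
                      p0=(0.5, 20.0, 2.0, 0.8, 5.0, 2.0))
ground_peak = min(params[1], params[4])                     # lowest Gaussian mode = ground
signal_start = elev[np.nonzero(wave > 0.01 * wave.max())[0][-1]]  # top of the return
print(signal_start - ground_peak)                           # ~ maximum canopy height (m)
```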

Image Restoration of Remote Sensing High Resolution Imagery Using Point-Jacobian Iterative MAP Estimation (Point-Jacobian 반복 MAP 추정을 이용한 고해상도 영상복원)

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.6
    • /
    • pp.817-827
    • /
    • 2014
  • In satellite remote sensing, the operational environment of the satellite sensor causes image degradation during image acquisition. The degradation results in noise and blurring, which hamper the identification and extraction of useful information from the image data. This study proposes a maximum a posteriori (MAP) estimation using Point-Jacobian iteration to restore a degraded image. The proposed method assumes Gaussian additive noise and a Markov random field of spatial continuity. It employs a neighbor window of spoke type, composed of 8 line windows in the 8 directions, and a boundary adjacency measure based on the Mahalanobis square distance between center and neighbor pixels. For evaluation, a pixel-wise classification was used for simulation data with various patterns similar to the structures exhibited in high resolution imagery, and an unsupervised segmentation for remotely-sensed image data of 1 m spatial resolution observed over the northern area of Anyang in the Korean peninsula. The experimental results imply that the method can improve analytical accuracy in the application of remote sensing high resolution imagery.
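
The update below is a toy stand-in for the Point-Jacobian iterative MAP estimator, assuming Gaussian additive noise and a quadratic Gauss-MRF prior with a plain 4-neighbour window (the paper's spoke-type window and Mahalanobis adjacency measure are not reproduced):

```python
# Jacobi-type fixed-point iteration for MAP denoising: every pixel moves
# simultaneously toward a weighted average of its observation and neighbours.
import numpy as np

def jacobi_map_denoise(y, sigma2=0.01, lam=5.0, iters=100):
    x = y.copy()
    for _ in range(iters):
        # 4-neighbour sums with edge replication
        nsum = (np.vstack([x[:1], x[:-1]]) + np.vstack([x[1:], x[-1:]])
                + np.hstack([x[:, :1], x[:, :-1]]) + np.hstack([x[:, 1:], x[:, -1:]]))
        # simultaneous (Jacobi) update: data term weighted by 1/sigma2,
        # smoothness term by lam per neighbour
        x = (y / sigma2 + lam * nsum) / (1.0 / sigma2 + 4.0 * lam)
    return x

rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print(np.abs(jacobi_map_denoise(noisy) - clean).mean())  # mean abs error after smoothing
```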

Value at Risk with Peaks over Threshold: Comparison Study of Parameter Estimation (Peacks over threshold를 이용한 Value at Risk: 모수추정 방법론의 비교)

  • Kang, Minjung;Kim, Jiyeon;Song, Jongwoo;Song, Seongjoo
    • The Korean Journal of Applied Statistics
    • /
    • v.26 no.3
    • /
    • pp.483-494
    • /
    • 2013
  • The importance of financial risk management has been highlighted after several recent incidents of global financial crisis. One of the issues in financial risk management is how to measure the risk; currently, the most widely used risk measure is the Value at Risk (VaR). We can consider estimating VaR using extreme value theory if the financial data have heavy tails, as in the recent market trend. In this paper, we study the estimation of VaR using Peaks over Threshold (POT), a common method of modeling fat-tailed data with extreme value theory. To use POT, we first estimate the parameters of the Generalized Pareto Distribution (GPD). Here, we compare three different methods of estimating the GPD parameters by comparing the performance of the estimated VaR based on KOSPI 5-minute data. In addition, we simulate data from normal inverse Gaussian distributions and examine two parameter estimation methods for the GPD. We find that the recent methods of GPD parameter estimation work better than maximum likelihood estimation when the kurtosis of the KOSPI return distribution is very high, and the simulation experiment shows similar results.
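
For readers unfamiliar with POT, here is a minimal sketch (our own, on simulated heavy-tailed data rather than KOSPI returns) of the standard pipeline: pick a threshold, fit a GPD to the exceedances by maximum likelihood, and invert the tail estimator to obtain VaR.

```python
# POT sketch: GPD fit to threshold exceedances, then the tail estimator
# VaR_q = u + beta/xi * (((1-q) * n / n_u)**(-xi) - 1).
import numpy as np
from scipy.stats import genpareto, t

rng = np.random.default_rng(2)
losses = t.rvs(df=3, size=10_000, random_state=rng)  # heavy-tailed stand-in for returns

u = np.quantile(losses, 0.95)                        # threshold choice
exceed = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceed, floc=0.0)        # MLE of shape and scale

def var_pot(q, n=losses.size, n_u=exceed.size):
    """Invert the GPD tail approximation at confidence level q."""
    return u + beta / xi * (((1 - q) * n / n_u) ** (-xi) - 1)

print(var_pot(0.99))
```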

An Efficient CT Image Denoising using WT-GAN Model

  • Hae Chan Jeong;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.5
    • /
    • pp.21-29
    • /
    • 2024
  • Reducing the radiation dose during CT scanning can lower the risk of radiation exposure, but the image resolution deteriorates significantly and diagnostic effectiveness is reduced due to the generated noise. Therefore, noise removal from CT images is an essential step in image restoration. Until now, methods that separate the noise from the original signal in the spatial domain have had limitations in removing only the noise. In this paper, we aim to effectively remove noise from CT images using a wavelet transform-based GAN model, the WT-GAN model, operating in the frequency domain. The GAN model used here generates denoised images through a generator with a U-Net structure and a discriminator with a PatchGAN structure. To evaluate the performance of the proposed WT-GAN model, experiments were conducted on CT images corrupted by various noises, namely Gaussian noise, Poisson noise, and speckle noise. The WT-GAN model outperformed the traditional BM3D filter as well as existing deep learning models such as DnCNN, CDAE, and U-Net GAN, both qualitatively and in the quantitative measures PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure).
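
The full WT-GAN cannot fit in a snippet, so the sketch below shows only the wavelet round-trip that the frequency-domain approach rests on, with soft-thresholding standing in for the U-Net generator (our simplification, not the paper's model):

```python
# Wavelet round-trip: decompose, denoise the coefficients, reconstruct.
# Soft-thresholding here is a placeholder for a learned generator.
import numpy as np
import pywt

def wavelet_domain_denoise(img, wavelet="haar", level=2, thresh=0.1):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    den = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in bands)
        for bands in coeffs[1:]
    ]
    return pywt.waverec2(den, wavelet)

rng = np.random.default_rng(3)
ct = np.zeros((64, 64)); ct[16:48, 16:48] = 1.0        # synthetic "CT" patch
noisy = ct + 0.2 * rng.standard_normal(ct.shape)       # Gaussian noise case
print(np.abs(wavelet_domain_denoise(noisy) - ct).mean())
```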

A Hippocampus Segmentation in Brain MR Images using Level-Set Method (레벨 셋 방법을 이용한 뇌 MR 영상에서 해마영역 분할)

  • Lee, Young-Seung;Choi, Heung-Kook
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.9
    • /
    • pp.1075-1085
    • /
    • 2012
  • In clinical research using medical images, image segmentation is one of the most important processes. In particular, hippocampal atrophy is helpful for clinical Alzheimer diagnosis as a specific marker of the progress of Alzheimer's disease. In order to measure hippocampus volume exactly, segmentation of the hippocampus is essential. However, the hippocampus has features such as relatively low contrast, low signal-to-noise ratio, and discontinuous boundaries in MRI images, and these features make it difficult to segment. To solve this problem, we first selected a region of interest from an experiment image, subtracted the original image from its negative, enhanced the contrast, and applied anisotropic diffusion filtering and Gaussian filtering as preprocessing. Finally, we performed image segmentation using two level-set methods. Through a variety of approaches for the validation of the proposed hippocampus segmentation method, we confirmed that the proposed method improved the speed and accuracy of the segmentation. Consequently, the proposed method is suitable for segmenting regions that share the features of the hippocampus, and we believe it has great potential if successfully combined with other research findings.
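
As a toy illustration of curve evolution by a level-set method (not the paper's two-stage pipeline), the sketch below grows a seed region under an edge-stopping speed function after Gaussian smoothing; all parameters and the intensity scale are arbitrary:

```python
# Toy level set: the zero level set of phi expands with speed g*|grad(phi)|,
# where g ~ 1/(1+|grad(I)|^2) slows the front near strong image gradients.
import numpy as np
from scipy.ndimage import gaussian_filter

def evolve(img, iters=150, dt=0.2):
    gy, gx = np.gradient(gaussian_filter(img, 2.0))
    g = 1.0 / (1.0 + gx ** 2 + gy ** 2)        # edge-stopping function
    phi = np.full(img.shape, -1.0)
    phi[28:36, 28:36] = 1.0                    # small seed inside the target
    for _ in range(iters):
        py, px = np.gradient(phi)
        phi = phi + dt * g * np.sqrt(px ** 2 + py ** 2)  # outward motion
    return phi > 0                             # segmented region mask

img = np.zeros((64, 64)); img[20:44, 20:44] = 255.0    # bright target region
print(evolve(img).sum())                               # pixels captured by the front
```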

Improvement of Keyword Spotting Performance Using Normalized Confidence Measure (정규화 신뢰도를 이용한 핵심어 검출 성능향상)

  • Kim, Cheol;Lee, Kyoung-Rok;Kim, Jin-Young;Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.4
    • /
    • pp.380-386
    • /
    • 2002
  • Conventional post-processing, such as the confidence measure (CM) proposed by Rahim, calculates a phone's CM using the likelihood between the phoneme model and an anti-model, and then obtains a word's CM by averaging the phone-level CMs [1]. In the conventional method, the CMs of some specific keywords are very low and those keywords are usually rejected. The reason is that the statistics of phone-level CMs are not consistent: phone-level CMs have different probability density functions (pdfs) for each phone, especially each tri-phone. To overcome this problem, we propose a normalized confidence measure (NCM). Our approach is to transform the CM pdf of each tri-phone to the same pdf, under the assumption that CM pdfs are Gaussian. For evaluating our method we use a common keyword spotting system, in which context-dependent HMM models are used for modeling keyword utterances and context-independent HMM models are applied to non-keyword utterances. The experimental results show that the proposed NCM reduced the FAR (false alarm rate) from 0.44 to 0.33 FA/KW/HR (false alarms/keyword/hour) when the MDR (missed detection rate) is about 8%, a 25% improvement in FAR.
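
The normalization idea can be sketched in a few lines (our illustration; the tri-phone labels, names, and numbers are made up): z-score each tri-phone's CM with per-phone statistics so every phone's CM follows the same standard Gaussian, then average per keyword.

```python
# Per-tri-phone Gaussian normalization of confidence measures.
import numpy as np

def fit_phone_stats(scores_by_phone):
    """Per-tri-phone mean/std of raw CMs, estimated from held-out data."""
    return {p: (np.mean(s), np.std(s)) for p, s in scores_by_phone.items()}

def word_ncm(phone_scores, stats):
    """Word-level normalized CM: mean of z-scored phone-level CMs."""
    z = [(cm - stats[p][0]) / stats[p][1] for p, cm in phone_scores]
    return float(np.mean(z))

# made-up tri-phone labels and scores, purely illustrative
stats = fit_phone_stats({"k-a+m": [0.20, 0.25, 0.30], "a-m+#": [0.70, 0.75, 0.80]})
print(word_ncm([("k-a+m", 0.28), ("a-m+#", 0.74)], stats))
```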

Terrain Slope Estimation Methods Using the Least Squares Approach for Terrain Referenced Navigation

  • Mok, Sung-Hoon;Bang, Hyochoong
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.14 no.1
    • /
    • pp.85-90
    • /
    • 2013
  • This paper presents a study on terrain referenced navigation (TRN). The extended Kalman filter (EKF) is adopted as the filter method. The Jacobian matrix of the measurement equations in the EKF consists of terrain slope terms, so accurate slope estimation is essential for filter stability. Two slope estimation methods are proposed in this study, both based on the least-squares approach. One is planar regression, which searches for the plane that best represents, in the least-squares sense, the terrain map over the region determined by the position error covariance. It is shown that this method can provide a more accurate solution than the previously developed linear regression approach, which uses lines rather than a plane in the least-squares measure. The other proposed method is weighted planar regression: additional weights formed from a Gaussian pdf are applied in the planar regression to reflect the actual pdf of the EKF position estimate. Monte Carlo simulations are conducted to compare the performance of the previous and the two proposed methods by analyzing the filter properties of divergence probability and convergence speed. One of the slope estimation methods can then be selected after determining which filter property is more significant for each mission.
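
A minimal sketch of the weighted planar regression idea (ours; the names and synthetic terrain are illustrative): fit z = ax + by + c by least squares with Gaussian weights centred on the position estimate, the (a, b) pair being the slope terms that enter the EKF Jacobian.

```python
# Weighted planar regression over local terrain samples.
import numpy as np

def weighted_plane_fit(x, y, z, mu, cov):
    """Fit z = a*x + b*y + c with Gaussian weights centred at mu."""
    P = np.stack([x - mu[0], y - mu[1]], axis=1)
    w = np.exp(-0.5 * np.sum(P @ np.linalg.inv(cov) * P, axis=1))  # Gaussian pdf weights
    A = np.stack([x, y, np.ones_like(x)], axis=1)
    sw = np.sqrt(w)
    coeff, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
    return coeff                                # (a, b, c): slopes and offset

xg, yg = np.meshgrid(np.arange(5.0), np.arange(5.0))
x, y = xg.ravel(), yg.ravel()
z = 0.3 * x - 0.1 * y + 2.0                     # synthetic planar terrain
print(weighted_plane_fit(x, y, z, mu=(2.0, 2.0), cov=np.eye(2)))  # ~ (0.3, -0.1, 2.0)
```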

Outflows in Sodium Excess Objects

  • Park, Jongwon;Jeong, Hyunjin;Yi, Sukyoung K.
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.40 no.1
    • /
    • pp.48.2-48.2
    • /
    • 2015
  • van Dokkum and Conroy revisited the strong Na I lines at $8200{\AA}$ found in some giant elliptical galaxies and interpreted them as evidence for a bottom-heavy initial mass function. Jeong et al. later found many galaxies showing a strong Na D doublet absorption line at $5900{\AA}$ (Na D excess objects, a.k.a. NEOs) and showed that their origins can differ across galaxy types. While the excess in Na D seems related to the interstellar medium (ISM) in late-type galaxies, smooth-looking early-type NEOs show no compelling sign of ISM contributions. To test this finding, we measured the Doppler shift of the Na D line, hypothesizing that the ISM is more likely to show a blueshift due to outflows caused by either star formation or AGN activity. To measure the Doppler shift, we fitted each galaxy spectrum near the Na D line with both Gaussian and Voigt functions and found that Voigt profiles reproduce the shapes of the Na D lines markedly better. Many late-type NEOs clearly show a blueshift in their Na D lines, consistent with the interpretation that the Na D excess found in them is related to gas outflow driven by star formation. In contrast, early-type NEOs do not show any notable Doppler component, which is also consistent with the interpretation of Jeong et al. that the Na D excess in early-type NEOs is likely not related to ISM activity but is purely stellar in origin.
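
As an illustration of the line-fitting step (ours, on a synthetic spectrum, not the authors' pipeline), the sketch below fits a Voigt profile near the Na D1 rest wavelength and converts the fitted centre offset into a Doppler velocity; replacing voigt_profile with a pure Gaussian gives the Gaussian variant the authors also tried.

```python
# Voigt fit to a synthetic absorption line; a blueshifted centre gives v < 0.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

LAM0 = 5895.92                                   # Na D1 rest wavelength (Angstrom)
C_KMS = 299_792.458                              # speed of light (km/s)

def voigt_line(lam, depth, centre, sigma, gamma):
    prof = voigt_profile(lam - centre, sigma, gamma)
    return 1.0 - depth * prof / prof.max()       # continuum-normalized flux

lam = np.linspace(LAM0 - 10, LAM0 + 10, 400)
flux = voigt_line(lam, 0.4, LAM0 - 1.0, 1.0, 0.5)  # synthetic, blueshifted line

p, _ = curve_fit(voigt_line, lam, flux, p0=(0.3, LAM0, 1.0, 0.3),
                 bounds=(0.0, np.inf))
print(C_KMS * (p[1] - LAM0) / LAM0)              # Doppler velocity in km/s (negative)
```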
