• Title/Summary/Keyword: Expectation and Maximization

Tsunami-induced Change Detection Using SAR Intensity and Texture Information Based on the Generalized Gaussian Mixture Model

  • Jung, Min-young;Kim, Yong-il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.2
    • /
    • pp.195-206
    • /
    • 2016
  • Remote sensing using SAR data has many advantages when applied to disaster sites because of its wide coverage and all-weather acquisition capability. Although a single-pol (single-polarization) SAR image cannot represent the land surface as well as a quad-pol SAR image can, single-pol SAR data are still worth using for disaster-induced change detection. In this paper, an automatic change detection method based on a mixture of GGDs (generalized Gaussian distributions) is proposed, and the usability of textural features and intensity is evaluated with the proposed method. Three ALOS/PALSAR images were used in the experiments, and the study site was Norita City, which was affected by the 2011 Tohoku earthquake. The experimental results showed that the proposed automatic change detection method is practical for disaster sites where large areas change. The intensity information is useful for detecting disaster-induced changes, with a g-mean of 68.3%, but the texture information is not. The autocorrelation and correlation features show the interesting implication that they tend not to extract agricultural areas in the change detection map. Therefore, the final tsunami-induced change map is produced by combining three maps: one derived from the intensity information and used as an initial map, and the others derived from the textural information and used as auxiliary data.
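
As an illustration of the expectation-maximization step behind such a GGD-mixture change detector, the following sketch fits a two-component generalized Gaussian mixture to a 1-D array of per-pixel change-index values (e.g. a flattened log-ratio image). It is a simplification of the paper's method: the GGD shape parameters are held fixed and the location/scale are updated from weighted moments.

```python
import numpy as np
from scipy.stats import gennorm

def ggm_em_two_class(x, beta=(1.5, 1.5), n_iter=50):
    """EM for a two-component generalized Gaussian mixture (sketch).

    x    : 1-D array of per-pixel change-index values (e.g. log-ratio).
    beta : fixed GGD shape parameters for the 'unchanged' / 'changed'
           classes (full shape estimation as in the paper is omitted).
    """
    pi = np.array([0.5, 0.5])
    loc = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    scale = np.array([x.std(), x.std()])

    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each pixel
        p = np.stack([pi[k] * gennorm.pdf(x, beta[k], loc[k], scale[k])
                      for k in range(2)])
        r = p / (p.sum(axis=0, keepdims=True) + 1e-300)

        # M-step: weighted moment updates (mean / std as GGD proxies)
        for k in range(2):
            w = r[k] / r[k].sum()
            pi[k] = r[k].mean()
            loc[k] = np.sum(w * x)
            scale[k] = np.sqrt(np.sum(w * (x - loc[k]) ** 2)) + 1e-12
    return pi, loc, scale, r

# pixels with posterior r[1] > 0.5 would be labelled "changed"
```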

Segmenting Inpatients by Mixture Model and Analytical Hierarchical Process(AHP) Approach In Medical Service (의료서비스에서 혼합모형(Mixture model) 및 분석적 계층과정(AHP)를 이용한 입원환자의 시장세분화에 관한 연구)

  • 백수경;곽영식
    • Health Policy and Management
    • /
    • v.12 no.2
    • /
    • pp.1-22
    • /
    • 2002
  • Since the early 1980s, scholars in various academic fields have applied latent structure and other types of finite mixture models. Although the merits of the finite mixture model are well documented, attempts to apply the mixture model to medical services have been relatively rare. The researchers aim to fill this gap by introducing the finite mixture model and segmenting an inpatient database from one general hospital. In section 2, finite mixture models are compared with clustering, chi-square analysis, and discriminant analysis based on Wedel and Kamakura (2000)'s segmentation methodology schemata. The mixture model yields the optimal number of segments and a fuzzy classification for each observation via the EM (expectation-maximization) algorithm. The finite mixture model unmixes the sample, identifies the groups, and estimates the parameters of the density function underlying the observed data within each group. In sections 3 and 4, we illustrate the results of segmenting data on 4,510 patients, including nominal and ratio scales. We then show that AHP can identify the attractiveness of each segment, from which the decision maker can select the best target segment.
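
A hedged sketch of this segmentation workflow, using scikit-learn's Gaussian mixture implementation as a stand-in for the finite mixture model: EM is run for several candidate segment counts, the number of segments is chosen by BIC, and the posterior probabilities give the fuzzy classification of each patient. The data matrix X and its preprocessing are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_with_mixture(X, k_range=range(2, 7), seed=0):
    """Fit finite (Gaussian) mixtures by EM for several segment counts,
    pick the number of segments by BIC, and return fuzzy memberships.

    X is an (n_patients, n_features) matrix; categorical/ratio variables
    are assumed to have been encoded and scaled beforehand.
    """
    best_model, best_bic = None, np.inf
    for k in k_range:
        gm = GaussianMixture(n_components=k, covariance_type="full",
                             n_init=5, random_state=seed).fit(X)
        if gm.bic(X) < best_bic:
            best_model, best_bic = gm, gm.bic(X)
    memberships = best_model.predict_proba(X)  # fuzzy segment membership
    return best_model, memberships

# Example with synthetic data (illustrative only):
# X = np.random.rand(4510, 6)
# model, post = segment_with_mixture(X)
# print(model.n_components, post[:3].round(2))
```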

Impact of aperture-thickness on the real-time imaging characteristics of coded-aperture gamma cameras

  • Park, Seoryeong;Boo, Jiwhan;Hammig, Mark;Jeong, Manhee
    • Nuclear Engineering and Technology
    • /
    • v.53 no.4
    • /
    • pp.1266-1276
    • /
    • 2021
  • The mask parameters of a coded aperture are critical design features when optimizing the performance of a gamma-ray camera. In this paper, experiments and Monte Carlo simulations were performed to derive the minimum detectable activity (MDA) required for a real-time imaging capability. First, the impact of the thickness of the modified uniformly redundant array (MURA) mask on image quality is quantified, and the imaging of point, line, and surface radiation sources is demonstrated using both cross-correlation (CC) and maximum likelihood expectation maximization (MLEM) methods. Second, the minimum detectable activity for real-time imaging is derived by varying the factors used in the image quality assessment: the peak signal-to-noise ratio (PSNR), the normalized mean square error (NMSE), the spatial resolution (full width at half maximum; FWHM), and the structural similarity (SSIM), all evaluated as a function of energy and mask thickness. Sufficiently sharp images were reconstructed when the mask thickness was approximately 2 cm for source energies between 30 keV and 1.5 MeV, and the minimum detectable activity for real-time imaging was 23.7 MBq at a distance of 1 m for a 1 s collection time.
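
For reference, the MLEM reconstruction used here applies the multiplicative update x ← x · Aᵀ(y / Ax) / Aᵀ1. A minimal sketch, assuming a generic system matrix A standing in for the coded-aperture forward model:

```python
import numpy as np

def mlem(system_matrix, measured, n_iter=50, eps=1e-12):
    """Plain MLEM reconstruction (sketch).

    system_matrix : (n_detector_pixels, n_image_pixels) forward model A,
                    here the coded-aperture projection operator.
    measured      : detector counts y.
    """
    A = system_matrix
    x = np.ones(A.shape[1])           # uniform initial image
    sens = A.T @ np.ones(A.shape[0])  # sensitivity (normalisation) term
    for _ in range(n_iter):
        forward = A @ x + eps
        x *= (A.T @ (measured / forward)) / (sens + eps)
    return x
```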

Effect of filters and reconstruction method on Cu-64 PET image

  • Lee, Seonhwa;Kim, Jung min;Kim, Jung Young;Kim, Jin Su
    • Journal of Radiopharmaceuticals and Molecular Probes
    • /
    • v.3 no.2
    • /
    • pp.65-71
    • /
    • 2017
  • To assess the effects of filters and reconstruction methods on Cu-64 PET data acquired on a Siemens scanner, various reconstruction algorithms with various filters were evaluated in terms of spatial resolution, non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR). Image reconstruction was performed using filtered backprojection (FBP), 2D ordered subset expectation maximization (OSEM), the 3D reprojection algorithm (3DRP), and maximum a posteriori (MAP) algorithms. For the FBP reconstruction, Ramp, Butterworth, Hamming, Hanning, or Parzen filters were used. Attenuation and scatter correction were performed to assess their effects. Regarding spatial resolution, the highest achievable volumetric resolution was 3.08 mm³ at the center of the FOV when the MAP (β = 0.1) reconstruction method was used. SOR was below 4% for FBP when the Ramp, Hamming, Hanning, or Shepp-Logan filter was used. The lowest NU (highest uniformity) after attenuation and scatter correction was 5.39% when FBP with the Parzen filter was used. Regarding RC, 0.9 < RC < 1.1 was obtained with OSEM (10 iterations) when attenuation and scatter correction were applied. In this study, the image quality of Cu-64 on the Siemens Inveon PET was investigated. These data will be helpful for the quantification of Cu-64 PET data.
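
The FBP filter comparison can be reproduced in outline with scikit-image, which provides the Ramp, Shepp-Logan, Hamming, and Hann (Hanning) apodizing filters (Butterworth and Parzen are not available there, so they are omitted). The toy disc phantom below is an illustrative assumption, not the phantom used in the paper.

```python
import numpy as np
from skimage.transform import radon, iradon

# Toy phantom: a bright disc in a 128x128 image
image = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
image[(yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2] = 1.0

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta, circle=True)

# FBP with several of the apodizing filters compared in the abstract
for name in ["ramp", "shepp-logan", "hamming", "hann"]:
    recon = iradon(sinogram, theta=theta, filter_name=name, circle=True)
    err = np.sqrt(np.mean((recon - image) ** 2))
    print(f"{name:12s} RMSE = {err:.4f}")
```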

A Study on Emotion Recognition Systems based on the Probabilistic Relational Model Between Facial Expressions and Physiological Responses (생리적 내재반응 및 얼굴표정 간 확률 관계 모델 기반의 감정인식 시스템에 관한 연구)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.6
    • /
    • pp.513-519
    • /
    • 2013
  • Current vision-based approaches for emotion recognition, such as facial expression analysis, have many technical limitations in real circumstances and are not suitable for practical applications when used alone. In this paper, we propose an approach for emotion recognition that combines extrinsic representations and intrinsic activities among the natural responses of humans given specific stimuli for inducing emotional states. The intrinsic activities can be used to compensate for the uncertainty of extrinsic representations of emotional states. This combination is achieved by using PRMs (Probabilistic Relational Models), which are an extended version of Bayesian networks and are learned by greedy-search and expectation-maximization algorithms. Facial expression-related extrinsic emotion features and physiological signal-based intrinsic emotion features from previous research are combined into the attributes of the PRMs in the emotion recognition domain. Maximum likelihood estimation with the given dependency structure and estimated parameter set is used to classify the label of the target emotional states.
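
The PRM learned by greedy search and EM is not reproduced here; as a much simpler stand-in for fusing the extrinsic (facial) and intrinsic (physiological) modalities, the sketch below concatenates two hypothetical feature matrices and classifies emotional states by maximum likelihood under Gaussian class-conditional densities. All variable names and data are illustrative assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical feature matrices (not from the paper):
#   F_face : (n_samples, d1) facial-expression features
#   F_phys : (n_samples, d2) physiological features (e.g. ECG/EDA statistics)
#   y      : emotion labels
rng = np.random.default_rng(0)
F_face = rng.normal(size=(200, 8))
F_phys = rng.normal(size=(200, 4))
y = rng.integers(0, 4, size=200)

# Fuse the extrinsic and intrinsic modalities by feature concatenation,
# then classify by maximum likelihood under Gaussian class-conditionals.
X = np.hstack([F_face, F_phys])
clf = GaussianNB().fit(X, y)
print("training accuracy:", clf.score(X, y))
```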

Study on the PET image quality according to various scintillation detectors based on the Monte Carlo simulation

  • Eunsoo Kim;Chanrok Park
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.27 no.2
    • /
    • pp.129-132
    • /
    • 2023
  • Purpose: Positron emission tomography (PET) is a crucial medical imaging modality for the detection of cancer lesions. To maintain improved image quality, it is crucial to use detectors with superior performance. Therefore, the purpose of this study was to compare PET image quality using Monte Carlo simulation for the detector materials BGO, LSO, and LuAP. Materials and Methods: The Geant4 Application for Tomographic Emission (GATE) was used to design the PET detector. Scintillators of BGO, LSO, and LuAP were modelled, each with a size of 3.95 × 5.3 mm (width × height) and 25.0 mm (thickness). The PET detector consisted of 34 blocks per ring and a total of 4 rings. A line source of 1 MBq with a radius of 1 mm and a length of 20 mm was modelled and acquired for 20 seconds. The acquired data were reconstructed using maximum likelihood expectation maximization with 2 iterations and 10 subsets, and the counts were compared. Results and Discussion: The highest true, random, and scatter counts were obtained from the BGO scintillation detector compared with LSO and LuAP. Conclusion: The BGO scintillation detector material showed excellent performance in detecting the gamma rays emitted from the PET phantom.
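
Reconstruction with "2 iterations and 10 subsets" refers to an ordered-subset EM schedule, in which one iteration cycles through every subset, giving 2 × 10 = 20 image updates in total. A minimal sketch, assuming a generic system matrix A rather than the GATE-simulated geometry:

```python
import numpy as np

def osem(A, y, n_subsets=10, n_iter=2, eps=1e-12):
    """Ordered-subset EM (sketch): one full iteration cycles through every
    subset, so '2 iterations x 10 subsets' means 20 image updates in total."""
    n_det = A.shape[0]
    subsets = np.array_split(np.arange(n_det), n_subsets)
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            sens = As.T @ np.ones(len(idx)) + eps
            x *= (As.T @ (ys / (As @ x + eps))) / sens
    return x
```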

Multi-Level Segmentation of Infrared Images with Region of Interest Extraction

  • Yeom, Seokwon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.4
    • /
    • pp.246-253
    • /
    • 2016
  • Infrared (IR) imaging has been researched for various applications such as surveillance. IR radiation has the capability to detect the thermal characteristics of objects under low-light conditions. However, automatic segmentation for finding the object of interest is challenging, since the IR detector often provides images with low spatial and contrast resolution and no color or texture information. Another hindrance is that the image can be degraded by noise and clutter. This paper proposes multi-level segmentation for extracting regions of interest (ROIs) and objects of interest (OOIs) in the IR scene. Each level of the multi-level segmentation is composed of a k-means clustering algorithm, an expectation-maximization (EM) algorithm, and a decision process. The k-means clustering initializes the parameters of the Gaussian mixture model (GMM), and the EM algorithm estimates those parameters iteratively. During the multi-level segmentation, the area extracted at one level becomes the input to the next level of segmentation. Thus, the segmentation is performed consecutively, narrowing the area to be processed. The foreground objects are individually extracted from the final ROI windows. In the experiments, the effectiveness of the proposed method is demonstrated using several IR images in which human subjects are captured at a long distance. The average probability of error is shown to be lower than that obtained from other conventional methods such as the Gonzalez, Otsu, k-means, and EM methods.
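
One level of this segmentation, k-means initialization followed by EM refinement of a GMM, can be sketched with scikit-learn as below; the multi-level part (re-running the routine on the extracted ROI) is indicated only in a comment, and the decision process is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def segment_level(pixels, n_classes=2, seed=0):
    """One level of segmentation: k-means initialises the GMM means and
    the EM algorithm refines the mixture parameters."""
    pixels = pixels.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit(pixels)
    gmm = GaussianMixture(n_components=n_classes,
                          means_init=km.cluster_centers_,
                          random_state=seed).fit(pixels)
    return gmm.predict(pixels)

# Multi-level use (sketch): segment the whole IR frame, keep the brighter
# class as the ROI, then re-run segment_level on the ROI pixels only.
```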

The inference and estimation for latent discrete outcomes with a small sample

  • Choi, Hyung;Chung, Hwan
    • Communications for Statistical Applications and Methods
    • /
    • v.23 no.2
    • /
    • pp.131-146
    • /
    • 2016
  • In research on behavioral studies, significant attention has been paid to stage-sequential processes for longitudinal data. Latent class profile analysis (LCPA) is a useful method for studying sequential patterns of behavioral development through a two-step identification process: identifying a small number of latent classes at each measurement occasion, and then identifying two or more homogeneous subgroups in which individuals exhibit a similar sequence of latent class membership over time. Maximum likelihood (ML) estimates for LCPA are easily obtained by the expectation-maximization (EM) algorithm, and Bayesian inference can be implemented via Markov chain Monte Carlo (MCMC). However, unusual properties of the LCPA likelihood can cause difficulties in ML and Bayesian inference, as well as in estimation with small samples. This article describes and addresses erratic problems that arise with conventional ML and Bayesian estimates for LCPA with small samples. We argue that these problems can be alleviated with a small amount of prior input. This study evaluates the performance of likelihood- and MCMC-based estimates with the proposed prior in drawing inference over repeated sampling. Our simulation shows that estimates from the proposed methods perform better than those from the conventional ML and Bayesian methods.
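
A simplified illustration of adding a small amount of prior input: EM for a plain latent class model on binary items, with Dirichlet/Beta-type pseudo-counts added in the M-step so that small-sample estimates stay away from the boundary. LCPA itself adds a class-profile (sequence) layer that is not reproduced here; the data and prior weight are assumptions.

```python
import numpy as np

def lca_em_map(Y, n_classes=2, prior=1.0, n_iter=200, seed=0):
    """EM for a latent class model on binary items with a weak
    pseudo-count prior `prior` stabilising small-sample estimates.

    Y : (n_subjects, n_items) binary response matrix.
    """
    rng = np.random.default_rng(seed)
    Y = np.asarray(Y, dtype=float)
    n, m = Y.shape
    pi = np.full(n_classes, 1.0 / n_classes)          # class prevalences
    rho = rng.uniform(0.3, 0.7, size=(n_classes, m))  # item-response probs

    for _ in range(n_iter):
        # E-step: posterior class membership for each subject
        ll = Y @ np.log(rho).T + (1 - Y) @ np.log(1 - rho).T   # (n, K)
        logpost = np.log(pi) + ll
        logpost -= logpost.max(axis=1, keepdims=True)
        post = np.exp(logpost)
        post /= post.sum(axis=1, keepdims=True)

        # M-step with pseudo-counts (the "small amount of prior input")
        nk = post.sum(axis=0)
        pi = (nk + prior) / (n + n_classes * prior)
        rho = (post.T @ Y + prior) / (nk[:, None] + 2 * prior)
    return pi, rho, post
```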

Reliability Modeling and Analysis for a Unit with Multiple Causes of Failure (다수의 고장 원인을 갖는 기기의 신뢰성 모형화 및 분석)

  • Baek, Sang-Yeop;Lim, Tae-Jin;Lie, Chang-Hoon
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.21 no.4
    • /
    • pp.609-628
    • /
    • 1995
  • This paper presents a reliability model and a data-analytic procedure for a repairable unit subject to failures due to multiple non-identifiable causes. We regard a failure cause as a state and assume the life distribution for each cause to be exponential. We then represent the dependency among the causes by a Markov switching model (MSM) and estimate the transition probabilities and failure rates by the maximum likelihood (ML) method. The failure data are incomplete because the causes of failure are masked. We propose a specific version of the EM (expectation-maximization) algorithm for finding the maximum likelihood estimator (MLE) in this situation. We also develop statistical procedures for determining the number of significant states and for testing independence between state transitions. Our model requires only the successive failure times of a unit to perform the statistical analysis. It works well even when the causes of failure are fully masked, which overcomes the major deficiency of competing risk models. It does not require the assumption of stationarity or independence, which is essential in mixture models. The stationary probabilities of the states can be easily calculated from the transition probabilities estimated in our model, so it covers mixture models in general. Simulation results show that the estimation is consistent and that its accuracy increases with the difference between the failure rates and with the frequency of transitions among the states.
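
Ignoring the Markov dependency between states (i.e. the i.i.d. mixture special case that the paper generalizes), EM for exponential failure times with fully masked causes can be sketched as follows; the transition-probability estimation of the MSM is omitted, and the data are assumed to be successive times between failures.

```python
import numpy as np

def exp_mixture_em(times, n_causes=2, n_iter=200, seed=0):
    """EM for a mixture of exponential life distributions with fully
    masked causes: each observed time-between-failures may come from any
    of `n_causes` causes with unknown rates and mixing weights.
    (The paper's MSM additionally estimates transition probabilities.)
    """
    rng = np.random.default_rng(seed)
    t = np.asarray(times, dtype=float)
    w = np.full(n_causes, 1.0 / n_causes)
    lam = rng.uniform(0.5, 1.5, n_causes) / t.mean()  # initial failure rates

    for _ in range(n_iter):
        # E-step: probability that each failure was produced by each cause
        dens = w * lam * np.exp(-np.outer(t, lam))     # (n, K)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted ML updates
        nk = r.sum(axis=0)
        w = nk / len(t)
        lam = nk / (r * t[:, None]).sum(axis=0)
    return w, lam, r
```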

Experimental study of noise level optimization in brain single-photon emission computed tomography images using non-local means approach with various reconstruction methods

  • Seong-Hyeon Kang;Seungwan Lee;Youngjin Lee
    • Nuclear Engineering and Technology
    • /
    • v.55 no.5
    • /
    • pp.1527-1532
    • /
    • 2023
  • The noise reduction algorithm based on the non-local means (NLM) approach is very efficient in nuclear medicine imaging. In this study, the applicability of the NLM noise reduction algorithm to single-photon emission computed tomography (SPECT) images of a brain phantom is investigated, and the algorithm is optimized by changing the smoothing factor for various reconstruction methods. Brain phantom images were reconstructed using filtered back projection (FBP) and ordered subset expectation maximization (OSEM). A smoothing factor of 0.020 yielded the optimal coefficient of variation (COV) and contrast-to-noise ratio (CNR) for both the FBP and OSEM reconstruction methods. We confirmed that the FBP- and OSEM-based SPECT images processed with the algorithm at the optimal smoothing factor improved the COV and CNR by 66.94% and 8.00% on average, respectively, compared with the original images. In conclusion, an optimized smoothing factor was derived for the NLM-based algorithm in brain SPECT images and may be applicable to various nuclear medicine imaging techniques in the future.
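
A minimal sketch of such an evaluation loop, assuming scikit-image's NLM implementation with its h parameter playing the role of the smoothing factor, and hypothetical rectangular ROIs for the COV and CNR; the phantom data and ROI positions are assumptions, not the paper's protocol.

```python
import numpy as np
from skimage.restoration import denoise_nl_means

def evaluate_nlm(image, signal_roi, background_roi, h=0.020):
    """Apply NLM denoising with smoothing factor h and report COV and CNR
    from rectangular ROIs given as (row_slice, col_slice) tuples."""
    den = denoise_nl_means(image.astype(float), patch_size=5,
                           patch_distance=6, h=h, fast_mode=True)
    sig = den[signal_roi]
    bkg = den[background_roi]
    cov = bkg.std() / bkg.mean()
    cnr = abs(sig.mean() - bkg.mean()) / bkg.std()
    return cov, cnr

# e.g. evaluate_nlm(slice_img, (slice(40, 60), slice(40, 60)),
#                              (slice(5, 25), slice(5, 25)), h=0.020)
```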