• Title/Summary/Keyword: Markov chain Monte Carlo


Enhancing the radar-based mean areal precipitation forecasts to improve urban flood predictions and uncertainty quantification

  • Nguyen, Duc Hai;Kwon, Hyun-Han;Yoon, Seong-Sim;Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.123-123
    • /
    • 2020
  • The present study aims at correcting radar-based mean areal precipitation forecasts to improve urban flood predictions, together with an uncertainty analysis of the water levels contributed at each stage of the process. To this end, a long short-term memory (LSTM) network is used to reproduce three-hour mean areal precipitation (MAP) forecasts from the quantitative precipitation forecasts (QPFs) of the McGill Algorithm for Precipitation nowcasting by Lagrangian Extrapolation (MAPLE). The Gangnam urban catchment in Seoul, South Korea, was selected as the case study. A database was established from 24 heavy rainfall events, 22 grid points of the MAPLE system, and the observed MAP values estimated from five ground rain gauges of the KMA Automatic Weather System. The corrected MAP forecasts were input into the developed coupled 1D/2D model to predict water levels and the associated inundation areas. The results indicate the viability of the proposed framework for generating three-hour MAP forecasts and urban flooding predictions. To analyze the uncertainty contributions of the sources involved in the process, Bayesian Markov Chain Monte Carlo (MCMC) with the delayed rejection and adaptive Metropolis (DRAM) algorithm is applied. The uncertainty contributions of the stages, such as the QPE input, the QPF MAP source, the LSTM-corrected source, the MAP input, and the coupled model, are discussed.

  • PDF
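
The abstract above mentions the delayed rejection and adaptive Metropolis (DRAM) algorithm. A minimal sketch of just the adaptive-Metropolis half of that idea (proposal covariance tuned to the chain history, as in Haario et al.) is shown below on a toy two-dimensional Gaussian target; the target, step counts, and scaling constants are illustrative assumptions, not the paper's actual posterior, and delayed rejection is omitted for brevity.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=5000, adapt_start=500, seed=0):
    """Random-walk Metropolis whose proposal covariance adapts to the chain.

    Simplified illustration of the adaptive part of DRAM: after a warm-up
    period, the proposal covariance tracks the empirical covariance of the
    samples drawn so far (scaled by 2.38^2/d, the usual heuristic).
    """
    rng = np.random.default_rng(seed)
    d = len(x0)
    chain = np.empty((n_iter, d))
    x, lp = np.asarray(x0, float), log_post(x0)
    cov = np.eye(d)  # initial proposal covariance
    for i in range(n_iter):
        if i >= adapt_start:
            # Adapt to the history; small jitter keeps cov positive definite.
            cov = 2.38**2 / d * np.cov(chain[:i].T) + 1e-8 * np.eye(d)
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy stand-in target: a 2-D Gaussian posterior with mean (1, -1).
log_post = lambda t: -0.5 * np.sum((np.asarray(t) - [1.0, -1.0]) ** 2)
chain = adaptive_metropolis(log_post, x0=[0.0, 0.0])
```

The second half of the chain should concentrate around the toy target's mean; in a real application the adaptation schedule and starting covariance would need problem-specific tuning.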

Gas dynamics and star formation in dwarf galaxies: the case of DDO 210

  • Oh, Se-Heon;Zheng, Yun;Wang, Jing
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.2
    • /
    • pp.75.4-75.4
    • /
    • 2019
  • We present a quantitative analysis of the relationship between the gas dynamics and star formation history of DDO 210, an irregular dwarf galaxy in the local Universe. We perform profile analysis of a high-resolution neutral hydrogen (HI) data cube of the galaxy taken as part of the large Very Large Array (VLA) survey LITTLE THINGS, using a newly developed algorithm based on a Bayesian Markov Chain Monte Carlo (MCMC) technique. The complex HI structure and kinematics of the galaxy are decomposed quantitatively into multiple kinematic components: 1) bulk motions, which are most likely to follow the underlying circular rotation of the disk; 2) non-circular motions deviating from the bulk motions; and 3) kinematically cold and warm components with narrower and wider velocity dispersions, respectively. The decomposed kinematic components are then spatially correlated with the distribution of stellar populations obtained from the color-magnitude diagram (CMD) fitting method. The cold and warm gas components show negative and positive correlations, respectively, between their velocity dispersions and the surface star formation rates of the populations with ages of <40 Myr and 100~400 Myr. The cold gas is most likely associated with the young stellar populations, and the stellar feedback of these young populations could in turn influence the warm gas. The age difference between the populations showing the correlations indicates the time delay of the stellar feedback.

  • PDF

A novel Metropolis-within-Gibbs sampler for Bayesian model updating using modal data based on dynamic reduction

  • Ayan Das;Raj Purohit Kiran;Sahil Bansal
    • Structural Engineering and Mechanics
    • /
    • v.87 no.1
    • /
    • pp.1-18
    • /
    • 2023
  • The paper presents a Bayesian finite element (FE) model updating methodology utilizing modal data. The dynamic condensation technique is adopted to reduce the full system model to a smaller version in which the degrees of freedom (DOFs) correspond to the observed DOFs, which facilitates the model updating procedure without any mode-matching. The present work considers both the most probable values (MPVs) and the covariance matrix of the modal parameters as the modal data. Moreover, modal data identified from multiple setups are considered in the updating procedure, keeping in view the realistic scenario in which a limited number of sensors cannot measure the response of all the DOFs of interest in a large structure. A relationship is established between the modal data and the structural parameters, based on the eigensystem equation, through the introduction of additional uncertain parameters in the form of modal frequencies and partial mode shapes. A novel sampling strategy, the Metropolis-within-Gibbs (MWG) sampler, is proposed to sample from the posterior probability density function (PDF). The effectiveness of the proposed approach is demonstrated on both simulated and experimental examples.
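
The Metropolis-within-Gibbs idea named in this abstract — cycle over blocks of parameters and update each with a Metropolis step while holding the rest fixed — can be sketched generically. The toy correlated-Gaussian target, step size, and iteration count below are illustrative assumptions; the paper's actual sampler over structural and modal parameters is far more structured.

```python
import numpy as np

def metropolis_within_gibbs(log_post, x0, step=0.5, n_iter=4000, seed=1):
    """Metropolis-within-Gibbs: one 1-D random-walk Metropolis update per
    coordinate inside each Gibbs scan, so no joint proposal is needed."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float).copy()
    d = len(x)
    chain = np.empty((n_iter, d))
    lp = log_post(x)
    for i in range(n_iter):
        for j in range(d):  # Gibbs scan over coordinates
            prop = x.copy()
            prop[j] += step * rng.normal()  # perturb only coordinate j
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy posterior: zero-mean bivariate Gaussian with correlation 0.8.
prec = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_post = lambda t: -0.5 * t @ prec @ t
chain = metropolis_within_gibbs(log_post, x0=np.zeros(2))
```

Coordinate-wise updates avoid constructing a full joint proposal, at the cost of slower mixing when the target is strongly correlated, which is visible here in the sample correlation of the two chain columns.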

Model-independent Constraints on Type Ia Supernova Light-curve Hyperparameters and Reconstructions of the Expansion History of the Universe

  • Koo, Hanwool;Shafieloo, Arman;Keeley, Ryan E.;L'Huillier, Benjamin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.45 no.1
    • /
    • pp.48.4-49
    • /
    • 2020
  • We reconstruct the expansion history of the universe using type Ia supernovae (SN Ia) in a manner independent of any cosmological model assumptions. To do so, we implement a nonparametric iterative smoothing method on the Joint Light-curve Analysis (JLA) data while exploring the SN Ia light-curve hyperparameter space by Markov Chain Monte Carlo (MCMC) sampling. We test how the posteriors of these hyperparameters depend on cosmology, i.e., whether using different dark energy models or reconstructions shifts these posteriors. Our constraints on the SN Ia light-curve hyperparameters from this model-independent analysis are very consistent with the constraints obtained using different parameterizations of the dark energy equation of state, namely the flat ΛCDM cosmology, the Chevallier-Polarski-Linder model, and the Phenomenologically Emergent Dark Energy (PEDE) model. This implies that the distance moduli constructed from the JLA data are largely independent of the cosmological model. We also studied the possibility that the light-curve parameters evolve with redshift, and our results are consistent with no evolution. The reconstructed expansion history of the universe and the dark energy properties are likewise in good agreement with the expectations of the standard ΛCDM model. However, our results also indicate that the data still allow considerable flexibility in the expansion history of the universe. This work is published in ApJ.

  • PDF
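
The light-curve hyperparameter exploration mentioned above can be illustrated with a deliberately simplified setup: a SALT2-style standardization mu = mB - M + alpha*x1 - beta*c sampled by random-walk Metropolis at fixed distance moduli. All data here are synthetic and the fixed fiducial moduli, step sizes, and "true" values are assumptions for the sketch; the actual analysis uses the JLA compilation with a full covariance treatment.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in data for SALT2-style standardization.
n = 300
x1 = rng.normal(0.0, 1.0, n)              # stretch
c = rng.normal(0.0, 0.1, n)               # color
alpha_t, beta_t, M_t = 0.14, 3.1, -19.1   # values used to simulate
mu_fid = np.full(n, 38.0)                 # fixed fiducial distance moduli
mB = mu_fid + M_t - alpha_t * x1 + beta_t * c + rng.normal(0.0, 0.1, n)

def log_post(th):
    """Gaussian log-likelihood for hyperparameters (alpha, beta, M)
    at fixed distance moduli, with flat priors."""
    a, b, M = th
    resid = mB - (mu_fid + M - a * x1 + b * c)
    return -0.5 * np.sum((resid / 0.1) ** 2)

# Random-walk Metropolis over the light-curve hyperparameters.
th = np.array([0.1, 3.0, -19.0])
lp = log_post(th)
samples = []
for _ in range(6000):
    prop = th + rng.normal(0.0, [0.005, 0.05, 0.005])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        th, lp = prop, lp_prop
    samples.append(th.copy())
samples = np.array(samples[3000:])  # discard burn-in
```

With the distance moduli held fixed, the hyperparameter posteriors should recover the simulated values, which mirrors the paper's point that these posteriors are largely insensitive to the assumed expansion history.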

Bayesian model update for damage detection of a steel plate girder bridge

  • Xin Zhou;Feng-Liang Zhang;Yoshinao Goi;Chul-Woo Kim
    • Smart Structures and Systems
    • /
    • v.31 no.1
    • /
    • pp.29-43
    • /
    • 2023
  • This study investigates the possibility of damage detection of a real bridge by means of a modal-parameter-based finite element (FE) model update. Field moving-vehicle experiments were conducted on an actual steel plate girder bridge, and cracks were introduced to simulate damage states. A fast Bayesian FFT method was employed to identify the modal parameters and quantify their uncertainties; these modal parameters were then used in the Bayesian model update. Material properties and boundary conditions were treated as uncertain parameters and updated in the model update process. Observations showed that, although some differences existed among the results obtained from different model classes, the discrepancy between the modal parameters of the FE model and those obtained experimentally was reduced after the model update, and the updated parameters in the numerical model were indeed affected by the damage. The importance of boundary conditions in the model updating process is also observed. The study assesses the capability of the MCMC model update method for application to an actual bridge structure and notes the limitation of FE model updating for damage detection of bridges using only modal parameters.

Uncertainty Assessment of Single Event Rainfall-Runoff Model Using Bayesian Model (Bayesian 모형을 이용한 단일사상 강우-유출 모형의 불확실성 분석)

  • Kwon, Hyun-Han;Kim, Jang-Gyeong;Lee, Jong-Seok;Na, Bong-Kil
    • Journal of Korea Water Resources Association
    • /
    • v.45 no.5
    • /
    • pp.505-516
    • /
    • 2012
  • This study applies HEC-1, a hydrologic simulation model developed by the Hydrologic Engineering Center, to the Daecheong dam watershed for modeling hourly inflows to the dam. Although HEC-1 provides an automatic optimization technique for some of the parameters, the built-in optimization model is not sufficient for estimating reliable parameters; in particular, it often fails when a large number of parameters exist. The main objective of this study is therefore to develop a Bayesian Markov Chain Monte Carlo simulation based HEC-1 model (BHEC-1). The Clark IUH method for transforming precipitation excess to runoff and the Soil Conservation Service runoff curve number method for abstractions were used in the Bayesian Monte Carlo simulation. Simulating runoff at the Daecheong station with the HEC-1 model under the Bayesian optimization scheme yields posterior probability distributions of the hydrograph, thus providing the uncertainties in the rainfall-runoff process. The proposed model showed a powerful performance in estimating model parameters and deriving full uncertainties, so that it can be applied to various hydrologic problems such as frequency curve derivation, dam risk analysis, and climate change studies.

Non-Simultaneous Sampling Deactivation during the Parameter Approximation of a Topic Model

  • Jeong, Young-Seob;Jin, Sou-Young;Choi, Ho-Jin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.1
    • /
    • pp.81-98
    • /
    • 2013
  • Since Probabilistic Latent Semantic Analysis (PLSA) and Latent Dirichlet Allocation (LDA) were introduced, many revised or extended topic models have appeared. Due to the intractable likelihood of these models, training any topic model requires an approximation algorithm such as variational approximation, Laplace approximation, or Markov chain Monte Carlo (MCMC). Although these approximation algorithms perform well, training a topic model is still computationally expensive given the large amount of data involved. In this paper, we propose a new method, called non-simultaneous sampling deactivation, for efficient approximation of the parameters of a topic model. Whereas in traditional approximation algorithms each random variable is sampled with a single predefined burn-in period, our method is based on the observation that the random-variable nodes in a topic model all have different convergence periods. During the iterative approximation process, the proposed method allows each random-variable node to be terminated, or deactivated, once it has converged. Therefore, compared to the traditional approach in which every node is usually deactivated concurrently, the proposed method achieves inference efficiency in both time and memory. We do not propose a new approximation algorithm, but a new process applicable to existing approximation algorithms. Through experiments, we show the time and memory efficiency of the method and discuss the tradeoff between the efficiency of the approximation process and parameter consistency.
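
The deactivation idea described above — stop resampling a variable once its own chain has converged, rather than running every variable for the same burn-in — can be sketched on a toy Gibbs sampler. The bivariate-Gaussian target and the running-mean drift test used as a convergence check here are illustrative assumptions, not the paper's topic-model setting or its actual diagnostic.

```python
import numpy as np

def gibbs_with_deactivation(rho=0.8, n_iter=3000, window=200, tol=0.02, seed=3):
    """Gibbs sampler for a bivariate Gaussian in which each coordinate is
    deactivated (frozen at its current value) once the drift between two
    consecutive window means of its chain falls below `tol`."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    chain = np.empty((n_iter, 2))
    active = [True, True]
    sd = np.sqrt(1.0 - rho**2)  # conditional std of each coordinate
    for i in range(n_iter):
        for j in range(2):
            if active[j]:  # only resample coordinates that are still active
                x[j] = rng.normal(rho * x[1 - j], sd)
        chain[i] = x
        if i >= 2 * window:
            for j in range(2):
                if active[j]:
                    recent = chain[i - window:i, j].mean()
                    older = chain[i - 2 * window:i - window, j].mean()
                    if abs(recent - older) < tol:
                        active[j] = False  # deactivate: stop resampling
    return chain, active

chain, active = gibbs_with_deactivation()
```

Once a coordinate is deactivated, its conditional distribution no longer changes, so the remaining active coordinates typically converge and deactivate soon after — which is the source of the time and memory savings the paper reports, at the cost of the consistency tradeoff it discusses.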

Identifying Copy Number Variants under Selection in Geographically Structured Populations Based on F-statistics

  • Song, Hae-Hiang;Hu, Hae-Jin;Seok, In-Hae;Chung, Yeun-Jun
    • Genomics & Informatics
    • /
    • v.10 no.2
    • /
    • pp.81-87
    • /
    • 2012
  • Large-scale copy number variants (CNVs) in the human genome provide the raw material for delineating population differences, as natural selection may have affected at least some of the CNVs discovered thus far. Although the examination of relatively large numbers of specific ethnic groups has recently begun with regard to inter-ethnic differences in CNVs, identifying and understanding particular instances of natural selection have not been performed. The traditional $F_{ST}$ measure, obtained from differences in allele frequencies between populations, has been used to identify CNV loci subject to geographically varying selection. Here, we review advances in, and the application of, multinomial-Dirichlet likelihood methods of inference for identifying genome regions that have been subject to natural selection using $F_{ST}$ estimates. The content presented is not new; however, this review clarifies how the methods can be applied to CNV data, which remains largely unexplored. A hierarchical Bayesian method, implemented via Markov Chain Monte Carlo, estimates locus-specific $F_{ST}$ and can identify outlying CNV loci with large values of $F_{ST}$. By applying this Bayesian method to publicly available CNV data, we identified CNV loci that show signals of natural selection, which may help elucidate the genetic basis of human disease and diversity.

Joint analysis of binary and continuous data using skewed logit model in developmental toxicity studies (발달 독성학에서 비대칭 로짓 모형을 사용한 이진수 자료와 연속형 자료에 대한 결합분석)

  • Kim, Yeong-hwa;Hwang, Beom Seuk
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.2
    • /
    • pp.123-136
    • /
    • 2020
  • It is common to encounter correlated multiple outcomes measured on the same subject in various research fields. In developmental toxicity studies, the presence of malformed pups and fetal weight are measured on pregnant dams exposed to different levels of a toxic substance. Joint analysis of such two outcomes can yield more efficient inferences than separate models for each outcome. Most methods for joint modeling assume normally distributed random effects. However, in developmental toxicity studies, the response distributions may change irregularly in location and shape as the level of the toxic substance changes, which may not be captured by a normal random-effects model. Motivated by applications in developmental toxicity studies, we propose a Bayesian joint model for binary and continuous outcomes. In our model, we incorporate a skewed logit model for the binary outcome to allow the response distributions to take both symmetric and asymmetric shapes flexibly across toxic levels. We apply the proposed method to data from a developmental toxicity study of diethylhexyl phthalate.

Model-Based Survival Estimates of Female Breast Cancer Data

  • Khan, Hafiz Mohammad Rafiqullah;Saxena, Anshul;Gabbidon, Kemesha;Rana, Sagar;Ahmed, Nasar Uddin
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.6
    • /
    • pp.2893-2900
    • /
    • 2014
  • Background: Statistical methods are very important for precisely measuring breast cancer patient survival times for healthcare management. Previous studies used basic statistics to measure survival times without incorporating statistical modeling strategies. The objective of this study was to develop a data-based statistical probability model from female breast cancer patients' survival times using the Bayesian approach to make predictive inferences about future survival times. Materials and Methods: A random sample of 500 female patients was selected from the Surveillance, Epidemiology, and End Results cancer registry database. The standard model-building criteria were used to assess goodness of fit. The Bayesian approach was used to obtain predictive survival times from the data-based Exponentiated Exponential Model, and the Markov Chain Monte Carlo method was used to obtain summary results for predictive inference. Results: The highest number of female breast cancer patients was found in California and the lowest in New Mexico. The majority of the patients were married. The mean (SD) age at diagnosis was 60.92 (14.92) years, and the mean (SD) survival time for female patients was 90.33 (83.10) months. The Exponentiated Exponential Model fit the female survival times better than the Exponentiated Weibull Model, and the Bayesian method was used to obtain predictive inference for future survival times. Conclusions: The findings with the proposed modeling strategy will assist healthcare researchers and providers to predict future survival estimates precisely, as the recent growing challenges of analyzing healthcare data have created new demand for model-based survival estimates. The application of the Bayesian approach will produce precise estimates of future survival times.
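
The workflow in the abstract above — fit the exponentiated exponential model by MCMC, then draw posterior predictive survival times — can be sketched on simulated data. The survival times, the flat priors on the log-parameters, and the plain random-walk Metropolis sampler below are all stand-in assumptions; the study fits 500 actual SEER registry records, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated survival times (months) via inverse-CDF sampling from the
# exponentiated exponential distribution F(x) = (1 - exp(-lam*x))**a.
a_true, lam_true = 1.5, 1.0 / 60.0
u = rng.uniform(size=200)
times = -np.log(1.0 - u ** (1.0 / a_true)) / lam_true

def loglik(a, lam, x):
    """Log-likelihood of f(x) = a*lam*exp(-lam*x)*(1 - exp(-lam*x))**(a-1)."""
    if a <= 0 or lam <= 0:
        return -np.inf
    return np.sum(np.log(a) + np.log(lam) - lam * x
                  + (a - 1) * np.log1p(-np.exp(-lam * x)))

# Random-walk Metropolis on (log a, log lam) with flat priors.
theta = np.array([0.0, np.log(1.0 / 60.0)])
lp = loglik(np.exp(theta[0]), np.exp(theta[1]), times)
draws = []
for _ in range(4000):
    prop = theta + 0.1 * rng.normal(size=2)
    lp_prop = loglik(np.exp(prop[0]), np.exp(prop[1]), times)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    draws.append(np.exp(theta))
draws = np.array(draws[2000:])  # discard burn-in; columns are (a, lam)

# Posterior predictive survival times: one new draw per posterior sample.
u_new = rng.uniform(size=len(draws))
pred = -np.log(1.0 - u_new ** (1.0 / draws[:, 0])) / draws[:, 1]
```

Summaries of `pred` (mean, quantiles) are the model-based predictive survival estimates the abstract refers to; they propagate the parameter uncertainty captured by the MCMC draws rather than relying on point estimates alone.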