• Title/Summary/Keyword: Uncertainty quantification

Search results: 164

Development of an uncertainty quantification approach with reduced computational cost for seismic fragility assessment of cable-stayed bridges

  • Akhoondzade-Noghabi, Vahid;Bargi, Khosrow
    • Earthquakes and Structures
    • /
    • v.23 no.4
    • /
    • pp.385-401
    • /
    • 2022
  • Uncertainty quantification is the central challenge in seismic fragility assessment of structures. Increasing the precision of the quantification method yields more reliable results but also raises the computational cost, which is especially undesirable in applications such as reliability-based design optimization that involve numerous probabilistic seismic analyses. Accordingly, the authors develop and validate an approach with reduced computational cost for seismic fragility assessment. This requires appropriate methods for treating, separately, the two categories of uncertainty: those related to the ground motions and those related to the structural characteristics. Cable-stayed bridges were selected specifically because their complexity, and the correspondingly time-consuming seismic analyses, make reducing the computations of their fragility analyses worth studying. To this end, the fragility of three case studies is assessed with both existing and proposed approaches, and their efficiency in estimating seismic responses is compared. Statistical validation is conducted on the seismic demand and fragility obtained from these approaches, and a comprehensive interpretation provides sufficient arguments for the acceptable errors of the proposed approach. The study concludes that combining the Capacity Spectrum Method (CSM) and Uniform Design Sampling (UDS) in the advanced proposed forms provides adequate accuracy in seismic fragility estimation at a significantly reduced computational cost.
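
The paper's CSM/UDS procedure is not reproduced in the abstract, but the underlying fragility-fitting step can be illustrated. The sketch below fits a lognormal fragility curve to synthetic intensity-demand pairs via a power-law demand model; the demand model, capacity median `dc`, and dispersions are invented assumptions, not values from the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "cloud analysis" results: spectral acceleration IM (g) and peak
# drift demand from nonlinear dynamic analyses; the power-law demand model
# and all numbers below are assumptions, not values from the paper.
im = rng.uniform(0.1, 2.0, 200)
demand = 0.01 * im**1.2 * rng.lognormal(0.0, 0.3, 200)

# Probabilistic seismic demand model: ln D = ln a + b ln IM + error
X = np.column_stack([np.ones_like(im), np.log(im)])
coef, *_ = np.linalg.lstsq(X, np.log(demand), rcond=None)
ln_a, b = coef
beta_d = (np.log(demand) - X @ coef).std(ddof=2)   # demand dispersion

# Lognormal fragility: P(D >= capacity | IM); capacity has median dc,
# dispersion beta_c, assumed independent of the demand
dc, beta_c = 0.02, 0.25
beta_tot = math.hypot(beta_d, beta_c)

def fragility(im_val):
    median_demand = math.exp(ln_a) * im_val**b
    z = math.log(median_demand / dc) / beta_tot
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
```

Evaluating `fragility` over the IM range traces the fragility curve; the paper's CSM/UDS combination is aimed at reducing the number of analyses needed to generate such clouds.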

Advanced Computational Dissipative Structural Acoustics and Fluid-Structure Interaction in Low-and Medium-Frequency Domains. Reduced-Order Models and Uncertainty Quantification

  • Ohayon, R.;Soize, C.
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.13 no.2
    • /
    • pp.127-153
    • /
    • 2012
  • This paper presents an advanced computational method, including uncertainty quantification, for predicting the frequency-domain responses of general linear dissipative structural-acoustic and fluid-structure systems in the low- and medium-frequency domains. The system considered consists of a deformable dissipative structure coupled with an internal dissipative acoustic fluid, includes wall acoustic impedances, and is surrounded by an infinite acoustic fluid. It is subjected to given internal and external acoustic sources and to prescribed mechanical forces. An efficient reduced-order computational model is constructed using a finite element discretization for the structure and the internal acoustic fluid; the external acoustic fluid is treated with an appropriate boundary element method in the frequency domain. All the modeling aspects required for analysis of the medium-frequency domain are introduced, namely a viscoelastic behavior for the structure, an appropriate dissipative model for the internal acoustic fluid including wall acoustic impedance, and an uncertainty model covering, in particular, modeling errors. This advanced computational formulation, corresponding to new extensions of and complements to the state of the art, is well adapted to the development of a new generation of software, in particular for parallel computers.
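
The reduced-order modeling idea, projecting the full discretized operators onto a truncated modal basis and solving the small system in the frequency domain, can be sketched on a toy problem. The 20-DOF spring-mass chain, the damping model, and the excitation below are illustrative assumptions, not the paper's structural-acoustic formulation.

```python
import numpy as np

# Hypothetical 20-DOF spring-mass chain standing in for a discretized
# structure; k, m, c and the excitation are assumed values.
n = 20
k, m, c = 1.0e4, 1.0, 2.0
K = k * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
M = m * np.eye(n)
C = (c / k) * K                      # stiffness-proportional damping

# Reduction basis: lowest 8 undamped modes (np.linalg.eigh sorts ascending)
_, eigvecs = np.linalg.eigh(K / m)
Phi = eigvecs[:, :8]
Kr, Mr, Cr = Phi.T @ K @ Phi, Phi.T @ M @ Phi, Phi.T @ C @ Phi

# Frequency response at a low frequency: full model vs reduced-order model
f = np.zeros(n)
f[-1] = 1.0
w = 5.0                              # rad/s, below the first natural frequency
u_full = np.linalg.solve(K + 1j * w * C - w**2 * M, f)
q = np.linalg.solve(Kr + 1j * w * Cr - w**2 * Mr, Phi.T @ f)
u_red = Phi @ q
rel_err = np.linalg.norm(u_red - u_full) / np.linalg.norm(u_full)
```

The reduced solve involves an 8×8 system instead of a 20×20 one; in the paper's setting the payoff is far larger, since the full operators come from finite element and boundary element discretizations.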

Uncertainty quantification for structural health monitoring applications

  • Nasr, Dana E.;Slika, Wael G.;Saad, George A.
    • Smart Structures and Systems
    • /
    • v.22 no.4
    • /
    • pp.399-411
    • /
    • 2018
  • The difficulty in modeling complex nonlinear structures lies in significant sources of uncertainty, attributable mainly to sudden changes in the structure's behavior caused by regular aging or extreme events. Quantifying these uncertainties and representing them accurately within the mathematical framework of Structural Health Monitoring (SHM) is essential for system identification and damage detection. This study highlights the importance of uncertainty quantification in SHM frameworks and presents a comparative analysis of intrusive and non-intrusive techniques through two variants of the Kalman Filter (KF): the Ensemble Kalman Filter (EnKF) and the Polynomial Chaos Kalman Filter (PCKF). The comparison is based on a numerical example consisting of a four degrees-of-freedom (DOF) system with Bouc-Wen hysteretic behavior subjected to the El Centro earthquake excitation, and assesses each technique's ability to quantify the different sources of uncertainty and to approximate the system state and parameters accurately, relative to the true state, with the least computational burden. While both filters locate the damage in space and time and estimate the system responses and unknown parameters accurately, the computational cost of the PCKF is shown to be lower than that of the EnKF for a similar level of numerical accuracy.
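
The non-intrusive side of the comparison can be illustrated with a single EnKF analysis step using perturbed observations. The two-component state, the observation operator `H`, and the noise levels are hypothetical; the paper's example is a 4-DOF Bouc-Wen system, not this toy.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_state = 500, 2

# Hypothetical prior ensemble for a state x = [displacement, parameter]
x_true = np.array([1.0, 3.0])
ens = rng.normal([0.0, 2.0], [1.0, 1.0], size=(n_ens, n_state))

# Observation: y = H x + noise, observing only the first component
H = np.array([[1.0, 0.0]])
r = 0.1                                       # observation-error variance (assumed)
y_obs = H @ x_true + rng.normal(0, np.sqrt(r))

# EnKF analysis step with perturbed observations
X = ens - ens.mean(axis=0)
P = X.T @ X / (n_ens - 1)                     # sample covariance of the ensemble
S = H @ P @ H.T + r                           # innovation covariance
Kgain = P @ H.T @ np.linalg.inv(S)            # Kalman gain
y_pert = y_obs + rng.normal(0, np.sqrt(r), size=(n_ens, 1))
ens_post = ens + (y_pert - ens @ H.T) @ Kgain.T

post_mean = ens_post.mean(axis=0)
```

The analysis shrinks the spread of the observed component toward the measurement; a PCKF would instead propagate a polynomial chaos expansion of the state and update its coefficients.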

Measurement uncertainty for QC/QA applied to the chemical analysis (화학 분석 결과의 QA/QC를 위한 측정 불확도)

  • Woo, Jin-Chun;Oh, Sang-Hyub;Kim, Byoung-Moon;Bae, Hyun-Kil;Kim, Kwang-Sub;Kim, Young-Doo
    • Analytical Science and Technology
    • /
    • v.18 no.6
    • /
    • pp.475-482
    • /
    • 2005
  • Expressing the uncertainty of chemical analysis results is strongly recommended as demand grows for systematic quality assurance and control (QA/QC) under ISO 17025. From the QA/QC literature on chemical analysis, seven major common sources of uncertainty, which normally determine the quality of a chemical analysis, were selected: repeatability, drift, uncertainty in standards, linearity of calibration, homogeneity, stability of the sample, and matrix effect. Quantifying these sources by means of measurement uncertainty is proposed as a prerequisite step for QA/QC. Worked examples of modeling, combining, and expressing the standard uncertainty for the seven common sources are presented as a reference guide for QA/QC in chemical analysis.
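
The combination step the abstract describes, combining standard uncertainties from independent sources in quadrature and applying a coverage factor in the GUM style, can be sketched as follows; the numerical values assigned to the seven sources are illustrative, not the paper's.

```python
import math

# Hypothetical relative standard uncertainties (%) for the seven common
# sources named in the abstract; the numbers are assumptions for illustration.
sources = {
    "repeatability": 0.12,
    "drift": 0.05,
    "uncertainty_in_standards": 0.20,
    "calibration_linearity": 0.08,
    "homogeneity": 0.10,
    "sample_stability": 0.06,
    "matrix_effect": 0.15,
}

# GUM-style combination for independent sources: root sum of squares
u_combined = math.sqrt(sum(u**2 for u in sources.values()))
U_expanded = 2.0 * u_combined  # coverage factor k = 2 (~95 % confidence level)
```

The combined value also shows which sources dominate: here the standards and matrix-effect terms contribute most, so they would be the first targets for improvement.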

Uncertainty Analysis of Quantitative Radar Rainfall Estimation Using the Maximum Entropy (Maximum Entropy를 이용한 정량적 레이더 강우추정 불확실성 분석)

  • Lee, Jae-Kyoung
    • Atmosphere
    • /
    • v.25 no.3
    • /
    • pp.511-520
    • /
    • 2015
  • Existing studies on radar rainfall uncertainty have sought to reduce the uncertainty of individual stages, for example through bias correction, during the quantitative radar rainfall estimation process; they do not, however, provide a quantitative comparison of the uncertainties across all stages. This study therefore proposes an approach that quantifies the uncertainty at each stage of the quantitative radar rainfall estimation process. First, the approach reports the initial and final uncertainties, the increase or decrease in uncertainty, and the percentage of uncertainty at each stage, with Maximum Entropy (ME) applied to quantify the uncertainty over the entire process. Second, to quantify the uncertainty of radar rainfall estimation at each stage, two quality control algorithms, two rainfall estimation relations, and two bias correction techniques (as post-processing) were applied through all stages of the radar rainfall estimation. With the proposed approach, the final uncertainty at the bias correction stage was the smallest (ME = 3.81), while the uncertainty of the rainfall estimation stage was higher because an unsuitable relation was used: the ME of quality control was 4.28 (112.34%), that of rainfall estimation 4.53 (118.90%), and that of bias correction 3.81 (100%). The study also found that selecting the appropriate method at each stage would gradually reduce the uncertainty stage by stage. Finally, the uncertainty due to natural variability accounted for 93.70% of the final uncertainty. These results indicate that the new approach can contribute significantly to uncertainty estimation and help produce more accurate radar rainfall estimates.
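
The ME measure itself is straightforward to sketch: under mean and variance constraints the maximum-entropy distribution is Gaussian, so the entropy reduces to 0.5·ln(2πeσ²), and a stage that narrows the error distribution lowers the entropy. The error samples below are synthetic stand-ins for radar-gauge differences, not the study's data.

```python
import math
import random

random.seed(42)

# Synthetic radar-gauge rainfall errors (mm/h) before and after a
# bias-correction stage; values are assumptions for illustration.
err_raw = [random.gauss(2.0, 4.0) for _ in range(2000)]
err_corrected = [random.gauss(0.0, 2.5) for _ in range(2000)]

def gaussian_entropy(sample):
    """Maximum-entropy value under mean/variance constraints:
    H = 0.5 * ln(2 * pi * e * sigma^2) for the fitted normal."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return 0.5 * math.log(2 * math.pi * math.e * var)

H_raw = gaussian_entropy(err_raw)
H_corr = gaussian_entropy(err_corrected)
```

Comparing such entropies stage by stage is what lets the study express each stage's contribution as a percentage of the final uncertainty.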

Uncertainty quantification of PWR spent fuel due to nuclear data and modeling parameters

  • Ebiwonjumi, Bamidele;Kong, Chidong;Zhang, Peng;Cherezov, Alexey;Lee, Deokjung
    • Nuclear Engineering and Technology
    • /
    • v.53 no.3
    • /
    • pp.715-731
    • /
    • 2021
  • Uncertainties are calculated for pressurized water reactor (PWR) spent nuclear fuel (SNF) characteristics. The deterministic code STREAM is currently used as an SNF analysis tool to obtain the isotopic inventory, radioactivity, decay heat, and neutron and gamma source strengths. The SNF analysis capability of STREAM was recently validated, but an uncertainty analysis had yet to be conducted. To estimate the uncertainty due to nuclear data, STREAM is used to perturb the nuclear cross section (XS) and resonance integral (RI) libraries produced by NJOY99; the perturbation involves stochastic sampling of the ENDF/B-VII.1 covariance data. To estimate the uncertainty due to modeling parameters (fuel design and irradiation history), surrogate models are built based on polynomial chaos expansion (PCE), and variance-based sensitivity indices (Sobol' indices) are employed for global sensitivity analysis (GSA). The results indicate that the uncertainty of SNF characteristics due to modeling parameters is also very important and can contribute significantly relative to the uncertainty due to nuclear data. In addition, the surrogate model offers a computationally efficient approach, with significantly reduced computation time, to accurately evaluating the uncertainties of SNF integral characteristics.
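
The variance-based GSA step can be illustrated with a pick-freeze Sobol estimator on a toy response; the two-input model below stands in for the PCE surrogate of a spent-fuel characteristic and is purely an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x):
    """Toy stand-in for a surrogate of an SNF response: strong in x0,
    weak in x1. The real workflow fits a PCE to STREAM results instead."""
    return 3.0 * x[:, 0] + 0.5 * x[:, 1] ** 2

n = 20000
A = rng.uniform(0, 1, (n, 2))   # two independent sample matrices
B = rng.uniform(0, 1, (n, 2))
yA, yB = model(A), model(B)
var_y = np.concatenate([yA, yB]).var()

# First-order Sobol indices via the Saltelli pick-freeze estimator:
# replace column i of A with column i of B and difference the outputs
S1 = []
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S1.append(np.mean(yB * (model(ABi) - yA)) / var_y)
```

For this toy model, nearly all the output variance comes from the first input, which is the kind of ranking the paper uses to identify the dominant fuel design and irradiation-history parameters.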

Verification of Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE)

  • Khuwaileh, Bassam;Williams, Brian;Turinsky, Paul;Hartanto, Donny
    • Nuclear Engineering and Technology
    • /
    • v.51 no.4
    • /
    • pp.968-976
    • /
    • 2019
  • This paper presents a number of verification case studies for a recently developed sensitivity/uncertainty code package. The package, ROMUSE (Reduced Order Modeling based Uncertainty/Sensitivity Estimator), provides an analysis tool to be used in conjunction with reactor core simulators, in particular the Virtual Environment for Reactor Applications (VERA) core simulator. ROMUSE is written in C++ and is currently capable of performing various types of parameter perturbation and the associated sensitivity analysis, uncertainty quantification, surrogate model construction, and subspace analysis. The current version 2.0 can interface with the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) code, giving ROMUSE access to the algorithms implemented within DAKOTA, most importantly model calibration. The verification study is performed on two basic problems and two reactor physics models. The first problem verifies ROMUSE's single-physics gradient-based range-finding algorithm on an abstract quadratic model. The second is the Brusselator problem, a coupled problem representative of multi-physics problems, used to test the construction of surrogates via ROMUSE-DAKOTA. Finally, light water reactor pin cell and sodium-cooled fast reactor fuel assembly problems are simulated with SCALE 6.1 to test ROMUSE's uncertainty quantification and sensitivity analysis capabilities.

McCARD/MIG stochastic sampling calculations for nuclear cross section sensitivity and uncertainty analysis

  • Ho Jin Park
    • Nuclear Engineering and Technology
    • /
    • v.54 no.11
    • /
    • pp.4272-4279
    • /
    • 2022
  • In this study, a cross section stochastic sampling (S.S.) capability is implemented in both the McCARD continuous-energy Monte Carlo code and the MIG multiple-correlated data sampling code. Thirty-group cross section sets based on ENDF/B-VII.1 covariance data and 44-group cross section sets based on SCALE6 covariance data are sampled by the MIG code. Through various uncertainty quantification (UQ) benchmark calculations, the McCARD/MIG results are verified to be consistent with the McCARD stand-alone sensitivity/uncertainty (S/U) results and the XSUSA S.S. results. UQ analyses for the Three Mile Island Unit 1, Peach Bottom Unit 2, and Kozloduy-6 fuel pin problems are conducted to provide the uncertainties of keff and of microscopic and macroscopic cross sections with the McCARD/MIG code system. Moreover, the SNU S/U formulations for uncertainty propagation in Monte Carlo (MC) depletion analysis are validated through comparison with the McCARD/MIG S.S. results for the UAM Exercise I-1b burnup benchmark. It is therefore concluded that the SNU formulation based on the S/U method can accurately estimate uncertainty propagation in an MC depletion analysis.
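
The stochastic-sampling idea, drawing perturbed cross-section sets from evaluated covariance data and propagating each sample through the neutronics solution, can be sketched on a two-group toy model. The nominal cross sections and the diagonal relative covariance below are invented; real analyses use ENDF/B-VII.1 or SCALE6 covariance libraries and a Monte Carlo transport code, not a closed-form k-infinity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2-group one-region model: k_inf = sum(nu_sig_f) / sum(sig_a).
# Parameter vector: [nu_sig_f_1, nu_sig_f_2, sig_a_1, sig_a_2] (1/cm).
nominal = np.array([0.005, 0.10, 0.010, 0.11])
rel_cov = np.diag([0.02, 0.01, 0.03, 0.015]) ** 2   # assumed relative covariance

# Convert to absolute covariance and draw correlated samples
abs_cov = rel_cov * np.outer(nominal, nominal)
n_samples = 5000
samples = rng.multivariate_normal(nominal, abs_cov, n_samples)

def k_inf(xs):
    return (xs[:, 0] + xs[:, 1]) / (xs[:, 2] + xs[:, 3])

k = k_inf(samples)
k_mean, k_std = k.mean(), k.std(ddof=1)
rel_unc_pcm = 1e5 * k_std / k_mean   # relative keff uncertainty in pcm
```

The sample standard deviation of the propagated responses is the quantity the S.S. method reports; consistency with adjoint-based S/U results is what the benchmark comparisons in the paper check.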

Analyzing nuclear reactor simulation data and uncertainty with the group method of data handling

  • Radaideh, Majdi I.;Kozlowski, Tomasz
    • Nuclear Engineering and Technology
    • /
    • v.52 no.2
    • /
    • pp.287-295
    • /
    • 2020
  • The group method of data handling (GMDH) is considered one of the earliest deep learning methods, and deep learning has gained additional interest in today's applications due to its capability to handle complex, high-dimensional problems. In this study, multi-layer GMDH networks are used to perform uncertainty quantification (UQ) and sensitivity analysis (SA) of nuclear reactor simulations. GMDH serves as a surrogate/metamodel that replaces high-fidelity computer models with cheap-to-evaluate surrogates, which facilitates UQ and SA tasks (e.g., variance decomposition and uncertainty propagation). GMDH performance is validated through two UQ applications in reactor simulations: (1) a low-dimensional input space (two-phase flow in a reactor channel) and (2) a high-dimensional space (8-group homogenized cross-sections). In both applications, GMDH networks show very good performance, with small mean absolute and squared errors and high accuracy in capturing the target variance. GMDH is then used for UQ tasks such as variance decomposition through Sobol indices and GMDH-based uncertainty propagation with a large number of samples. GMDH performance is also compared to that of other surrogates, including Gaussian processes and polynomial chaos expansions: it is competitive with these methods on the low-dimensional problem and reliable on the high-dimensional problem.
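
A single GMDH layer is easy to sketch: fit an Ivakhnenko quadratic polynomial for every pair of inputs on a training split and keep the pairing that generalizes best on a validation split; multi-layer GMDH stacks such selections. The toy data below (three inputs, one of them a nuisance variable) is an assumption for illustration.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(5)

# Toy data: the response depends on x0 and x1; x2 is a nuisance input.
X = rng.uniform(-1, 1, (400, 3))
y = (1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1]
     + 0.8 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 400))

train, valid = slice(0, 300), slice(300, 400)

def poly_features(a, b):
    """Ivakhnenko polynomial terms: 1, a, b, a*b, a^2, b^2."""
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

# Fit every input pair on the training split, rank by validation MSE
best = None
for i, j in combinations(range(3), 2):
    F = poly_features(X[train, i], X[train, j])
    coef, *_ = np.linalg.lstsq(F, y[train], rcond=None)
    Fv = poly_features(X[valid, i], X[valid, j])
    mse = np.mean((Fv @ coef - y[valid]) ** 2)
    if best is None or mse < best[0]:
        best = (mse, (i, j), coef)

best_mse, best_pair, best_coef = best
```

The external validation split is what gives GMDH its self-organizing character: pairings involving the nuisance input are discarded automatically because they validate poorly.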

Enhancing the radar-based mean areal precipitation forecasts to improve urban flood predictions and uncertainty quantification

  • Nguyen, Duc Hai;Kwon, Hyun-Han;Yoon, Seong-Sim;Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.123-123
    • /
    • 2020
  • The present study aims to correct radar-based mean areal precipitation forecasts to improve urban flood predictions, and to analyze the uncertainty in water levels contributed at each stage of the process. A long short-term memory (LSTM) network is used to reproduce three-hour mean areal precipitation (MAP) forecasts from the quantitative precipitation forecasts (QPFs) of the McGill Algorithm for Precipitation nowcasting by Lagrangian Extrapolation (MAPLE). The Gangnam urban catchment in Seoul, South Korea, was selected as the case study. A database was established from 24 heavy rainfall events, 22 grid points of the MAPLE system, and observed MAP values estimated from five ground rain gauges of the KMA Automatic Weather System. The corrected MAP forecasts were fed into a coupled 1D/2D model to predict water levels and the corresponding inundation areas. The results indicate the viability of the proposed framework for generating three-hour MAP forecasts and urban flooding predictions. To analyze the uncertainty contributions of the sources in the process, Bayesian Markov Chain Monte Carlo (MCMC) with a delayed rejection and adaptive Metropolis algorithm is applied, and the uncertainty contributions of stages such as the QPE input, the LSTM-corrected QPF MAP source, the MAP input, and the coupled model are discussed.
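
The Bayesian MCMC step can be illustrated with a plain Metropolis sampler inferring a synthetic water-level mean (the study uses the more elaborate delayed rejection and adaptive Metropolis variant, which is not reproduced here); the data, flat prior, and proposal scale are all assumptions.

```python
import math
import random

random.seed(0)

# Synthetic "observed water levels" (m) around a known mean; with a flat
# prior the posterior mean should land near the sample mean, which makes
# the sketch easy to sanity-check. Values are not the study's data.
truth, sigma = 3.0, 0.5
data = [random.gauss(truth, sigma) for _ in range(50)]

def log_post(mu):
    # Gaussian likelihood with known sigma; flat prior on mu
    return -sum((d - mu) ** 2 for d in data) / (2 * sigma**2)

chain, mu = [], 0.0
lp = log_post(mu)
for _ in range(10000):
    prop = mu + random.gauss(0, 0.2)                 # random-walk proposal
    lp_prop = log_post(prop)
    if random.random() < math.exp(min(0.0, lp_prop - lp)):  # Metropolis rule
        mu, lp = prop, lp_prop
    chain.append(mu)

burned = chain[2000:]                                # discard burn-in
post_mean = sum(burned) / len(burned)
```

The spread of the retained chain is the posterior uncertainty; in the study, comparable posteriors are attributed to each stage of the forecasting chain.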
