• Title/Summary/Keyword: monte carlo methods


Probabilistic study on buildings with MTMD system in different seismic performance levels

  • Etedali, Sadegh
    • Structural Engineering and Mechanics
    • /
    • v.81 no.4
    • /
    • pp.429-441
    • /
    • 2022
  • A probabilistic assessment of seismic-excited buildings with a multiple-tuned-mass-damper (MTMD) system is carried out in the presence of uncertainties in the structural model, the MTMD system, and the stochastic model of the seismic excitations. A free-search optimization procedure for the individual mass, stiffness, and damping parameters of the MTMD system, based on the snap-drift cuckoo search (SDCS) optimization algorithm, is proposed for the optimal design of the MTMD system. Considering a 10-story structure in three cases equipped with a single tuned mass damper (STMD), 5 TMDs, and 10 TMDs, sensitivity analyses are carried out using Sobol' indices based on the Monte Carlo simulation (MCS) method. Considering different seismic performance levels, reliability analyses are performed using MCS and kriging-based MCS methods. The results show that the maximum structural responses are most affected by changes in the PGA and in the stiffness coefficients of the structural floors and TMDs. The results indicate that the kriging-based MCS method can estimate the failure probability accurately in less time than plain MCS. The results also show that the MTMD gives a significant reduction in the structural failure probability; this effect is remarkable at the life-safety and collapse-prevention performance levels. The maximum drift of the floors may be reduced for the nominal structural system by increasing the number of TMDs; however, the complexity of the MTMD model and its additional uncertainty sources can cause a slight increase in the failure probability of the structure.
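
The sensitivity step above rests on estimating first-order Sobol' indices by Monte Carlo sampling. A minimal sketch of that estimator (Saltelli's pick-and-freeze scheme) on a toy structural-response function is given below; the response function, input ranges, and variable names are illustrative assumptions, not the paper's 10-story MTMD model.

```python
import numpy as np

# Minimal sketch of first-order Sobol' indices estimated by Monte Carlo
# (pick-and-freeze scheme). The response function below is a toy stand-in,
# not the 10-story MTMD model from the paper; inputs loosely represent a
# peak ground acceleration (PGA), a floor stiffness, and a TMD stiffness.
rng = np.random.default_rng(0)

def response(x):
    pga, k_floor, k_tmd = x[:, 0], x[:, 1], x[:, 2]
    # Hypothetical "maximum drift" surrogate: grows with PGA, shrinks with stiffness.
    return pga**2 / (k_floor + 0.3 * k_tmd) + 0.05 * pga * k_tmd

n, d = 100_000, 3
A = rng.uniform([0.1, 0.5, 0.05], [1.0, 2.0, 0.5], size=(n, d))
B = rng.uniform([0.1, 0.5, 0.05], [1.0, 2.0, 0.5], size=(n, d))
fA, fB = response(A), response(B)
var = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(["PGA", "k_floor", "k_tmd"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                     # freeze all inputs except the i-th
    fABi = response(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var    # Saltelli (2010) first-order estimator
    print(f"S1[{name}] ≈ {S1:.3f}")
```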

Validation of OpenDrift-Based Drifter Trajectory Prediction Technique for Maritime Search and Rescue

  • Ji-Chang Kim;Dae-Hun Yu;Jung-eun Sim;Young-Tae Son;Ki-Young Bang;Sungwon Shin
    • Journal of Ocean Engineering and Technology
    • /
    • v.37 no.4
    • /
    • pp.145-157
    • /
    • 2023
  • Due to a recent increase in maritime activities in South Korea, the frequency of maritime distress is escalating, posing a significant threat to lives and property. The aim of this study was to validate a drift trajectory prediction technique to help mitigate the damage caused by maritime distress incidents. In this study, OpenDrift was verified using satellite drifter data from the Korea Hydrographic and Oceanographic Agency. OpenDrift is a Monte-Carlo-based Lagrangian trajectory modeling framework that allows leeway, an important factor in predicting the movement of floating marine objects, to be taken into account. The simulation results showed no significant differences in drift trajectory prediction performance when leeway was considered, according to four evaluation metrics (normalized cumulative Lagrangian separation, root mean squared error, mean absolute error, and Euclidean distance). However, leeway improved the performance in an analysis of location prediction conformance for maritime search and rescue operations. Therefore, the findings of this study suggest that it is important to consider leeway in drift trajectory prediction for effective maritime search and rescue operations. The results could support future research on drift trajectory prediction for various floating objects, including marine debris, satellite drifters, and sea ice.
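
The core idea of a Monte-Carlo Lagrangian drift ensemble with leeway can be sketched in a few lines of plain NumPy. The code below does not use the OpenDrift API; the constant current and wind fields, the 3% leeway fraction, and the diffusivity are synthetic assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of a Monte-Carlo Lagrangian drift ensemble with leeway,
# illustrating the idea behind OpenDrift-style simulations (this does NOT
# use the OpenDrift API). Currents, wind, the leeway coefficient, and the
# diffusivity are synthetic assumptions, not values from the paper.
rng = np.random.default_rng(1)

n_particles = 1000
dt, n_steps = 600.0, 144            # 10-min steps for 24 h
pos = np.zeros((n_particles, 2))    # (x, y) in metres, all released at origin

current = np.array([0.20, 0.05])    # m/s, constant surface current
wind = np.array([8.0, -3.0])        # m/s, constant 10-m wind
leeway_frac = 0.03                  # hypothetical downwind leeway fraction
diffusivity = 1.0                   # m^2/s horizontal eddy diffusivity

for _ in range(n_steps):
    advection = current + leeway_frac * wind                 # leeway adds a wind-driven slip
    noise = rng.normal(0.0, np.sqrt(2 * diffusivity * dt), size=pos.shape)
    pos += advection * dt + noise                             # random-walk diffusion term

center = pos.mean(axis=0) / 1000.0
spread = pos.std(axis=0) / 1000.0
print(f"ensemble centre ≈ ({center[0]:.1f}, {center[1]:.1f}) km, spread ≈ {spread} km")
```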

Modeling and simulation of VERA core physics benchmark using OpenMC code

  • Abdullah O. Albugami;Abdullah S. Alomari;Abdullah I. Almarshad
    • Nuclear Engineering and Technology
    • /
    • v.55 no.9
    • /
    • pp.3388-3400
    • /
    • 2023
  • Detailed analysis of the neutron pathways through matter inside the nuclear reactor core is greatly needed for safety and economic considerations. Owing to the constant development of high-performance computing technologies, neutronics analysis using computer codes has become more effective and efficient for performing sophisticated neutronics calculations. In this work, a commercial pressurized water reactor (PWR) presented in the Virtual Environment for Reactor Applications (VERA) Core Physics Benchmark is modeled and simulated using high-fidelity OpenMC calculations in terms of criticality and fuel pin power distribution. Various problems have been selected from the VERA benchmark, ranging from a simple two-dimensional (2D) pin-cell problem to a complex three-dimensional (3D) full-core problem. The code's reactor-physics capabilities are exercised to investigate the accuracy and performance of OpenMC against the VERA SCALE codes. The OpenMC results exhibit excellent agreement with the VERA results, with maximum Root Mean Square Error (RMSE) values of less than 0.04% and 1.3% for the criticality eigenvalues and pin power distributions, respectively. This demonstrates the successful use of the OpenMC code as a simulation tool for whole-core analysis. Further work is ongoing on the accuracy of OpenMC simulations for different fuel types and burnup levels and on the analysis of transient behavior with coupled thermal-hydraulic feedback.
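
For readers who want to reproduce a simple eigenvalue calculation with OpenMC's Python API, a generic PWR-like UO2 pin cell is sketched below. The dimensions, enrichment, densities, and run settings are placeholder values, not the VERA benchmark specifications, and running it requires OpenMC with a cross-section library configured.

```python
import openmc

# Generic PWR-like UO2 pin-cell eigenvalue calculation with the OpenMC Python
# API. All dimensions, enrichments, and densities are placeholders for
# illustration -- they are NOT the VERA benchmark specifications.
fuel = openmc.Material(name="UO2 fuel")
fuel.add_element("U", 1.0, enrichment=3.1)   # wt% U-235, placeholder
fuel.add_element("O", 2.0)
fuel.set_density("g/cm3", 10.3)

clad = openmc.Material(name="Zr clad")
clad.add_element("Zr", 1.0)
clad.set_density("g/cm3", 6.55)

water = openmc.Material(name="moderator")
water.add_element("H", 2.0)
water.add_element("O", 1.0)
water.set_density("g/cm3", 0.74)
water.add_s_alpha_beta("c_H_in_H2O")         # thermal scattering for H in H2O

# Concentric fuel/clad cylinders inside a reflective square pitch cell.
pitch = 1.26
fuel_or = openmc.ZCylinder(r=0.41)
clad_or = openmc.ZCylinder(r=0.48)
left = openmc.XPlane(x0=-pitch / 2, boundary_type="reflective")
right = openmc.XPlane(x0=pitch / 2, boundary_type="reflective")
bottom = openmc.YPlane(y0=-pitch / 2, boundary_type="reflective")
top = openmc.YPlane(y0=pitch / 2, boundary_type="reflective")
box = +left & -right & +bottom & -top

fuel_cell = openmc.Cell(fill=fuel, region=-fuel_or)
clad_cell = openmc.Cell(fill=clad, region=+fuel_or & -clad_or)
mod_cell = openmc.Cell(fill=water, region=+clad_or & box)

root = openmc.Universe(cells=[fuel_cell, clad_cell, mod_cell])
openmc.Geometry(root).export_to_xml()
openmc.Materials([fuel, clad, water]).export_to_xml()

settings = openmc.Settings()
settings.batches = 150        # total batches (active + inactive)
settings.inactive = 30
settings.particles = 10_000   # histories per batch
settings.export_to_xml()

openmc.run()                  # k-effective is reported in the statepoint/output
```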

The impact of fuel depletion scheme within SCALE code on the criticality of spent fuel pool with RBMK fuel assemblies

  • Andrius Slavickas;Tadas Kaliatka;Raimondas Pabarcius;Sigitas Rimkevicius
    • Nuclear Engineering and Technology
    • /
    • v.54 no.12
    • /
    • pp.4731-4742
    • /
    • 2022
  • RBMK fuel assemblies differ from other LWR fuel assemblies in the specific arrangement of the fuel rods, the low enrichment, and the burnable absorber used, erbium. Adapting modeling tools developed for other LWR types to RBMK problems is therefore a challenge. A set of 10 different depletion simulation schemes was tested to estimate the impact on reactivity and spent fuel composition of the SCALE code options available for neutron transport modelling and of the use of different nuclear data libraries. The simulations were performed using cross-section libraries based on both the VII.0 and VII.1 versions of the ENDF/B nuclear data, assuming continuous-energy and multigroup simulation modes, standard and user-defined Dancoff factor values, and employing deterministic and Monte Carlo methods. The criticality analysis with burn-up credit was performed for the SFP loaded with RBMK-1500 FA. Spent fuel compositions were taken from each of the 10 depletion simulations. The criticality of the SFP is found to be overestimated by up to 0.08% in the simulation cases using user-defined Dancoff factors, compared with the results obtained using the continuous-energy library (ENDF/B-VII.1). It was shown that this discrepancy is driven by the higher calculated U-235 and Pu-239 concentrations.

GARCH-X(1, 1) model allowing a non-linear function of the variance to follow an AR(1) process

  • Didit B Nugroho;Bernadus AA Wicaksono;Lennox Larwuy
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.2
    • /
    • pp.163-178
    • /
    • 2023
  • The GARCH-X(1, 1) model specifies that the conditional variance follows an AR(1) process and includes a lagged exogenous variable. This study proposes a new class of models by allowing a more general (non-linear) function of the variance to follow an AR(1) process. The functions applied to the variance equation include the exponential, Tukey's ladder, and Yeo-Johnson transformations. In the framework of normal and Student-t distributions for the return errors, the empirical analysis focuses on two stock indices in developed countries (FTSE100 and SP500) over the daily period from January 2000 to December 2020. This study uses 10-minute realized volatility as the exogenous component. The parameters of the considered models are estimated using the adaptive random walk Metropolis method within a Markov chain Monte Carlo (MCMC) algorithm implemented in Matlab. The 95% highest posterior density intervals show that the three transformations are significant for the GARCH-X(1, 1) model. In general, based on the Akaike information criterion, the GARCH-X(1, 1) model with Student-t return errors and variance transformed by Tukey's ladder function provides the best fit to the data. In forecasting value-at-risk at the 95% confidence level, Christoffersen's independence test suggests that the non-linear models are the most suitable for modeling the return data, especially the model with Tukey's ladder transformation.
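
As a rough illustration of the model class, the sketch below filters conditional variances with a GARCH-X(1, 1)-type recursion in which a Tukey's-ladder transform of the variance follows the AR(1)-plus-exogenous equation. The parameter values and the return/realized-volatility series are synthetic placeholders, and the exact transformed specification may differ from the authors' formulation.

```python
import numpy as np

# Minimal sketch of a GARCH-X(1, 1) variance recursion with an optional
# non-linear (Tukey's-ladder) transform applied to the conditional variance.
# Parameters and data are synthetic placeholders, not estimates from the paper.
rng = np.random.default_rng(2)

def tukey_ladder(h, lam):
    """Tukey's ladder-of-powers transform of the variance."""
    return np.log(h) if lam == 0 else (h**lam - 1.0) / lam

def tukey_ladder_inv(g, lam):
    return np.exp(g) if lam == 0 else (lam * g + 1.0) ** (1.0 / lam)

def garch_x_filter(returns, rv, omega, alpha, beta, gamma, lam=1.0):
    """Filter h_t where the transformed variance follows an AR(1)+X recursion.

    lam = 1 is essentially the linear GARCH-X(1, 1) (up to a constant
    absorbed into omega); lam = 0.5, 0, ... give non-linear variants.
    """
    h = np.empty_like(returns)
    h[0] = np.var(returns)
    for t in range(1, len(returns)):
        g_prev = tukey_ladder(h[t - 1], lam)
        g_t = omega + alpha * returns[t - 1] ** 2 + beta * g_prev + gamma * rv[t - 1]
        h[t] = tukey_ladder_inv(g_t, lam)
    return h

# Synthetic daily returns and a 10-minute realized-volatility proxy.
T = 1000
rv = np.abs(rng.normal(0.0, 1.0, T))
returns = rng.standard_t(df=8, size=T) * 0.01

h = garch_x_filter(returns, rv, omega=0.02, alpha=0.05, beta=0.90, gamma=0.03, lam=0.5)
print("mean conditional variance:", h.mean())
```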

Multi-objective path planning for mobile robot in nuclear accident environment based on improved ant colony optimization with modified A*

  • De Zhang;Run Luo;Ye-bo Yin;Shu-liang Zou
    • Nuclear Engineering and Technology
    • /
    • v.55 no.5
    • /
    • pp.1838-1854
    • /
    • 2023
  • This paper presents a hybrid algorithm to solve the multi-objective path planning (MOPP) problem for mobile robots in a static nuclear accident environment. The proposed algorithm mimics a real nuclear accident site by modeling the environment with a two-layer cost grid map based on geometric modeling and Monte Carlo calculations. The proposed algorithm consists of two steps. The first step optimizes a path using a hybrid of an improved ant colony optimization algorithm and a modified A* (IACO-A*) that minimizes path length, cumulative radiation dose, and energy consumption. The second step is a high-radiation-dose-rate avoidance strategy integrated with the IACO-A* algorithm, which activates when the mobile robot senses a lethal radiation dose rate, steering it away from radioactive sources with high dose levels. Simulations have been performed in environments of different complexity to evaluate the efficiency of the proposed algorithm, and the results show that IACO-A* has better path quality than ACO and IACO. In addition, a study comparing the proposed IACO-A* algorithm with recent path planning (PP) methods in three scenarios has been performed. The simulation results show that the proposed IACO-A* algorithm is clearly superior in terms of stability and in minimizing the total MOPP cost.
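
The two-layer cost-map idea can be illustrated with a plain weighted-sum A* search, shown below. This is not the paper's IACO-A* hybrid; the synthetic dose field, the objective weights, and the lethal-dose threshold are assumptions made for the example.

```python
import heapq
from itertools import count
import numpy as np

# Minimal sketch of grid path planning on a two-layer cost map (geometric
# distance plus a radiation-dose layer) using plain weighted-sum A*.
N = 40
yy, xx = np.mgrid[0:N, 0:N]
dose = np.zeros((N, N))
for sy, sx, strength in [(12, 10, 50.0), (30, 25, 80.0)]:       # two synthetic sources
    dose += strength / (1.0 + (xx - sx) ** 2 + (yy - sy) ** 2)  # simple falloff model

W_LENGTH, W_DOSE = 1.0, 0.5   # weighted-sum scalarization of the two objectives
LETHAL = 20.0                 # cells at or above this dose rate are forbidden

def astar(start, goal):
    """A* with per-step cost = W_LENGTH * step_length + W_DOSE * dose(cell)."""
    def h(p):  # admissible heuristic: remaining Euclidean length only
        return W_LENGTH * np.hypot(goal[0] - p[0], goal[1] - p[1])

    tie = count()
    open_heap = [(h(start), next(tie), 0.0, start, None)]
    came_from, g_best = {}, {start: 0.0}
    while open_heap:
        _, _, g, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:                              # reconstruct the path
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nxt = (cur[0] + dx, cur[1] + dy)
                if not (0 <= nxt[0] < N and 0 <= nxt[1] < N) or dose[nxt] >= LETHAL:
                    continue
                ng = g + W_LENGTH * np.hypot(dx, dy) + W_DOSE * dose[nxt]
                if ng < g_best.get(nxt, np.inf):
                    g_best[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None

path = astar((0, 0), (N - 1, N - 1))
print("cells on path:", len(path), " cumulative dose:", sum(dose[p] for p in path))
```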

Identification of Contaminant Injection in Water Distribution Network

  • Marlim, Malvin Samuel;Kang, Doosun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.114-114
    • /
    • 2020
  • Water contamination in a water distribution network (WDN) is harmful since it directly induces consumer health problems and suspends water service over a wide area. Actions need to be taken rapidly to counter a contamination event. Contaminant source identification (CSI) is an important initial step in mitigating such a harmful event. Here, a CSI approach focused on determining the contaminant intrusion possible location and time (PLoT) is introduced. One method of discovering the PLoT is an inverse calculation that connects all the paths leading to a sensor's detection report. A filtering procedure is then applied to narrow down the PLoT using the results from individual sensors. First, we spatially reduce the suspect intrusion points by locating the highly suspicious nodes that have similar intrusion times. Then, we narrow the possible intrusion time by matching the suspicious intrusion times to the reported information. Finally, a likelihood score is estimated for each suspect. Another important aspect that needs to be considered in CSI is the inherent uncertainties, such as variations in user demand and inaccuracies in sensor data. These uncertainties can lead to overlooking the real intrusion point and time. To reflect the uncertainties in the CSI process, Monte Carlo simulation (MCS) is conducted to explore the range of the PLoT. By analyzing the scores accumulated over the random sets, the spread of the contaminant intrusion PLoT can then be identified in the network.
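
A toy version of the Monte Carlo step is sketched below: candidate (node, injection-time) pairs are scored over random realizations of uncertain travel times, and the accumulated scores approximate the spread of the PLoT. The miniature "network" of nominal travel times is entirely hypothetical; a real study would derive these from a hydraulic model of the WDN.

```python
import numpy as np

# Toy sketch of the Monte Carlo step in contaminant source identification:
# candidate (node, injection-time) pairs are scored across random realizations
# of uncertain demand-driven travel times; accumulated scores indicate the
# spread of possible locations and times (PLoT). The tiny "network" below is
# entirely hypothetical.
rng = np.random.default_rng(4)

nominal_travel = {                      # hours from candidate node to sensors S1, S2
    "N1": np.array([2.0, 5.0]),
    "N2": np.array([3.5, 3.0]),
    "N3": np.array([6.0, 1.5]),
}
sensor_detect = np.array([8.0, 9.0])    # detection times reported by S1, S2 (hours)
candidate_times = np.arange(0.0, 8.0, 0.5)

scores = {(n, t): 0.0 for n in nominal_travel for t in candidate_times}
n_mcs, sigma = 2000, 0.4                # demand-driven travel-time uncertainty (hours)

for _ in range(n_mcs):
    for node, tt in nominal_travel.items():
        tt_sample = tt + rng.normal(0.0, sigma, size=tt.shape)   # perturbed travel times
        for t0 in candidate_times:
            arrival = t0 + tt_sample
            mismatch = np.abs(arrival - sensor_detect).sum()
            scores[(node, t0)] += np.exp(-mismatch)              # likelihood-style score

best = max(scores, key=scores.get)
print("most likely (node, injection time):", best)
```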


Contribution of Microbleeds on Microvascular Magnetic Resonance Imaging Signal

  • Chang Hyun Yoo;Junghwan Goh;Geon-Ho Jahng
    • Progress in Medical Physics
    • /
    • v.33 no.4
    • /
    • pp.88-100
    • /
    • 2022
  • Purpose: Cerebral microbleeds have higher magnetic susceptibility than the surrounding tissue and have been associated with a variety of neurological and neurodegenerative disorders that are indicative of an underlying vascular pathology. We investigated relaxivity changes and microvascular indices in the presence of microbleeds in an imaging voxel by evaluating them before and after contrast agent injection. Methods: Monte Carlo simulations were run under a variety of conditions, including different magnetic field strengths (B0), echo times, and contrast agents. ΔR2*, ΔR2, and the microvascular indices were calculated for varying microvascular vessel sizes and microbleed loads. Results: As B0 and the concentration of microbleeds increased, ΔR2* and ΔR2 increased. As the vessel radius increased, ΔR2* increased but ΔR2 decreased slightly. When the vessel radius was increased, the vessel size index (VSI) and mean vessel diameter (mVD) increased, and all other microvascular indices except the mean vessel density (Q) increased when the concentration of microbleeds was increased. Conclusions: Because patients with neurodegenerative diseases often have microbleeds in their brains, and because VSI and mVD increase with increasing microbleed load, microbleeds can alter the microvascular signals in a brain voxel in neurodegenerative disease at 3T magnetic resonance imaging.
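
A heavily simplified static-dephasing Monte Carlo for the gradient-echo relaxation change (ΔR2*) around a single spherical perturber is sketched below. It omits water diffusion (so it says nothing about ΔR2 or the spin-echo indices), and the susceptibility difference, radius, field strength, and voxel size are placeholder values rather than the simulation conditions used in the paper.

```python
import numpy as np

# Simplified static-dephasing Monte Carlo for the extra gradient-echo
# relaxation (Delta R2*) caused by a spherical susceptibility perturber
# (a stand-in for a microbleed) inside a voxel. No water diffusion is
# modelled; all physical values below are placeholder assumptions.
rng = np.random.default_rng(5)

GAMMA = 2.675e8            # proton gyromagnetic ratio, rad/s/T
B0 = 3.0                   # T
dchi = 4.0e-6              # susceptibility difference (SI, volume), assumed
a = 10.0e-6                # perturber radius, m
L = 100.0e-6               # voxel edge length, m
n_spins = 200_000

# Uniformly distributed spins in the voxel, excluding the perturber interior.
pos = rng.uniform(-L / 2, L / 2, size=(n_spins, 3))
r = np.linalg.norm(pos, axis=1)
pos, r = pos[r > a], r[r > a]
cos_theta = pos[:, 2] / r

# Field offset outside a uniformly magnetized sphere (z along B0).
domega = GAMMA * (dchi / 3.0) * B0 * (a / r) ** 3 * (3.0 * cos_theta**2 - 1.0)

times = np.linspace(1e-3, 40e-3, 40)                     # echo times, s
signal = np.array([np.abs(np.mean(np.exp(1j * domega * t))) for t in times])

# Delta R2* estimated from a linear fit of -ln(S) versus echo time
# (approximately monoexponential at longer echo times in this regime).
slope, _ = np.polyfit(times, -np.log(signal), 1)
print(f"Delta R2* ≈ {slope:.2f} 1/s for this configuration")
```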

Number of Scatterings in Random Walks

  • Kwang-Il Seon;Hyung-Joe Kwon;Hee-Gyeong Kim;Hyeon Jeong Youn
    • Journal of The Korean Astronomical Society
    • /
    • v.56 no.2
    • /
    • pp.287-292
    • /
    • 2023
  • This paper investigates the number of scatterings a photon undergoes in random walks before escaping from a medium. The number of scatterings in random walk processes is commonly approximated as τ + τ² in the literature, where τ is the optical thickness measured from the center of the medium. However, this formula is found to be inaccurate. In this study, analytical solutions in sphere and slab geometries are derived for both the optically thin and optically thick limits, assuming isotropic scattering. These solutions are verified using Monte Carlo simulations. In the optically thick limit, the number of scatterings is found to be 0.5τ² and 1.5τ² in a sphere and a slab, respectively. In the optically thin limit, the number of scatterings is ≈ τ in a sphere and ≈ τ(1 - γ - ln τ + τ) in a slab, where γ ≃ 0.57722 is the Euler-Mascheroni constant. Additionally, we present approximate formulas that reasonably reproduce the simulation results at intermediate optical depths. These results are applicable to scattering processes that exhibit forward-backward symmetry, including both isotropic and Thomson scattering.
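
The sphere-geometry result is easy to check with a short Monte Carlo simulation: photons are emitted at the center, take exponentially distributed optical path lengths with isotropic scattering, and the scatterings are counted until escape. The sketch below assumes pure scattering (no absorption) and compares the Monte Carlo mean with τ + τ² and 0.5τ².

```python
import numpy as np

# Monte Carlo estimate of the mean number of scatterings before escape for
# photons emitted at the centre of a uniform sphere with isotropic scattering,
# compared with the tau + tau^2 rule of thumb and the 0.5*tau^2 optically
# thick limit quoted above. Pure scattering (no absorption) is assumed.
rng = np.random.default_rng(6)

def mean_scatterings_sphere(tau, n_photons=10_000):
    """Average number of scatterings before escaping a sphere of optical radius tau."""
    counts = np.zeros(n_photons)
    for i in range(n_photons):
        pos = np.zeros(3)
        n_scat = 0
        while True:
            # Isotropic direction and exponentially distributed optical path length.
            mu = 2.0 * rng.random() - 1.0
            phi = 2.0 * np.pi * rng.random()
            s = np.sqrt(1.0 - mu * mu)
            direction = np.array([s * np.cos(phi), s * np.sin(phi), mu])
            pos = pos + rng.exponential(1.0) * direction
            if np.dot(pos, pos) > tau * tau:       # photon has left the sphere
                break
            n_scat += 1                            # interaction inside: scatter again
        counts[i] = n_scat
    return counts.mean()

for tau in (0.1, 1.0, 5.0, 10.0):
    n_mc = mean_scatterings_sphere(tau)
    print(f"tau={tau:5.1f}  N_MC={n_mc:8.2f}  "
          f"tau+tau^2={tau + tau**2:8.2f}  0.5*tau^2={0.5 * tau**2:8.2f}")
```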

Neutron dosimetry with a pair of TLDs for the Elekta Precise medical linac and the evaluation of optimum moderator thickness for the conversion of fast to thermal neutrons

  • Marziyeh Behmadi;Sara Mohammadi;Mohammad Ehsan Ravari;Aghil Mohammadi;Mahdy Ebrahimi Loushab;Mohammad Taghi Bahreyni Toossi;Mitra Ghergherehchi
    • Nuclear Engineering and Technology
    • /
    • v.56 no.2
    • /
    • pp.753-761
    • /
    • 2024
  • Introduction: In this study, TLD 600 and TLD 700 pairs were used to measure the neutron dose of the Elekta Precise medical linac. To this end, the optimum moderator thickness for the conversion of fast to thermal neutrons was evaluated. Materials and methods: 241Am-Be and 252Cf sources were simulated to calculate the optimum moderator thickness for converting the maximum number of fast neutrons (FN) into thermal neutrons (TN). TLD pairs were used to measure the fast and thermal neutron (F&TN) doses for three different field sizes at four depths of the medical linac. Results: The optimum moderator thickness was determined to be 6 cm. The measurements demonstrated that the TN dose increased with field size and depth. The FN dose that was converted to TN exhibited behavior comparable to that of the TN dose, owing to its nature. Conclusion: This study presents the optimum moderator thickness for converting FN into TN and the measurement of F&TN doses using TLDs.