• Title/Summary/Keyword: Monte-Carlo algorithm


Comparison between Old and New Versions of Electron Monte Carlo (eMC) Dose Calculation

  • Seongmoon Jung;Jaeman Son;Hyeongmin Jin;Seonghee Kang;Jong Min Park;Jung-in Kim;Chang Heon Choi
    • Progress in Medical Physics / v.34 no.2 / pp.15-22 / 2023
  • This study compared the dose calculated using the electron Monte Carlo (eMC) dose calculation algorithm of the old version (eMC V13.7) of the Varian Eclipse treatment-planning system (TPS) with that of its newer version (eMC V16.1). The eMC V16.1 was configured using the same beam data as the eMC V13.7. Beam data measured using the VitalBeam linear accelerator were implemented. A box-shaped water phantom (30×30×30 cm³) was generated in the TPS. The TPS with eMC V13.7 and eMC V16.1 then calculated the dose delivered to the water phantom by electron beams of various energies with a field size of 10×10 cm². The calculations were repeated while changing the dose-smoothing levels and the normalization method. Subsequently, the percentage depth dose and lateral profiles of the dose distributions acquired by eMC V13.7 and eMC V16.1 were analyzed. In addition, the dose-volume histogram (DVH) differences between the two versions were compared for a heterogeneous phantom with bone and lung inserts. The doses calculated using eMC V16.1 were similar to those calculated using eMC V13.7 for the homogeneous phantoms. However, a DVH difference was observed in the heterogeneous phantom, particularly in the bone material. The dose distribution calculated using eMC V16.1 was thus comparable to that of eMC V13.7 for homogeneous phantoms, whereas the version change produced a different DVH for the heterogeneous phantom. Further investigations to assess the DVH differences in patients, and experimental validation of eMC V16.1, particularly for heterogeneous geometry, are required.
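As a minimal illustration of the kind of comparison described above, the sketch below normalizes two central-axis dose curves to percentage depth dose (PDD) and reports their largest point-wise difference. The Gaussian "dose" curves and all function names are hypothetical stand-ins, not eMC output:

```python
import numpy as np

def percentage_depth_dose(dose_along_axis):
    """Normalize a central-axis dose profile to its maximum (PDD in %)."""
    dose = np.asarray(dose_along_axis, dtype=float)
    return 100.0 * dose / dose.max()

def max_pdd_difference(dose_a, dose_b):
    """Largest point-wise PDD difference (in percentage points)."""
    return float(np.max(np.abs(percentage_depth_dose(dose_a)
                               - percentage_depth_dose(dose_b))))

# Two hypothetical central-axis dose curves (arbitrary units),
# standing in for eMC V13.7 and eMC V16.1 calculations.
depth = np.linspace(0.0, 10.0, 101)                 # depth in cm
dose_v137 = np.exp(-((depth - 2.0) / 2.5) ** 2)
dose_v161 = np.exp(-((depth - 2.05) / 2.5) ** 2)    # slightly shifted build-up

print(f"max PDD difference: {max_pdd_difference(dose_v137, dose_v161):.2f}%")
```

The same point-wise comparison extends directly to lateral profiles or DVH curves once both are normalized consistently.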

The extension of a continuous beliefs system and analyzing herd behavior in stock markets (연속신념시스템의 확장모형을 이용한 주식시장의 군집행동 분석)

  • Park, Beum-Jo
    • Economic Analysis / v.17 no.2 / pp.27-55 / 2011
  • Although many theoretical studies have tried to explain the volatility in financial markets using models of herd behavior, there have been few empirical studies on dynamic herding due to the technical difficulty of detecting herd behavior in time-series data. This paper therefore theoretically extends a continuous beliefs system, which belongs to the class of agent-based economic models, by introducing a term representing agents' mutual dependence into each agent's utility function, and derives an SV (stochastic volatility)-type econometric model. From this model the time-varying herding parameters are efficiently estimated by a Markov chain Monte Carlo method. Using monthly data on the KOSPI and the DOW, this paper provides empirical evidence of stronger herding in the Korean stock market than in the U.S. stock market, and of stronger herding after the global financial crisis than before it. A more interesting finding is that time-varying herd behavior has weak autocorrelation and that the global financial crisis may have increased its volatility significantly.

A comparison study between the realistic random modeling and simplified porous medium for gamma-gamma well-logging

  • Fatemeh S. Rasouli
    • Nuclear Engineering and Technology / v.56 no.5 / pp.1747-1753 / 2024
  • The accurate determination of formation density and the physical properties of rocks is one of the most critical logging tasks, and it can be accomplished using gamma-ray transport and detection tools. Although the simulation works published so far have considerably improved knowledge of the parameters that govern the responses of the detectors in these tools, recent studies have found considerable differences between the results of a conventional model, a homogeneous mixture of formation and fluid, and those of an inhomogeneous fractured medium. This has increased concern about the importance of the complexity of the medium model used in simulation works. In the present study, we suggest two different models of fluid flow in porous media and fractured rock for logging purposes. For a typical gamma-gamma logging tool containing a 137Cs source and two NaI detectors, simulated using the MCNPX code, a simplified porous (SP) model was investigated in which the formation is filled with elongated rectangular cubes loaded with either mineral material or oil. In this model, the oil directly reaches the top of the medium and the connection between the pores is not guaranteed. In the other model, the medium is a large 3-D matrix of 1 cm³ randomly filled cubes. The algorithm designed to fill the matrix sites ensures that this realistic random (RR) model provides continuous growth of the oil flow in various disordered directions, and therefore addresses the concerns about modeling rock textures consisting of extremely complex pore structures. For an arbitrary set of oil concentrations and various formation materials, the response of the detectors in the logging tool was taken as the criterion for assessing how the model of the pore distribution in the formation affects simulation studies. The results show that defining an RR model to describe the heterogeneities of a porous medium does not effectively improve the prediction of the responses of logging tools. Taking into account the computational cost of particle transport through complex geometries in the Monte Carlo method, the SP model can be satisfactory for gamma-gamma logging purposes.
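The connected-growth idea behind such an RR model can be sketched as follows: a 3-D grid of unit cubes is filled with "oil" by repeatedly growing from a random cell of the current cluster into a random empty neighbor, so the pore space stays continuous. The growth rule and all names are illustrative assumptions, not the paper's actual MCNPX geometry:

```python
import random

def fill_realistic_random(n, oil_fraction, seed=0):
    """Fill an n*n*n grid of unit cubes with oil (1) or mineral (0) by
    connected random growth, so pores form one continuous cluster rather
    than isolated cells (the 'RR' idea, sketched)."""
    rng = random.Random(seed)
    grid = [[[0] * n for _ in range(n)] for _ in range(n)]
    target = int(oil_fraction * n ** 3)
    start = (rng.randrange(n), rng.randrange(n), rng.randrange(n))
    grid[start[0]][start[1]][start[2]] = 1
    frontier = [start]          # cluster cells that may still have empty neighbors
    filled = 1
    while filled < target and frontier:
        x, y, z = frontier[rng.randrange(len(frontier))]
        nbrs = [(x + dx, y + dy, z + dz)
                for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1))
                if 0 <= x + dx < n and 0 <= y + dy < n and 0 <= z + dz < n
                and grid[x + dx][y + dy][z + dz] == 0]
        if not nbrs:
            frontier.remove((x, y, z))   # fully surrounded; retire it
            continue
        nx, ny, nz = rng.choice(nbrs)    # grow in a random direction
        grid[nx][ny][nz] = 1
        frontier.append((nx, ny, nz))
        filled += 1
    return grid

grid = fill_realistic_random(10, 0.2)
oil_cells = sum(c for plane in grid for row in plane for c in row)
print(f"oil fraction: {oil_cells / 1000:.2f}")
```

By contrast, an SP-style model would simply assign whole columns to oil or mineral, which is cheaper but does not reproduce disordered, connected flow paths.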

Automatic velocity analysis using bootstrapped differential semblance and global search methods (고해상도 속도스펙트럼과 전역탐색법을 이용한 자동속도분석)

  • Choi, Hyung-Wook;Byun, Joong-Moo;Seol, Soon-Jee
    • Geophysics and Geophysical Exploration / v.13 no.1 / pp.31-39 / 2010
  • The goal of automatic velocity analysis is to extract accurate velocities from voluminous seismic data efficiently. In this study, we developed an efficient automatic velocity analysis algorithm using bootstrapped differential semblance (BDS) and Monte Carlo inversion. To obtain more accurate results, the algorithm uses BDS, which provides higher velocity resolution than conventional semblance, as its coherency estimator. In addition, the proposed module includes a conditional initial-velocity determination step that reduces its running time, and a new optional root mean square (RMS) velocity constraint that prevents the picking of false peaks. The developed automatic velocity analysis module was tested on a synthetic dataset and a marine field dataset from the East Sea, Korea. The stacked sections produced using the velocities from our algorithm showed coherent events and improved the quality of the normal-moveout-corrected result. Moreover, since our algorithm first finds the interval velocity ($\nu_{int}$) under interval-velocity constraints and then calculates an RMS velocity function from it, we can estimate geologically reasonable interval velocities. The boundaries of the interval velocities also match well with reflection events in the common midpoint stacked sections.
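Conventional semblance, the coherency estimator that BDS improves upon, can be sketched for a single (t0, v) trial as below; the synthetic spike gather, sampling parameters, and function names are hypothetical:

```python
import numpy as np

def semblance(gather, offsets, dt, t0, v, window=5):
    """Conventional semblance for one (t0, v) trial on a CMP gather.
    gather: (n_samples, n_traces); offsets in m; dt, t0 in s; v in m/s."""
    n_samples, n_traces = gather.shape
    num = den = 0.0
    for w in range(-window, window + 1):        # time window around t0
        t_center = t0 + w * dt
        stack = power = 0.0
        for i, x in enumerate(offsets):
            t = np.sqrt(t_center ** 2 + (x / v) ** 2)   # hyperbolic NMO time
            k = int(round(t / dt))
            if 0 <= k < n_samples:
                a = gather[k, i]
                stack += a
                power += a * a
        num += stack ** 2                       # energy of the stacked trace
        den += power                            # total energy of the traces
    return num / (n_traces * den) if den > 0 else 0.0

# Synthetic CMP gather: one spike event with true velocity 2000 m/s at t0 = 0.5 s.
dt, t0, v_true = 0.004, 0.5, 2000.0
offsets = np.arange(0.0, 1200.0, 200.0)
gather = np.zeros((500, len(offsets)))
for i, x in enumerate(offsets):
    gather[int(round(np.sqrt(t0 ** 2 + (x / v_true) ** 2) / dt)), i] = 1.0

s_true = semblance(gather, offsets, dt, t0, v_true)
s_wrong = semblance(gather, offsets, dt, t0, 1500.0)
print(f"semblance at true velocity: {s_true:.2f}, at wrong velocity: {s_wrong:.2f}")
```

An automatic picker scans a grid of (t0, v) pairs and keeps the velocities that maximize this measure; BDS sharpens the peaks of exactly this kind of spectrum.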

At-site Low Flow Frequency Analysis Using Bayesian MCMC: I. Theoretical Background and Construction of Prior Distribution (Bayesian MCMC를 이용한 저수량 점 빈도분석: I. 이론적 배경과 사전분포의 구축)

  • Kim, Sang-Ug;Lee, Kil-Seong
    • Journal of Korea Water Resources Association / v.41 no.1 / pp.35-47 / 2008
  • Low flow analysis is an important part of water resources engineering. The results of low flow frequency analysis can be used for the design of reservoir storage, water supply planning and design, waste-load allocation, and the maintenance of the quantity and quality of water for irrigation and wildlife conservation. In particular, to identify the uncertainty in frequency analysis, the Bayesian approach is applied and compared with conventional methodologies in at-site low flow frequency analysis. In this first manuscript, the theoretical background of the Bayesian MCMC (Bayesian Markov chain Monte Carlo) method and the Metropolis-Hastings algorithm is presented. Two types of prior distribution, a non-data-based and a data-based prior distribution, are developed and compared within the Bayesian MCMC method. The results suggest that a data-based prior distribution is more effective than a non-data-based one. The acceptance rate of the algorithm is computed to assess its effectiveness. In the second manuscript, the Bayesian MCMC method with a data-based prior distribution and MLE (maximum likelihood estimation) with a quadratic approximation are applied to at-site low flow frequency analysis.
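The Metropolis-Hastings step at the core of a Bayesian MCMC analysis, together with its acceptance rate, can be sketched as follows; the standard-normal log-target and tuning values are illustrative stand-ins for the paper's low-flow posterior:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_iter, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings: returns samples and acceptance rate."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    samples, accepted = [], 0
    for _ in range(n_iter):
        x_prop = x + rng.gauss(0.0, step)            # symmetric proposal
        lp_prop = log_target(x_prop)
        # accept with probability min(1, target(x') / target(x))
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = x_prop, lp_prop
            accepted += 1
        samples.append(x)
    return samples, accepted / n_iter

# Toy 'posterior': standard-normal log-density (up to an additive constant).
samples, acc_rate = metropolis_hastings(lambda t: -0.5 * t * t, 0.0, 20000)
mean = sum(samples) / len(samples)
print(f"posterior mean estimate: {mean:.2f}, acceptance rate: {acc_rate:.2f}")
```

Monitoring the acceptance rate, as the abstract describes, is the usual diagnostic for the proposal step size: rates near 0 or 1 both indicate poor mixing.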

An improved response surface method for reliability analysis of structures

  • Basaga, Hasan Basri;Bayraktar, Alemdar;Kaymaz, Irfan
    • Structural Engineering and Mechanics / v.42 no.2 / pp.175-189 / 2012
  • This paper presents an algorithm for structural reliability analysis based on the response surface method. To this end, a three-stage approach, named the improved response surface method, is proposed. In the algorithm, a quadratic approximating function is first formed and the design point is determined with the First Order Reliability Method. Secondly, a point close to the exact limit state function is searched for using the design point. Lastly, the vector projected method is used to generate the sample points, and the Second Order Reliability Method is performed to obtain the reliability index and probability of failure. Five numerical examples are selected to illustrate the proposed algorithm. The limit state functions of three examples (a cantilever beam, a highly nonlinear limit state function, and the dynamic response of an oscillator) are defined explicitly, and the others (frame and truss structures) are defined implicitly. The ANSYS finite element program is utilized to obtain the structural responses needed in the reliability analysis of the implicit limit state functions. The results (reliability index, probability of failure, and limit state function evaluations) obtained from the improved response surface method are compared with those of Monte Carlo simulation, the First Order Reliability Method, the Second Order Reliability Method, and the classical response surface method. According to the results, the proposed algorithm gives better results for both the reliability index and the number of limit state function evaluations.
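Crude Monte Carlo simulation, one of the reference methods above, estimates the probability of failure by counting samples with g(X) <= 0. The sketch below uses a toy explicit limit state g = R - S with assumed normal variables; all numbers are illustrative, not from the paper's examples:

```python
import random
from statistics import NormalDist

def mc_failure_probability(limit_state, sample_vars, n=100_000, seed=2):
    """Crude Monte Carlo reliability: P_f = (# samples with g(X) <= 0) / n,
    and the corresponding reliability index beta = -Phi^{-1}(P_f)."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(*sample_vars(rng)) <= 0)
    pf = failures / n
    beta = -NormalDist().inv_cdf(pf) if 0 < pf < 1 else float("inf")
    return pf, beta

# Toy explicit limit state g = R - S (resistance minus load),
# with R ~ N(200, 20) and S ~ N(150, 15).
# Exact result: beta = 50 / sqrt(20^2 + 15^2) = 2.0, P_f = Phi(-2) ~ 0.0228.
pf, beta = mc_failure_probability(
    lambda r, s: r - s,
    lambda rng: (rng.gauss(200, 20), rng.gauss(150, 15)),
)
print(f"P_f estimate: {pf:.4f}, beta estimate: {beta:.2f}")
```

The cost driver is the number of limit state evaluations (here 100,000), which is exactly what response surface methods aim to reduce for implicit, FE-based limit states.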

Estimation of the Mixture of Normals of Saving Rate Using Gibbs Algorithm (Gibbs알고리즘을 이용한 저축률의 정규분포혼합 추정)

  • Yoon, Jong-In
    • Journal of Digital Convergence / v.13 no.10 / pp.219-224 / 2015
  • This research estimates a mixture of normal distributions for the household saving rate in Korea. Our sample is the MDSS micro-data for 2014, and a Gibbs algorithm is used to estimate the mixture of normals. The evidence yields several results. First, the Gibbs algorithm works very well in estimating the mixture of normals. Second, the saving rate data have at least two components, one with mean zero and the other with mean 29.4%; households may thus be separated into a high-saving group and a low-saving group. Third, the mixture-of-normals analysis alone cannot explain this separation, and we find that income level and age cannot explain our results either.
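A Gibbs sampler for a two-component normal mixture can be sketched as below; the known common sigma, the flat priors, and the synthetic "saving rate" sample (one group near 0% and one near 29.4%) are simplifying assumptions, not the MDSS data:

```python
import math
import random

def gibbs_two_normal_mixture(data, n_iter=500, sigma=5.0, seed=3):
    """Gibbs sampler for a two-component normal mixture with a known common
    sigma: alternates latent assignments z_i, component means mu_0 and mu_1,
    and the mixing weight (a deliberately minimal sketch)."""
    rng = random.Random(seed)
    mu = [min(data), max(data)]          # crude but label-preserving start
    w = 0.5
    for _ in range(n_iter):
        # 1) sample assignments given current parameters
        z = []
        for x in data:
            p0 = w * math.exp(-0.5 * ((x - mu[0]) / sigma) ** 2)
            p1 = (1 - w) * math.exp(-0.5 * ((x - mu[1]) / sigma) ** 2)
            z.append(0 if rng.random() < p0 / (p0 + p1) else 1)
        # 2) sample means given assignments (posterior under a flat prior)
        for k in (0, 1):
            xs = [x for x, zi in zip(data, z) if zi == k]
            if xs:
                mu[k] = rng.gauss(sum(xs) / len(xs), sigma / math.sqrt(len(xs)))
        # 3) sample the weight from its Beta posterior
        n0 = z.count(0)
        w = rng.betavariate(1 + n0, 1 + len(data) - n0)
    return sorted(mu), w

# Hypothetical 'saving rate' sample: a spike near 0% and a group near 29.4%.
data_rng = random.Random(0)
data = ([data_rng.gauss(0.0, 5.0) for _ in range(150)]
        + [data_rng.gauss(29.4, 5.0) for _ in range(150)])
mu, w = gibbs_two_normal_mixture(data)
print(f"component means: {mu[0]:.1f} and {mu[1]:.1f}")
```

A full treatment would also sample the component variances (e.g. from inverse-gamma posteriors) and run a burn-in period before summarizing the draws.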

INS/GPS Integrated Smoothing Algorithm for Synthetic Aperture Radar Motion Compensation Using an Extended Kalman Filter with a Position Damping Loop

  • Song, Jin Woo;Park, Chan Gook
    • International Journal of Aeronautical and Space Sciences / v.18 no.1 / pp.118-128 / 2017
  • In this study, we propose a real-time inertial navigation system/global positioning system (INS/GPS) integrated smoothing algorithm based on an extended Kalman filter (EKF) and a position damping loop (PDL) for synthetic aperture radar (SAR). Integrated navigation algorithms usually induce discontinuities due to the error-correction updates of the Kalman filter, which are as detrimental to SAR performance as the relative position error. The proposed smoothing algorithm suppresses these discontinuities and also reduces the relative position error in real time. An EKF estimates the navigation errors and sensor biases, and all the errors except for the position error are corrected directly and instantly. A PDL activated during the SAR operation period imposes damping effects on the position error estimates, so that the estimated position error is corrected smoothly and gradually, which contributes to the real-time smoothing and small relative position errors. The residual errors are re-estimated by the EKF to maintain the estimation performance and the stability of the overall loop. The performance improvements were confirmed by Monte Carlo simulations. The simulation results showed that the discontinuities were reduced by 99.8% and the relative position error by 48% compared with a conventional EKF without a smoothing loop, thereby satisfying the basic performance requirements for SAR operation. The proposed algorithm may be applicable to low-cost SAR systems which use a conventional INS/GPS without changing their hardware configurations.
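The damping-loop idea, applying only a fraction of the estimated position error per cycle instead of correcting it all at once, can be illustrated with a scalar sketch; the gain value and error stream are hypothetical, not the paper's PDL design:

```python
def apply_corrections(position_errors, gain):
    """Feed a stream of estimated position errors back into the navigation
    solution. gain=1.0 corrects instantly (a discontinuous jump), while a
    small gain spreads the correction over many cycles (damping-loop idea)."""
    jumps = []
    residual = 0.0
    for err in position_errors:
        residual += err            # newly estimated error accumulates
        step = gain * residual     # apply only a fraction this cycle
        residual -= step
        jumps.append(step)         # size of the correction applied this cycle
    return jumps

# One large 10 m error estimate arriving at the first cycle, then none.
errors = [10.0] + [0.0] * 19
jumps_instant = apply_corrections(errors, gain=1.0)
jumps_damped = apply_corrections(errors, gain=0.2)
print(f"largest instant jump: {max(jumps_instant):.1f} m")
print(f"largest damped jump:  {max(jumps_damped):.1f} m")
```

Both schemes eventually remove (almost) the whole 10 m error, but the damped loop caps any single-cycle jump at 2 m here, which is the property that matters for SAR image formation.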

Damage assessment of shear buildings by synchronous estimation of stiffness and damping using measured acceleration

  • Shin, Soobong;Oh, Seong Ho
    • Smart Structures and Systems / v.3 no.3 / pp.245-261 / 2007
  • A nonlinear time-domain system identification (SI) algorithm is proposed to assess damage in a shear building by synchronously estimating time-varying stiffness and damping parameters using measured acceleration data. The mass properties are assumed to be a priori known information, and viscous damping is adopted. To capture possible nonlinear dynamic behavior under severe vibration, an incremental governing equation of vibrational motion is utilized. Stiffness and damping parameters are estimated at each time step by minimizing the response error between the measured and computed acceleration increments at the measured degrees of freedom. To solve the nonlinear constrained optimization problem for the optimal structural parameters, sensitivities of the acceleration increment were formulated with respect to the stiffness and damping parameters, respectively. The incremental state vectors of vibrational motion were computed numerically by the Newmark-${\beta}$ method. No model is pre-defined in the proposed algorithm for recovering the nonlinear response. A time-window scheme together with Monte Carlo iterations was utilized to estimate parameters from noise-polluted, sparse measured accelerations, and a moving-average scheme was applied to estimate the time-varying trend of the structural parameters in all the examples. To examine the proposed SI algorithm, simulation studies were carried out intensively on sample shear buildings under earthquake excitations. In addition, the algorithm was applied to assess damage using laboratory test data obtained from free vibration of a three-story shear building model.
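The Newmark-beta integration used to compute the state vectors can be sketched for a linear single-degree-of-freedom system as below (average-acceleration variant, beta = 1/4 and gamma = 1/2); the unit mass and stiffness are illustrative:

```python
def newmark_beta(m, c, k, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark-beta time integration for a linear SDOF system
    m*a + c*v + k*u = f(t); default is the average-acceleration scheme."""
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m              # initial acceleration
    us = [u]
    # effective stiffness (constant for a linear system and fixed dt)
    keff = k + gamma / (beta * dt) * c + 1.0 / (beta * dt ** 2) * m
    for i in range(1, n_steps + 1):
        t = i * dt
        # effective load assembled from the current state
        p = (f(t)
             + m * (u / (beta * dt ** 2) + v / (beta * dt)
                    + (1.0 / (2 * beta) - 1.0) * a)
             + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                    + dt * (gamma / (2 * beta) - 1.0) * a))
        u_new = p / keff
        v_new = (gamma / (beta * dt) * (u_new - u)
                 + (1.0 - gamma / beta) * v
                 + dt * (1.0 - gamma / (2 * beta)) * a)
        a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                 - (1.0 / (2 * beta) - 1.0) * a)
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return us

# Undamped free vibration with m = k = 1: exact solution u(t) = cos(t).
us = newmark_beta(m=1.0, c=0.0, k=1.0, f=lambda t: 0.0,
                  u0=1.0, v0=0.0, dt=0.01, n_steps=628)
```

In the paper's incremental setting, the same recursion is applied step by step with the current stiffness and damping estimates, and the computed acceleration increments are compared against the measured ones.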

Damage detection using finite element model updating with an improved optimization algorithm

  • Xu, Yalan;Qian, Yu;Song, Gangbing;Guo, Kongming
    • Steel and Composite Structures / v.19 no.1 / pp.191-208 / 2015
  • The sensitivity-based finite element model updating method has received increasing attention in the damage detection of structures based on measured modal parameters. Finding an optimization technique with high efficiency and fast convergence is one of the key issues for model-updating-based damage detection. A new, simple, and computationally efficient optimization algorithm is proposed and applied to damage detection using finite element model updating. The proposed method combines the Gauss-Newton method with a region truncation of each iterative step, in which constraints are introduced instead of penalty functions and the search steps are restricted to a controlled region. The developed algorithm is illustrated on a numerically simulated 25-bar truss structure, and the results have been compared with and verified against those obtained from the trust region method. To investigate the reliability of the proposed method in the damage detection of structures, the influence of the uncertainties in the measured modal parameters on the statistical characteristics of the detection result is investigated by Monte-Carlo simulation, and the probability of damage detection is estimated using the probabilistic method.
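The combination of a Gauss-Newton step with a simple step-length truncation can be sketched as below; the exponential toy model and the fixed step bound are illustrative stand-ins for the paper's region-truncation scheme and modal-parameter residuals:

```python
import numpy as np

def gauss_newton_truncated(residual, jacobian, x0, max_step=0.5,
                           n_iter=50, tol=1e-10):
    """Gauss-Newton least squares with a truncated (length-limited) step:
    each update solves J^T J dx = -J^T r, then shrinks dx to max_step
    if it is longer (a simple stand-in for region truncation)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        norm = np.linalg.norm(dx)
        if norm > max_step:
            dx *= max_step / norm       # restrict the search step
        x = x + dx
        if norm < tol:
            break
    return x

# Toy 'model updating': fit y = exp(-k * t) for a stiffness-like parameter k.
t = np.linspace(0.0, 2.0, 20)
k_true = 1.3
y_obs = np.exp(-k_true * t)
res = lambda x: np.exp(-x[0] * t) - y_obs
jac = lambda x: (-t * np.exp(-x[0] * t)).reshape(-1, 1)
k_est = gauss_newton_truncated(res, jac, x0=[0.2])
print(f"estimated k: {k_est[0]:.3f}")
```

Capping the step length keeps the iteration inside a region where the linearization is trustworthy, which is the same motivation as the trust region method the paper compares against, realized with a much simpler rule.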