• Title/Summary/Keyword: Monte-Carlo algorithm

Search Results: 506

DOProC-based reliability analysis of structures

  • Janas, Petr;Krejsa, Martin;Sejnoha, Jiri;Krejsa, Vlastimil
    • Structural Engineering and Mechanics
    • /
    • v.64 no.4
    • /
    • pp.413-426
    • /
    • 2017
  • Probabilistic methods are used in engineering where a computational model contains random variables. The proposed method, Direct Optimized Probabilistic Calculation (DOProC), is highly efficient in terms of computation time and solution accuracy, and is usually faster than other standard probabilistic methods. The novelty of DOProC lies in an optimized numerical integration that easily handles both correlated and statistically independent random variables and does not require any simulation or approximation technique. DOProC is demonstrated on a collection of deliberately simple examples (i) to illustrate the efficiency of the individual optimization levels and (ii) to verify it against other highly regarded probabilistic methods (e.g., Monte Carlo). The efficiency and other benefits of the proposed method are established in a comparative case study carried out using both the DOProC and MC techniques. The algorithm has been implemented in dedicated software applications and has been used effectively several times in solving probabilistic tasks and in the probabilistic reliability assessment of structures. The article summarizes the principles of the method and demonstrates its basic possibilities on simple examples. The paper presents previously unpublished details of probabilistic computations based on this method, including a reliability assessment that provides the user with the probability of failure affected by statistically dependent input random variables. The study also discusses the potential of the optimization procedures under development, including an analysis of their effectiveness on the example of the reliability assessment of a slender column.
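As a point of reference for the Monte Carlo comparison mentioned in the abstract, a minimal sketch of crude MC reliability analysis for a linear limit state with normal variables might look as follows; all distribution parameters are hypothetical, and the analytic result is included only to check the estimate.

```python
import math
import random

def mc_failure_probability(n_samples=200_000, seed=1):
    """Crude Monte Carlo estimate of the failure probability P(R - S < 0)
    for normally distributed resistance R and load effect S. This is the
    standard MC reference method such papers compare against; all
    numerical values here are hypothetical."""
    rng = random.Random(seed)
    mu_r, sig_r = 10.0, 1.0   # resistance mean / std (assumed)
    mu_s, sig_s = 6.0, 1.5    # load-effect mean / std (assumed)
    failures = sum(
        1 for _ in range(n_samples)
        if rng.gauss(mu_r, sig_r) - rng.gauss(mu_s, sig_s) < 0
    )
    return failures / n_samples

# Exact result for comparison: Pf = Phi(-beta), beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2)
beta = (10.0 - 6.0) / math.sqrt(1.0 ** 2 + 1.5 ** 2)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))
```

The MC estimate converges to the exact normal-theory value only as the sample count grows, which is precisely the cost that DOProC's optimized numerical integration is designed to avoid.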

Effect of Regional Navigation Signals upon an Interference Cancellation Capable GNSS Receiver Performance (지역항법 신호에 의한 위성항법수신기 간섭상쇄 성능영향)

  • Lee, Jang-Yong;Jang, Jae-Gyu;Ahn, Woo-Guen;Seo, Seung-Woo;Lee, Sang-Jeong
    • Journal of Advanced Navigation Technology
    • /
    • v.21 no.3
    • /
    • pp.258-263
    • /
    • 2017
  • This paper analyzed the GNSS signal acquisition performance of a regional navigation receiver when an interference cancellation capability is applied. Interference between the regional navigation and GNSS signals can be mitigated by interference cancellation techniques such as the successive interference cancellation (SIC) algorithm. However, signal acquisition performance is degraded when the jamming-to-signal ratio (J/S) is large, due to the cross-correlation properties of residual signals. In this paper we analyzed the signal acquisition performance degradation due to the interference between the Kasami and GNSS Gold code signals. Monte Carlo simulation is used for the analysis, and the results are compared with those of a GNSS-Gold-code-only condition.
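A toy illustration of the SIC idea described above, with random ±1 chips standing in for the Kasami and Gold codes and all amplitudes and lengths hypothetical: the strong regional signal is estimated by correlation and subtracted, and the weak GNSS code is then acquired from the residual, whose leftover cross-correlation terms are exactly what degrades acquisition at high J/S.

```python
import random

def pn_sequence(length, seed):
    """Random +/-1 chips as a stand-in for Kasami / Gold codes (hypothetical)."""
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(length)]

def correlate(rx, code):
    return sum(r * c for r, c in zip(rx, code)) / len(code)

def sic_acquisition(j_over_s=100.0, n_chips=1023, seed=0):
    """Toy successive interference cancellation: estimate and subtract the
    strong regional signal, then correlate for the weak GNSS signal."""
    rng = random.Random(seed)
    strong = pn_sequence(n_chips, 1)   # regional (Kasami-like) signal
    weak = pn_sequence(n_chips, 2)     # GNSS (Gold-like) signal
    a = j_over_s ** 0.5                # interferer amplitude (J/S = 20 dB)
    rx = [a * s + w + rng.gauss(0.0, 0.5) for s, w in zip(strong, weak)]
    # SIC step: estimate interferer amplitude by correlation, subtract replica
    a_hat = correlate(rx, strong)
    residual = [r - a_hat * s for r, s in zip(rx, strong)]
    return correlate(residual, weak)   # acquisition metric for the GNSS code
```

After cancellation the acquisition metric sits near 1, but the residual term scales with the code cross-correlation, so imperfect cancellation at large J/S leaves a noise floor, as the paper analyzes.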

Adaptive User Selection in Downlink Multi-User MIMO Networks (다중 사용자 및 다중 안테나 하향링크 네트워크에서 적응적 사용자 선택 기법)

  • Ban, Tae-Won;Jung, Bang Chul
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.7
    • /
    • pp.1597-1601
    • /
    • 2013
  • Multiple-antenna techniques are attracting attention as a core technology for next-generation mobile communication systems to accommodate explosively increasing mobile data traffic. In particular, recent research focuses on multi-user multiple input multiple output (MU-MIMO) systems in which base stations are equipped with several tens of transmit antennas and transmit data to multiple terminals (users) simultaneously. To enhance the performance of MU-MIMO systems, in this paper we propose an adaptive user selection algorithm that adaptively selects a user set according to varying channel states. According to Monte-Carlo-based computer simulations, the performance of the proposed scheme is significantly improved compared to the conventional scheme without user selection and approaches that of the exhaustive-search-based optimal scheme. Moreover, the proposed scheme reduces the computational complexity to $K/(2^K-1)$ of that of the optimal scheme, where K denotes the number of total users.
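The quoted complexity ratio can be checked directly: the exhaustive optimal scheme must evaluate every non-empty subset of the K users ($2^K-1$ of them), whereas a scheme that grows a single ordered user set, one user at a time (my reading of the adaptive selection, not a detail stated in the abstract), only evaluates K candidate sets. A small sketch:

```python
from itertools import combinations

def exhaustive_candidates(K):
    """Count of all non-empty user subsets the optimal scheme must evaluate
    (equals 2**K - 1, enumerated explicitly here)."""
    return sum(1 for r in range(1, K + 1) for _ in combinations(range(K), r))

def adaptive_candidates(K):
    """A greedy scheme that grows one ordered user set evaluates only the
    K nested candidate sets of sizes 1..K (assumed interpretation)."""
    return K

K = 10
ratio = adaptive_candidates(K) / exhaustive_candidates(K)  # K / (2**K - 1)
```

For K = 10 this is 10/1023, i.e., roughly a hundredfold reduction, matching the $K/(2^K-1)$ factor claimed in the abstract.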

Bayesian Analysis of Dose-Effect Relationship of Cadmium for Benchmark Dose Evaluation (카드뮴 반응용량 곡선에서의 기준용량 평가를 위한 베이지안 분석연구)

  • Lee, Minjea;Choi, Taeryon;Kim, Jeongseon;Woo, Hae Dong
    • The Korean Journal of Applied Statistics
    • /
    • v.26 no.3
    • /
    • pp.453-470
    • /
    • 2013
  • In this paper, we consider a Bayesian analysis of the dose-effect relationship of cadmium to evaluate a benchmark dose (BMD). For this purpose, two dose-response curves commonly used in toxicity studies are fitted, based on Bayesian methods, to data collected from the scientific literature on cadmium toxicity. Specifically, Bayesian meta-analysis and hierarchical modeling are used to build an overall dose-effect relationship with a piecewise linear model and a Hill model, in which inter-study heterogeneity and inter-individual variability in dose and effect, such as gender, age and ethnicity, are accounted for. Estimation of the unknown parameters is carried out with a Markov chain Monte Carlo algorithm implemented in the user-friendly software WinBUGS. Benchmark dose estimates are evaluated for various cut-offs and compared across tested subpopulations by gender, age and ethnicity based on these two Bayesian hierarchical models.
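A minimal sketch of the MCMC machinery involved, fitting a simplified Hill curve (slope fixed at 1) to synthetic data with a hand-rolled random-walk Metropolis sampler; the true parameter values, noise level, and priors are all assumptions for illustration, and WinBUGS would handle the paper's full hierarchical model automatically.

```python
import math
import random

def hill(dose, emax, ec50):
    """Hill dose-effect curve with slope fixed at 1 (simplified)."""
    return emax * dose / (ec50 + dose)

def metropolis_hill(data, n_iter=5000, seed=3):
    """Random-walk Metropolis for (emax, ec50) under a Gaussian likelihood
    with flat positivity priors (assumed, for illustration only)."""
    rng = random.Random(seed)

    def log_post(emax, ec50):
        if emax <= 0 or ec50 <= 0:
            return -math.inf
        sse = sum((e - hill(d, emax, ec50)) ** 2 for d, e in data)
        return -sse / (2 * 0.05 ** 2)  # likelihood with sigma = 0.05 (assumed)

    theta = (1.0, 1.0)                 # deliberately poor starting point
    lp = log_post(*theta)
    samples = []
    for _ in range(n_iter):
        prop = (theta[0] + rng.gauss(0, 0.05), theta[1] + rng.gauss(0, 0.05))
        lp_prop = log_post(*prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

# synthetic data generated from emax = 2, ec50 = 0.5 (hypothetical values)
data = [(d, hill(d, 2.0, 0.5)) for d in (0.1, 0.25, 0.5, 1.0, 2.0, 4.0)]
samples = metropolis_hill(data)
```

Discarding the first half as burn-in, the posterior means recover the generating parameters; a BMD would then be read off the fitted curve at the chosen benchmark response cut-off.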

A study on slim-hole neutron logging based on numerical simulation (소구경 시추공에서의 중성자검층 수치모델링 연구)

  • Ku, Bonjin;Nam, Myung Jin
    • Geophysics and Geophysical Exploration
    • /
    • v.15 no.4
    • /
    • pp.219-226
    • /
    • 2012
  • This study provides an analysis of neutron logging results for various borehole environments through numerical simulation based on the Monte Carlo N-Particle (MCNP) code developed and maintained by Los Alamos National Laboratory. MCNP is suitable for the simulation of neutron logging since the algorithm can simulate the transport of nuclear particles in three-dimensional geometry. Rather than simulating a specific tool of a particular service company among the many commercial neutron tools, we have constructed a generic thermal neutron tool characterizing commercial tools. This study makes calibration charts of the neutron logging tool for materials (e.g., limestone, sandstone and dolomite) with various porosities. Further, we provide correction charts for the generic neutron logging tool to analyze its responses under various borehole conditions, considering brine-filled borehole fluid, void water, and the presence of borehole fluid.
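The principle behind a porosity calibration chart can be caricatured in a few lines: hydrogen in pore water shortens the neutron mean free path and raises absorption, so fewer neutrons reach the detector as porosity increases. The following 1-D random walk is only a qualitative toy with made-up constants, not MCNP physics.

```python
import random

def detector_count(porosity, n_neutrons=20000, seed=7):
    """Toy 1-D analogue of a neutron porosity tool: higher porosity means
    more hydrogen, hence a shorter mean free path and more absorption, so
    fewer neutrons cross the source-detector spacing. All constants are
    invented for illustration."""
    rng = random.Random(seed)
    spacing = 3.0                        # source-detector spacing (arbitrary units)
    mfp = 2.0 * (1.0 - 0.8 * porosity)   # mean free path shrinks with porosity
    hits = 0
    for _ in range(n_neutrons):
        x = 0.0
        for _ in range(20):              # at most 20 collisions per history
            x += rng.expovariate(1.0 / mfp) * rng.choice((-1, 1))
            if x >= spacing:
                hits += 1
                break
            if rng.random() < 0.1 + 0.3 * porosity:  # absorption per collision
                break
    return hits
```

Tabulating the count for each porosity value of a given lithology yields exactly the kind of count-rate-versus-porosity calibration chart the study constructs with MCNP.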

$\pi$/4 shift QPSK with Trellis-Code and Lth Phase Difference Metrics (Trellis 부호와 L번째 위상차 메트릭(metrics)을 갖는$\pi$/4 shift QPSK)

  • 김종일;강창언
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.10
    • /
    • pp.1147-1156
    • /
    • 1992
  • In this paper, in order to apply the $\pi/4$ shift QPSK to TCM, we propose the $\pi/8$ shift 8PSK modulation technique and the trellis-coded $\pi/8$ shift 8PSK, performing signal set expansion and partition by phase difference. In addition, a Viterbi decoder with branch metrics based on the squared Euclidean distance of the first as well as the Lth phase difference is introduced in order to improve the bit error rate (BER) performance in differential detection of the trellis-coded $\pi/8$ shift 8PSK. The proposed Viterbi decoder is conceptually the same as sliding multiple detection, through its use of branch metrics with first- and Lth-order phase differences. We investigate the performance of the uncoded $\pi/4$ shift QPSK and the trellis-coded $\pi/8$ shift 8PSK, with or without the Lth phase difference metric, in additive white Gaussian noise (AWGN) using Monte Carlo simulation. The study shows that the $\pi/4$ shift QPSK with trellis coding, i.e., the trellis-coded $\pi/8$ shift 8PSK, is an attractive scheme for power- and band-limited systems, and in particular the Viterbi decoder with first and Lth phase difference metrics improves the BER performance. The newly proposed algorithm can also be used in the TC $\pi/8$ shift 8PSK as well as in TCMDPSK.
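The uncoded baseline of such a Monte Carlo study can be sketched compactly: differentially detected $\pi/4$ shift QPSK in AWGN, detected from the first-order phase difference only (the paper's Lth-order metric and trellis code extend this baseline). Symbol error rate is counted here rather than BER, and the SNR points are arbitrary.

```python
import cmath
import math
import random

def dqpsk_ser(snr_db, n_sym=20000, seed=5):
    """Monte Carlo symbol error rate of differentially detected pi/4-shift
    QPSK in AWGN, using only the first-order phase-difference metric."""
    rng = random.Random(seed)
    sigma = math.sqrt(0.5 / 10 ** (snr_db / 10))      # noise std per dimension
    diffs = (math.pi / 4, 3 * math.pi / 4, -3 * math.pi / 4, -math.pi / 4)
    tx_phase = 0.0
    prev_rx = 1.0 + 0.0j                              # ideal reference symbol
    errors = 0
    for _ in range(n_sym):
        sym = rng.randrange(4)
        tx_phase += diffs[sym]                        # information is in the phase step
        noise = complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
        rx = cmath.exp(1j * tx_phase) + noise
        est = cmath.phase(rx * prev_rx.conjugate())   # first-order phase difference
        det = min(range(4),
                  key=lambda i: abs(math.remainder(est - diffs[i], 2 * math.pi)))
        errors += det != sym
        prev_rx = rx
    return errors / n_sym
```

Running this at several SNR points reproduces the familiar waterfall curve; adding an Lth-order phase-difference metric, as the paper proposes, narrows the gap to coherent detection.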


Quantitative Analysis of Bayesian SPECT Reconstruction : Effects of Using Higher-Order Gibbs Priors

  • S. J. Lee
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.2
    • /
    • pp.133-142
    • /
    • 1998
  • In Bayesian SPECT reconstruction, the incorporation of elaborate forms of priors can lead to improved quantitative performance in various statistical terms, such as bias and variance. In particular, the use of higher-order smoothing priors, such as the thin-plate prior, is known to exhibit improved bias behavior compared to conventional smoothing priors such as the membrane prior. However, the bias advantage of the higher-order priors is effective only when the hyperparameters involved in the reconstruction algorithm are properly chosen. In this work, we further investigate the quantitative performance of the two representative smoothing priors, the thin plate and the membrane, by observing the behavior of the associated hyperparameters of the prior distributions. In our experiments we use Monte Carlo noise trials to calculate the bias and variance of reconstruction estimates, and compare the performance of ML-EM estimates to that of regularized EM using both membrane and thin-plate priors, and also to that of filtered backprojection, where the membrane and thin-plate models become simple apodizing filters of specified form. We finally show that the use of higher-order models yields excellent "robustness" in quantitative performance by demonstrating that the thin plate leads to very low bias error over a large range of hyperparameters, while keeping a reasonable variance.
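The Monte Carlo noise-trial bookkeeping used above can be illustrated in miniature: repeatedly corrupt a known "phantom" value with noise, apply a regularized estimator, and measure the bias and variance of the estimates across trials. The scalar estimator, noise model, and prior value here are invented stand-ins for a full SPECT reconstruction.

```python
import random
import statistics

def noise_trials(smoothing, n_trials=300, seed=11):
    """Monte Carlo noise trials in miniature: bias and variance of a
    regularized scalar estimator of a known truth. The smoothing weight
    plays the role of the prior hyperparameter; all values are hypothetical."""
    rng = random.Random(seed)
    truth = 100.0
    estimates = []
    for _ in range(n_trials):
        noisy = rng.gauss(truth, 10.0)                     # noisy measurement
        est = (1 - smoothing) * noisy + smoothing * 80.0   # pull toward prior value 80
        estimates.append(est)
    bias = statistics.fmean(estimates) - truth
    var = statistics.pvariance(estimates)
    return bias, var
```

Sweeping the smoothing weight traces the bias-variance trade-off: stronger regularization lowers variance at the cost of bias, which is exactly the hyperparameter sensitivity the paper maps for the membrane and thin-plate priors.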


Comparison of Recombination Methods and Cooling Factors in Genetic Algorithms Applied to Folding of Protein Model System

  • U, Su Hyeong;Kim, Du Il;Jeong, Seon Hui
    • Bulletin of the Korean Chemical Society
    • /
    • v.21 no.3
    • /
    • pp.281-290
    • /
    • 2000
  • We varied the recombination method of the genetic algorithm (GA), i.e., the crossover step, to compare the efficiency of these methods and to find a more optimal GA method. In one method (A), we select the two conformations (parents) to be recombined by systematic combination of the lowest-energy conformations; in the other (B), we select them with probability proportional to the energy of the conformation. The second variation lies in how the crossover point is selected. First, we select it randomly (1). Second, we find ranges of residues where the internal energy of the molecule does not vary for more than two residues, select randomly among such regions, and take as the crossover point either the first (2a) or the second residue (2b) from the N-terminal side, or the first (2c) or the second residue (2d) from the C-terminal side of the selected region. Third, we select the longest such region and choose the corresponding residue (as in cases 2) of that region (3a, 3b, 3c or 3d). These methods were tested in a 2-dimensional lattice system for 8 different sequences (the same ones used by Unger and Moult, 1993). Results show that, compared to Unger and Moult's result (UM), which corresponds to the B-1 case, our B-1 case performed similarly overall. For several of the sequences, our new methods performed better than UM. When the cooling factor, which controls whether higher-energy conformations are accepted in the Monte Carlo step, was reduced, our B-1 and other cases performed better than UM: we found lower-energy conformers, and found equal-energy conformers in fewer steps. We discuss the importance of cooling factor variation in Monte Carlo simulations of protein folding for different proteins. Method (A) tends to find the minimum-energy conformer faster than method (B), and method (3) is superior or at least equal to method (1).
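The two parent-selection strategies, (A) systematic pairing of lowest-energy conformers versus (B) energy-weighted sampling, can be sketched on a deliberately trivial surrogate problem. Here the "conformation" is a bit string whose energy is its number of 1 bits, standing in for the 2-D lattice folding energy; population size, mutation rate, and generation count are arbitrary.

```python
import random

def ga_minimize(select, n_gen=60, pop_size=20, length=24, seed=13):
    """Toy GA on bit strings whose 'energy' is the count of 1 bits.
    select='rank' pairs the two lowest-energy conformations (method A);
    select='proportional' samples parents with weights favouring lower
    energy (method B). The crossover point is random (method 1)."""
    rng = random.Random(seed)
    energy = lambda c: sum(c)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=energy)
        if select == "rank":
            parents = pop[:2]                                 # method A
        else:
            weights = [1.0 / (1 + energy(c)) for c in pop]
            parents = rng.choices(pop, weights=weights, k=2)  # method B
        cut = rng.randrange(1, length)                        # method 1: random point
        child = parents[0][:cut] + parents[1][cut:]
        if rng.random() < 0.3:                                # occasional point mutation
            i = rng.randrange(length)
            child[i] ^= 1
        pop[-1] = child                                       # replace worst conformer
    return min(energy(c) for c in pop)
```

The real study additionally couples this with a Monte Carlo acceptance step whose cooling factor governs how readily higher-energy children survive; the toy omits that step for brevity.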

Neural network based numerical model updating and verification for a short span concrete culvert bridge by incorporating Monte Carlo simulations

  • Lin, S.T.K.;Lu, Y.;Alamdari, M.M.;Khoa, N.L.D.
    • Structural Engineering and Mechanics
    • /
    • v.81 no.3
    • /
    • pp.293-303
    • /
    • 2022
  • As infrastructure ages and traffic loads increase, serious public concern has arisen for the well-being of bridges. Current health monitoring practice focuses on large-scale bridges rather than short-span bridges; however, it is critical that more attention be given to these behind-the-scenes bridges. The relevant information about their construction methods and as-built properties is most likely missing. Additionally, since the condition of a bridge unavoidably changes during service, due to weathering and deterioration, the material properties and boundary conditions will also have changed since construction. Therefore, it is not appropriate to continue using the design values of the bridge parameters when undertaking any analysis to evaluate bridge performance. It is imperative to update the model, using finite element (FE) analysis, to reflect the current structural condition. In this study, an FE model is established to simulate a concrete culvert bridge in New South Wales, Australia. That model, however, contains a number of parameter uncertainties that would compromise the accuracy of analytical results. The model is therefore updated with a neural network (NN) optimisation algorithm incorporating Monte Carlo (MC) simulation to minimise the uncertainties in its parameters. The modal frequency and strain responses produced by the updated FE model are compared with the frequency and strain values measured on site by sensors. The outcome indicates that NN model updating incorporating MC simulation is a feasible and robust optimisation method for updating numerical models so as to minimise the difference between numerical models and their real-world counterparts.
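The MC side of such model updating can be reduced to its essence: sample the uncertain parameter, predict the measured quantity with a fast surrogate, and keep the sample that best matches the sensor data. The closed-form frequency formula below stands in for the paper's trained neural network, and every number in it is assumed for illustration.

```python
import math
import random

def mc_update(measured_freq, n_samples=5000, seed=17):
    """Monte Carlo parameter updating in miniature: sample the uncertain
    elastic modulus, predict a modal frequency with a toy surrogate
    (standing in for an NN surrogate of the FE model), and keep the
    sample that best matches the measured frequency."""
    rng = random.Random(seed)

    def predict_freq(e_modulus):         # toy surrogate: f proportional to sqrt(E)
        return 12.0 * math.sqrt(e_modulus / 30.0)

    best_e, best_err = None, math.inf
    for _ in range(n_samples):
        e = rng.uniform(20.0, 40.0)      # GPa, assumed uncertainty range
        err = abs(predict_freq(e) - measured_freq)
        if err < best_err:
            best_e, best_err = e, err
    return best_e

# if the "true" modulus were 33 GPa, the measured frequency would be:
measured = 12.0 * math.sqrt(33.0 / 30.0)
```

With enough samples the updated modulus converges on the value consistent with the measurement; the paper does the same in many dimensions, with an NN replacing the closed-form surrogate and strain responses added to the match.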

Using Artificial Neural Network in the reverse design of a composite sandwich structure

  • Mortda M. Sahib;Gyorgy Kovacs
    • Structural Engineering and Mechanics
    • /
    • v.85 no.5
    • /
    • pp.635-644
    • /
    • 2023
  • The design of honeycomb sandwich structures is often challenging because these structures can be tailored from a variety of possible core and face sheet configurations; therefore, the design of sandwich structures is characterized as a time-consuming and complex task. A data-driven computational approach that integrates the analytical method and an Artificial Neural Network (ANN) is developed by the authors to rapidly predict the design of sandwich structures for a targeted maximum structural deflection. The elaborated ANN reverse design approach is applied to obtain the thickness of the sandwich core, the thickness of the laminated face sheets, and safety factors for the composite sandwich structure. The data required for building the ANN model were obtained using the governing equations of the sandwich components in conjunction with the Monte Carlo method. Then, the functional relationship between the input and output features was created using the neural network backpropagation (BP) algorithm. The input variables were the dimensions of the sandwich structure, the applied load, the core density, and the maximum deflection, which was the reverse input given by the designer. The outstanding performance of the reverse ANN model was revealed by a low mean square error (MSE) together with a coefficient of determination (R2) close to unity. Furthermore, the output of the model was in good agreement with the analytical solution, with a maximum error of 4.7%. The combination of the reverse concept and an ANN may provide a potentially novel approach to the design of sandwich structures. The main added value of this study is the elaboration of a reverse ANN model, which provides a computationally inexpensive technique and saves time in the design or redesign of sandwich structures compared to analytical and finite element approaches.
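The data-generation step described above can be sketched directly: sample the design space by Monte Carlo, evaluate a governing equation for each sample, and store (response, design) pairs for the reverse model. A simplified bending-only sandwich-beam deflection formula is used here, a nearest-neighbour lookup stands in for the trained reverse ANN, and the face modulus and dimension ranges are assumed values.

```python
import random

def sandwich_deflection(core_t, face_t, load=1000.0, span=1.0, width=0.1):
    """Midspan deflection of a simply supported sandwich beam under a
    central point load, bending term only (simplified governing equation)."""
    e_face = 70e9                                  # face-sheet modulus in Pa (assumed)
    d = core_t + face_t                            # distance between face centroids
    flex = e_face * face_t * width * d * d / 2.0   # approximate flexural rigidity
    return load * span ** 3 / (48.0 * flex)

def build_reverse_dataset(n=2000, seed=19):
    """Monte Carlo sampling of the design space, stored as
    (deflection, core thickness, face thickness) training rows."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        core_t = rng.uniform(0.01, 0.05)           # core thickness in m (assumed range)
        face_t = rng.uniform(0.001, 0.005)         # face thickness in m (assumed range)
        data.append((sandwich_deflection(core_t, face_t), core_t, face_t))
    return data

def reverse_design(target_deflection, data):
    """Nearest-neighbour lookup standing in for the trained reverse ANN."""
    return min(data, key=lambda row: abs(row[0] - target_deflection))
```

Feeding a target deflection back through the lookup returns a feasible (core, face) design whose predicted deflection is close to the target, which is the reverse-design idea the paper realizes with a backpropagation-trained network instead of a lookup.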