• Title/Summary/Keyword: Monte Carlo Approach

Optimization-based method for structural damage detection with consideration of uncertainties - a comparative study

  • Ghiasi, Ramin;Ghasemi, Mohammad Reza
    • Smart Structures and Systems
    • /
    • v.22 no.5
    • /
    • pp.561-574
    • /
    • 2018
  • In this paper, to efficiently reduce the computational cost of model updating during the optimization process of damage detection, the structural response is evaluated using a properly trained surrogate model. Furthermore, in practice, uncertainties in the FE model parameters and modelling errors are inevitable. Hence, an efficient approach based on Monte Carlo simulation is proposed to take the effect of uncertainties into account in developing a surrogate model. The probability of damage existence (PDE) is calculated based on the probability density functions of the undamaged and damaged states. The current work builds a framework for Probability Based Damage Detection (PBDD) of structures based on the best combination of metaheuristic optimization algorithm and surrogate model. To reach this goal, three popular metamodeling techniques, including Cascade Feed Forward Neural Network (CFNN), Least Square Support Vector Machines (LS-SVMs) and Kriging, are constructed, trained and tested in order to inspect the features and faults of each algorithm. Furthermore, three well-known optimization algorithms, including Ideal Gas Molecular Movement (IGMM), Particle Swarm Optimization (PSO) and Bat Algorithm (BA), are utilized and the comparative results are presented accordingly. In addition, efficient schemes are implemented on these algorithms to improve their performance in handling problems with a large number of variables. Considering various indices for measuring the accuracy and computational time of the PBDD process, the results indicate that the combination of the LS-SVM surrogate model with the IGMM optimization algorithm has better performance in predicting damage compared with the other methods.
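
A minimal numpy sketch of the Monte Carlo PDE idea described above, assuming a linear stand-in for the trained surrogate-based identification; the `identify` function and every number below are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mc = 10_000

def identify(measured_freq, model_noise):
    # Stand-in for optimization + trained surrogate (CFNN / LS-SVM / Kriging):
    # maps a measured natural frequency to a stiffness-reduction estimate.
    return (10.0 - measured_freq) / 4.0 + model_noise

noise = rng.normal(0.0, 0.02, n_mc)                         # FE-model / modelling error
theta_undamaged = identify(10.0 + rng.normal(0, 0.05, n_mc), noise)
theta_damaged   = identify(8.8 + rng.normal(0, 0.05, n_mc), noise)

# PDE: fraction of Monte Carlo samples in which the damaged-state estimate
# exceeds the undamaged-state estimate.
pde = np.mean(theta_damaged > theta_undamaged)
print(f"probability of damage existence: {pde:.3f}")
```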

Central Composite Design Matrix (CCDM) for Phthalocyanine Reactive Dyeing of Nylon Fiber: Process Analysis and Optimization

  • Ravikumar, K.;Kim, Byung-Soon;Son, Young-A
    • Textile Coloration and Finishing
    • /
    • v.20 no.2
    • /
    • pp.19-28
    • /
    • 2008
  • The objective of this study was to apply the statistical technique known as design of experiments to optimize the % exhaustion variables for phthalocyanine dyeing of nylon fiber. In this study, a three-factor Central Composite Rotatable Design (CCRD) was used to establish the optimum conditions for the phthalocyanine reactive dyeing of nylon fiber. Temperature, pH and liquor ratio were considered as the variables of interest. An acidic solution with higher temperature and lower liquor ratio was found to provide suitable conditions for higher % exhaustion. These three variables were used as independent variables, whose effects on % exhaustion were evaluated. Significant polynomial regression models describing the changes in % exhaustion and % fixation with respect to the independent variables were established, with coefficients of determination, $R^2$, greater than 0.90. Close agreement between experimental and predicted yields was obtained. Optimum conditions at which maximum dyeing efficiency is achieved were obtained using surface plots and Monte Carlo simulation techniques. The significance of both the main effects and the interactions was assessed by the analysis of variance (ANOVA) approach. Based on the statistical analysis, the results provided much valuable information on the relationship between the response variables and the independent variables. This study demonstrates that the CCRD can be efficiently applied for the empirical modeling of % exhaustion and % fixation in dyeing. It also shows that it is an economical way of obtaining the maximum amount of information in a short period of time with the least number of experiments.
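
A hedged sketch of the response-surface-plus-Monte-Carlo step the abstract describes: fit a quadratic model to (coded) design runs, then sample the coded region to locate the predicted optimum. The 20-run design matrix and the response values are synthetic stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# X columns: temperature, pH, liquor ratio in coded units; y: % exhaustion.
X = rng.uniform(-1.682, 1.682, size=(20, 3))    # stand-in for a 20-run CCRD matrix
y = 80 + 5*X[:, 0] - 4*X[:, 1] - 3*X[:, 2] - 2*(X**2).sum(axis=1) + rng.normal(0, 0.5, 20)

def design(X):
    # full quadratic model: intercept, linear, interaction, and square terms
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)

# Monte Carlo search over the coded region for the maximum predicted exhaustion.
cand = rng.uniform(-1.682, 1.682, size=(100_000, 3))
pred = design(cand) @ beta
best = cand[np.argmax(pred)]
print("optimum (coded units):", best, "predicted % exhaustion:", pred.max())
```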

Intentional GNSS Interference Detection and Characterization Algorithm Using AGC and Adaptive IIR Notch Filter

  • Yang, Jeong Hwan;Kang, Chang Ho;Kim, Sun Young;Park, Chan Gook
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.13 no.4
    • /
    • pp.491-498
    • /
    • 2012
  • A Ground Based Augmentation System (GBAS) is an enabling technology for an aircraft's precision approach based on a Global Navigation Satellite System (GNSS). However, GBAS is vulnerable to interference, so effective GNSS interference detection and mitigation methods need to be employed. In this paper, an intentional GNSS interference detection and characterization algorithm is proposed. The algorithm uses Automatic Gain Control (AGC) gain and adaptive notch filter parameters to classify types of incoming interference and to characterize them. The AGC gain and adaptive lattice IIR notch filter parameter values in GNSS receivers are examined according to interference types and power levels. Based on those data, the interference detection and characterization algorithm is developed and Monte Carlo simulations are carried out for performance analysis of the proposed method. Here, the proposed algorithm is used to detect and characterize single-tone continuous wave interference, swept continuous wave interference, and band-limited white Gaussian noise. The algorithm can be used for GNSS interference monitoring in an excessive Radio Frequency Interference environment which causes loss of receiver tracking. This interference detection and characterization algorithm will be used to enhance the interference mitigation algorithm.
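
A rough illustration of the adaptive-notch idea in this abstract. The paper uses an adaptive lattice IIR structure; the sketch below substitutes a direct-form second-order notch whose coefficient a = 2*cos(w0) is driven by a stochastic-gradient rule to lock onto a single-tone interferer. The sampling rate, tone frequency, pole radius rho, and step size mu are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, f_jam = 5e6, 1.2e6
n = np.arange(20_000)
x = np.sin(2 * np.pi * f_jam / fs * n) + 0.1 * rng.standard_normal(n.size)

rho, mu, a = 0.9, 1e-3, 0.0          # pole radius, step size, initial coefficient
x1 = x2 = e1 = e2 = 0.0
for xn in x:
    # notch output: e[n] = x[n] - a*x[n-1] + x[n-2] + rho*a*e[n-1] - rho^2*e[n-2]
    e = xn - a * x1 + x2 + rho * a * e1 - rho**2 * e2
    grad = -x1 + rho * e1             # approximate d e[n] / d a
    a = np.clip(a - mu * e * grad, -2.0, 2.0)   # minimize notch output power
    x2, x1, e2, e1 = x1, xn, e1, e

f_est = np.arccos(np.clip(a / 2.0, -1.0, 1.0)) * fs / (2 * np.pi)
print(f"estimated interference frequency: {f_est / 1e6:.2f} MHz")
```

In a receiver, the converged notch frequency and residual power would feed the characterization logic alongside the AGC gain, which the sketch omits.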

Stochastic Simulation of Groundwater Flow in Heterogeneous Formations: a Virtual Setting via Realizations of Random Field (불균질지층내 지하수 유동의 확률론적 분석 : 무작위성 분포 재생을 통한 가상적 수리시험)

  • Lee, Kang-Kun
    • Journal of the Korean Society of Groundwater Environment
    • /
    • v.1 no.2
    • /
    • pp.90-99
    • /
    • 1994
  • Heterogeneous hydraulic conductivity in a flow domain is generated under the assumption that it is a random variable with a lognormal, spatially correlated distribution. The hydraulic head and the conductivity in a groundwater flow system are represented as stochastic processes. The method of Monte Carlo Simulation (MCS) and the finite element method (FEM) are used to determine the statistics of the head and the log-conductivity. The second moments of the head and the log-conductivity indicate that the cross-covariance of the log-conductivity with the head has characteristic distribution patterns depending on the properties of sources, boundary conditions, head gradients, and correlation scales. The negative cross-correlation outlines a weak-response zone where the flow system responds weakly to a stress change in the flow domain. The stochastic approach has the potential to quantitatively delineate the zone of influence through computations of the cross-covariance distribution.
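
A minimal sketch of the realization step this abstract relies on: generating lognormal, spatially correlated conductivity fields by a Cholesky factorization of an exponential covariance on a 1-D grid. The grid, variance, and correlation scale are illustrative; the paper's FEM head solve is only indicated by a comment.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 100.0, 101)                 # grid coordinates [m]
mean_lnK, sigma_lnK, corr_len = np.log(1e-4), 1.0, 10.0

# Exponential covariance of ln K and its Cholesky factor (jitter for stability).
C = sigma_lnK**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))

n_mc = 500
lnK = mean_lnK + L @ rng.standard_normal((x.size, n_mc))
K = np.exp(lnK)                                  # lognormal conductivity realizations

# Each column of K would drive one FEM head solution; the ensemble then yields
# head statistics and the head / ln K cross-covariance discussed above.
print("ensemble mean ln K:", lnK.mean(), "variance:", lnK.var())
```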

An Application of Realistic Evaluation Methodology for Large Break LOCA of Westinghouse 3 Loop Plant

  • Choi, Han-Rim;Hwang, Tae-Suk;Chung, Bub-Dong;Jun, Hwang-Yong;Lee, Chang-Sub
    • Proceedings of the Korean Nuclear Society Conference
    • /
    • 1996.05b
    • /
    • pp.513-518
    • /
    • 1996
  • This report presents a demonstration of the application of a realistic evaluation methodology to a postulated cold-leg large break LOCA in a Westinghouse three-loop pressurized water reactor with 17$\times$17 fuel. The new method of this analysis can be divided into three distinct steps: 1) best estimate code validation and uncertainty quantification, 2) realistic LOCA calculation, and 3) limiting value LOCA calculation and uncertainty combination. RELAP5/MOD3/K [1], which was improved from RELAP5/MOD3.1, and the CONTEMPT4/MOD5 code were used as the best estimate thermal-hydraulic models for the realistic LOCA calculation. The code uncertainties to be determined in step 1) were already quantified in a previous study [2], and thus steps 2) and 3) for the plant application are presented in this paper. The application uncertainty parameters are divided into two categories, i.e., plant system parameters and fuel statistical parameters. Single-parameter sensitivity calculations were performed to select the system parameters to be set at their limiting values in the Limiting Value Approach (LVA) calculation. A single run of the LVA calculation generated 27 PCT data points for the various combinations of fuel parameters, and these data provided the input for response surface generation. The probability distribution function was generated from Monte Carlo sampling of the response surface, and the upper 95$^{th}$ percentile PCT was determined. A break spectrum analysis was also performed to determine the critical break size. The results show that a sufficient LOCA margin can be obtained for the demonstration NPP.
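
An illustrative sketch of the final step the abstract describes: fit a response surface to the 27 PCT values, then Monte Carlo sample it to estimate the upper 95th-percentile PCT. The two coded fuel parameters, the synthetic PCT data, and the sampling distribution are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# 27 (fuel-parameter combination, PCT) pairs standing in for the LVA run output.
Xf = rng.uniform(-1, 1, size=(27, 2))
pct = 1100 + 60*Xf[:, 0] + 40*Xf[:, 1] + 25*Xf[:, 0]*Xf[:, 1] + rng.normal(0, 5, 27)

def quad(X):
    # quadratic response surface in two coded fuel parameters
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0]*X[:, 1], X[:, 0]**2, X[:, 1]**2])

coef, *_ = np.linalg.lstsq(quad(Xf), pct, rcond=None)

# Monte Carlo sampling of the fuel parameters through the response surface.
S = rng.normal(0.0, 0.4, size=(100_000, 2))
pct_samples = quad(S) @ coef
print("95th-percentile PCT [K]:", np.percentile(pct_samples, 95))
```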

Effect of Probability Distribution of Coefficient of Consolidation on Probabilistic Analysis of Consolidation in Heterogeneous Soil (비균질 지반에서 압밀계수의 확률분포가 압밀의 확률론적 해석에 미치는 영향)

  • Bong, Tae-Ho;Heo, Joon;Son, Young-Hwan
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.60 no.3
    • /
    • pp.63-70
    • /
    • 2018
  • In this study, a simple probabilistic approach using an equivalent coefficient of consolidation ($c_e$) was proposed to consider the spatial variability of the coefficient of vertical consolidation ($c_v$), and the effect of the probability distribution of the coefficient of consolidation on the degree of consolidation in heterogeneous soil was investigated. The statistical characteristics of the consolidation coefficient were estimated from 1,226 field data, and four probability distributions (normal, log-normal, gamma, and Weibull) were applied to consider the effect of the probability distribution. Random fields of the coefficient of consolidation were generated based on the Karhunen-Loeve expansion. The equivalent coefficient of consolidation was then calculated from the random field and used as the input value for the consolidation analysis. As a result, the probabilistic analysis could be performed efficiently by separating the random field generation from the numerical analysis, and it was carried out using Latin hypercube Monte Carlo simulation. The results showed that the statistical properties of $c_e$ were changed by the probability distribution and spatial variability of $c_v$, and that the probability distribution of $c_v$ has considerable effects on the probabilistic results. There was a large difference in failure probability depending on the probability distribution when the autocorrelation distance was small (i.e., highly heterogeneous soil). Therefore, the selection of a suitable probability distribution of $c_v$ is very important for reliable probabilistic analysis of consolidation.
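
A sketch of the machinery this abstract describes, under simplifying assumptions: a truncated Karhunen-Loeve (KL) expansion of ln($c_v$) on a 1-D depth profile, sampled by Latin hypercube, then collapsed to an equivalent coefficient $c_e$ (a harmonic mean here, as one plausible choice; the paper's definition may differ). All numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
z = np.linspace(0.0, 10.0, 51)                  # depth grid [m]
mu_ln, sd_ln, theta = np.log(2e-3), 0.5, 2.0    # lognormal c_v stats, autocorr. distance

# Discrete KL: eigen-decompose the exponential covariance of ln(c_v).
C = sd_ln**2 * np.exp(-np.abs(z[:, None] - z[None, :]) / theta)
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1][:10]           # keep the 10 dominant modes
lam, phi = eigval[order], eigvec[:, order]

def latin_hypercube(n, d, rng):
    # one uniform draw per stratum in each dimension, then decorrelate by shuffling
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

n_mc = 1000
xi = norm.ppf(latin_hypercube(n_mc, lam.size, rng))   # standard-normal KL coefficients
cv = np.exp(mu_ln + xi @ (np.sqrt(lam) * phi).T)      # (n_mc, n_nodes) realizations

# Equivalent coefficient over the profile, one value per realization.
c_e = 1.0 / np.mean(1.0 / cv, axis=1)
print("mean c_e:", c_e.mean(), "COV:", c_e.std() / c_e.mean())
```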

Probabilistic Security Analysis in Composite Power System Reliability (복합전력계통 신뢰도평가에 있어서 확률론적 안전도연구)

  • Kim, H.;Cha, J.;Kim, J.O.;Kwon, S.
    • Proceedings of the KIEE Conference
    • /
    • 2005.11b
    • /
    • pp.46-48
    • /
    • 2005
  • This paper discusses a probabilistic method for power system security assessment. Security analysis relates to the ability of electric power systems to survive sudden disturbances such as electric short circuits or unanticipated loss of system elements. It consists of both steady-state and dynamic security analyses, which are not two separate issues but should be considered together. In steady-state security analysis, including voltage security analysis, the analysis checks that the system is operated within security limits by OPF (optimal power flow) after the transition to a new operating point. Until now, many utilities have had difficulty including dynamic aspects due to limited computational capabilities. On the other hand, dynamic security analysis is required to ensure that the transition leads to an acceptable operating condition. Transient stability, which is the ability of power systems to maintain synchronism when subjected to a large disturbance, is a principal component of dynamic security analysis. Usually, any loss of synchronism may cause additional outages and make the steady-state analysis of the post-contingency condition inadequate for unstable cases. This is the reason dynamic studies are needed in power systems. A probabilistic criterion can be used to recognize the probabilistic nature of system components while considering system security. In this approach, no predetermined margin of safety has to be assigned. A comprehensive conceptual framework for probabilistic static and dynamic assessment is presented in this paper. The simulation results for the Western System Coordinating Council (WSCC) system compare an analytical method with Monte Carlo simulation (MCS).
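
The probabilistic criterion ultimately rests on sampling system states. The toy non-sequential Monte Carlo sketch below samples independent generator outages and estimates a loss-of-load probability; the actual study evaluates each state with OPF and transient-stability checks, for which a simple capacity test stands in here. Capacities, outage rates, and the load level are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

gen_cap = np.array([200.0, 200.0, 150.0, 100.0])   # unit capacities [MW], illustrative
gen_for = np.array([0.04, 0.04, 0.06, 0.08])       # forced outage rates
load = 480.0                                       # system load [MW]

n_mc = 200_000
up = rng.random((n_mc, gen_cap.size)) > gen_for    # unit availability per sampled state
available = (up * gen_cap).sum(axis=1)
lolp = np.mean(available < load)                   # loss-of-load probability estimate
print(f"LOLP estimate: {lolp:.4f}")
```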

A Bayesian zero-inflated Poisson regression model with random effects with application to smoking behavior (랜덤효과를 포함한 영과잉 포아송 회귀모형에 대한 베이지안 추론: 흡연 자료에의 적용)

  • Kim, Yeon Kyoung;Hwang, Beom Seuk
    • The Korean Journal of Applied Statistics
    • /
    • v.31 no.2
    • /
    • pp.287-301
    • /
    • 2018
  • It is common to encounter count data with excess zeros in various research fields such as the social sciences, natural sciences, medical science or engineering. Such count data have been explained mainly by the zero-inflated Poisson model and its extensions. Zero-inflated count data are also often correlated or clustered, in which case random effects should be taken into account in the model. Frequentist approaches have commonly been used to fit such data. However, a Bayesian approach has the advantages of incorporating prior information, avoiding asymptotic approximations, and allowing practical estimation of functions of the parameters. We consider a Bayesian zero-inflated Poisson regression model with random effects for correlated zero-inflated count data. We conducted simulation studies to check the performance of the proposed model. We also applied the proposed model to smoking behavior data from the Regional Health Survey (2015) of the Korea Centers for Disease Control and Prevention.
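
A hedged PyMC3 sketch of a zero-inflated Poisson regression with a cluster-level random intercept, in the spirit of this abstract. The synthetic data, priors, covariate, and parameterization below are assumptions for illustration, not the authors' model or code.

```python
import numpy as np
import pymc3 as pm

rng = np.random.default_rng(7)
n, n_clusters = 300, 20
cluster = rng.integers(0, n_clusters, n)
x = rng.normal(size=n)
y = rng.poisson(np.exp(0.3 + 0.5 * x)) * (rng.random(n) > 0.3)  # synthetic ZIP counts

with pm.Model() as zip_model:
    beta0 = pm.Normal("beta0", 0.0, 10.0)
    beta1 = pm.Normal("beta1", 0.0, 10.0)
    sigma_u = pm.HalfNormal("sigma_u", 1.0)
    u = pm.Normal("u", 0.0, sigma_u, shape=n_clusters)   # cluster random intercepts
    psi = pm.Beta("psi", 1.0, 1.0)                       # P(observation is Poisson)
    mu = pm.math.exp(beta0 + beta1 * x + u[cluster])
    pm.ZeroInflatedPoisson("y", psi=psi, theta=mu, observed=y)
    trace = pm.sample(1000, tune=1000, target_accept=0.9, return_inferencedata=False)

print(pm.summary(trace, var_names=["beta0", "beta1", "sigma_u", "psi"]))
```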

An Application of Dirichlet Mixture Model for Failure Time Density Estimation to Components of Naval Combat System (디리슈레 혼합모형을 이용한 함정 전투체계 부품의 고장시간 분포 추정)

  • Lee, Jinwhan;Kim, Jung Hun;Jung, BongJoo;Kim, Kyeongtaek
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.42 no.4
    • /
    • pp.194-202
    • /
    • 2019
  • Reliability analysis of components frequently starts with the data that the manufacturer provides. If enough failure data are collected from field operations, the reliability should be recomputed and updated on the basis of the field failure data. However, when the failure time record for a component contains only a few observations, all statistical methodologies are limited. In this case, where failure records for multiple identical components are available, a valid alternative is to combine all the data from each component into one data set with a sufficient sample size and to utilize the useful information in the censored data. The ROK Navy has been operating multiple Patrol Killer Guided missiles (PKGs) for several years. The Korea Multi-Function Control Console (KMFCC) is one of the key components in the PKG combat system. The maintenance record for the KMFCC contains fewer than ten failure observations and a censored datum. This paper proposes a Bayesian approach with a Dirichlet mixture model to estimate the failure time density for the KMFCC. Trend tests for each component record indicated that the null hypothesis that failure occurrence is a renewal process is not rejected. Since the KMFCCs have been functioning under different operating environments, the failure time distribution may be a composition of a number of unknown distributions, i.e., a mixture distribution, rather than a single distribution. The Dirichlet mixture model was coded as a probabilistic program in Python using PyMC3. The Markov Chain Monte Carlo (MCMC) sampling technique employed in PyMC3 then estimated the posterior distributions of the parameters of the Dirichlet mixture model. The simulation results revealed that the mixture models provide superior fits to the combined data set over single-distribution models.
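
A minimal PyMC3 sketch of a Dirichlet-weighted mixture for failure times, in the spirit of this abstract. The Weibull component family, K = 3, the priors, the made-up failure times, and the omission of the censored observation are all simplifying assumptions, not the authors' model.

```python
import numpy as np
import pymc3 as pm

# Illustrative failure times [hours]; the real record is fewer than ten
# observations plus a censored datum, which this sketch does not handle.
failure_times = np.array([180.0, 260.0, 350.0, 420.0, 480.0, 610.0, 700.0, 850.0, 990.0])

K = 3
with pm.Model() as dmm:
    w = pm.Dirichlet("w", a=np.ones(K))            # mixture weights
    alpha = pm.Gamma("alpha", 2.0, 1.0, shape=K)   # Weibull shape parameters
    beta = pm.Gamma("beta", 2.0, 0.005, shape=K)   # Weibull scale parameters
    comps = pm.Weibull.dist(alpha=alpha, beta=beta, shape=K)
    pm.Mixture("t", w=w, comp_dists=comps, observed=failure_times)
    trace = pm.sample(1000, tune=1000, return_inferencedata=False)

print(pm.summary(trace, var_names=["w"]))
```

The posterior weights `w` indicate how strongly each component is supported; a censored observation would enter through its survival function rather than the density.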

Traffic Asymmetry Balancing in OFDMA-TDD Cellular Networks

  • Foutekova, Ellina;Sinanovic, Sinan;Haas, Harald
    • Journal of Communications and Networks
    • /
    • v.10 no.2
    • /
    • pp.137-147
    • /
    • 2008
  • This paper proposes a novel approach to interference avoidance via inter-cell relaying in cellular OFDMA-TDD (orthogonal frequency division multiple access - time division duplex) systems. The proposed scheme, termed asymmetry balancing, is targeted towards next-generation cellular wireless systems, which are envisaged to have ad hoc and multi-hop capabilities. Asymmetry balancing resolves the detrimental base station (BS)-to-BS interference problem inherent to TDD networks by synchronizing the TDD switching points (SPs) across cells. In order to maintain the flexibility of TDD in serving the asymmetry demands of individual cells, inter-cell relaying is employed. Mobile stations (MSs) in a cell that has a shortage of uplink (UL) resources and spare downlink (DL) resources use the free DL resources to off-load UL traffic to cooperating MSs in a neighboring cell using ad hoc communication. In an analogous fashion, DL traffic can be balanced. The purpose of this paper is to introduce the asymmetry balancing concept by considering a seven-cell cluster with a single overloaded cell in the center. A mathematical model is developed to quantify the envisaged gains of asymmetry balancing and is verified via Monte Carlo simulations. It is demonstrated that asymmetry balancing offers great flexibility in UL-DL resource allocation. In addition, the results show that a spectral efficiency improvement of more than 100% can be obtained with respect to a case where the TDD SPs are adapted to the cell-specific demands.
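
A toy Monte Carlo sketch of the asymmetry-balancing idea: a center cell whose UL demand overflows its UL slots off-loads the excess onto spare DL slots of the six neighboring cells via relaying. Slot counts and demand statistics are invented, not the paper's system model, and interference effects are ignored.

```python
import numpy as np

rng = np.random.default_rng(8)

slots_ul, slots_dl = 4, 6                 # synchronized TDD split (slots per frame)
n_frames = 100_000
ul_demand = rng.poisson(6.0, n_frames)    # overloaded center cell, UL slot requests
neigh_dl_free = rng.binomial(slots_dl, 0.5, (n_frames, 6)).sum(axis=1)

blocked_plain = np.maximum(ul_demand - slots_ul, 0)        # no balancing
relayed = np.minimum(blocked_plain, neigh_dl_free)         # carried via inter-cell relays
blocked_balanced = blocked_plain - relayed

print("blocked share w/o balancing :", blocked_plain.mean() / ul_demand.mean())
print("blocked share with balancing:", blocked_balanced.mean() / ul_demand.mean())
```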