Title/Summary/Keyword: Methods: analytical, statistical, numerical

Efficient simulation using saddlepoint approximation for aggregate losses with large frequencies

  • Cho, Jae-Rin; Ha, Hyung-Tae
    • Communications for Statistical Applications and Methods, v.23 no.1, pp.85-91, 2016
  • Aggregate claim amounts with large claim frequencies are a major concern for automobile insurance companies. In this paper, we show that a new hybrid method combining the analytical saddlepoint approximation with Monte Carlo simulation can be an efficient computational method. We provide numerical comparisons between the hybrid method and the usual Monte Carlo simulation.
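
The abstract does not reproduce the hybrid scheme itself, but its analytical ingredient is standard. Below is a minimal sketch of a first-order saddlepoint density approximation for a compound Poisson aggregate loss with exponential severities; the frequency and severity parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Compound Poisson aggregate loss S = X_1 + ... + X_N,
# N ~ Poisson(lam), X_i ~ Exponential(mean theta).
# Illustrative parameters (assumptions, not from the paper).
lam, theta = 1000.0, 1.0

def K(t):    # cumulant generating function: K(t) = lam * (M_X(t) - 1)
    return lam * (1.0 / (1.0 - theta * t) - 1.0)

def K1(t):   # first derivative K'(t)
    return lam * theta / (1.0 - theta * t) ** 2

def K2(t):   # second derivative K''(t)
    return 2.0 * lam * theta ** 2 / (1.0 - theta * t) ** 3

def saddlepoint_pdf(s):
    # Solve K'(t_hat) = s for the saddlepoint t_hat in (-inf, 1/theta).
    t_hat = brentq(lambda t: K1(t) - s, -50.0, 1.0 / theta - 1e-10)
    # First-order saddlepoint density approximation.
    return np.exp(K(t_hat) - t_hat * s) / np.sqrt(2.0 * np.pi * K2(t_hat))

# Density of the aggregate loss near its mean lam * theta.
print(saddlepoint_pdf(lam * theta))
```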

Robust Optimal Design Method Using Two-Point Diagonal Quadratic Approximation and Statistical Constraints

  • Kwon, Yong-Sam; Kim, Min-Soo; Kim, Jong-Rip; Choi, Dong-Hoon
    • Transactions of the Korean Society of Mechanical Engineers A, v.26 no.12, pp.2483-2491, 2002
  • This study presents an efficient method for robust optimal design. To avoid excessive evaluations of the exact performance functions, the two-point diagonal quadratic approximation method is employed to approximate them during the optimization process. Because this approximation is built from two design points, the second-order sensitivity information of the approximated performance functions can be calculated analytically. As a result, the expensive evaluation of the exact second derivatives of the performance functions is avoided, unlike in conventional robust optimal design methods based on gradient information. Finally, to demonstrate the numerical performance of the proposed method, one mathematical problem and two mechanical design problems are solved, and the results are compared with those of conventional methods.
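
The abstract does not give the TDQA formulas, so the following is only a rough sketch of the two-point idea under my own assumptions: a linear expansion about the current design point plus a single curvature coefficient fitted so the model reproduces the function value at the previous point. The actual method fits one diagonal curvature term per variable; `two_point_quadratic`, the example function `g`, and the design points here are all hypothetical.

```python
import numpy as np

def two_point_quadratic(g, grad_g, x1, x2):
    """Minimal two-point quadratic sketch (an assumption, not the exact
    TDQA of the paper): linear expansion about x2 plus one curvature
    coefficient d fitted so that the model matches g at the older point x1."""
    g2, grad2 = g(x2), grad_g(x2)
    d1 = x1 - x2
    # Fit the curvature coefficient from the previous-point residual.
    d = 2.0 * (g(x1) - g2 - grad2 @ d1) / (d1 @ d1)
    def g_tilde(x):
        dx = x - x2
        return g2 + grad2 @ dx + 0.5 * d * (dx @ dx)
    return g_tilde

# Example: approximate g(x) = x1^2 * x2 between two nearby design points.
g = lambda x: x[0] ** 2 * x[1]
grad_g = lambda x: np.array([2 * x[0] * x[1], x[0] ** 2])
approx = two_point_quadratic(g, grad_g, np.array([1.0, 1.0]), np.array([1.2, 0.9]))
print(approx(np.array([1.1, 0.95])), g(np.array([1.1, 0.95])))  # ~1.150 vs 1.1495
```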

Statistical Analysis of Degradation Data under a Random Coefficient Rate Model

  • Seo, Sun-Keun; Lee, Su-Jin; Cho, You-Hee
    • Journal of Korean Society for Quality Management, v.34 no.3, pp.19-30, 2006
  • For highly reliable products, it is difficult to assess the lifetime of the products with traditional life tests. Accordingly, a recent approach is to observe the performance degradation of the product during the test rather than the usual failure times. This study compares the performance of three methods (the approximation, analytical, and numerical methods) for estimating the parameters and quantiles of the lifetime when the time-to-failure distribution follows a Weibull or lognormal distribution under a random coefficient degradation rate model. Numerical experiments are also conducted to investigate the effects of model error, such as measurement error, in a random coefficient model.
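
As a minimal sketch of the approximation route under a random coefficient rate model (all parameter values below are illustrative assumptions, not the paper's data): if a unit degrades linearly as D(t) = βt with a lognormal random rate β and fails when D(t) first reaches a threshold, the pseudo failure time T = D_f/β is itself lognormal, so lifetime quantiles follow in closed form and can be checked by simulation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Random coefficient degradation rate model (illustrative assumptions):
# unit i degrades as D_i(t) = beta_i * t, with beta_i ~ lognormal(mu, sigma),
# and fails when D_i(t) first reaches the threshold Df.
mu, sigma, Df = -2.0, 0.3, 10.0

# Approximation route: T = Df / beta, so log T ~ N(log(Df) - mu, sigma^2)
# and quantiles follow analytically.
t_p = np.exp(np.log(Df) - mu + sigma * norm.ppf(0.1))   # B10 life
print("analytical 10% quantile:", t_p)

# Numerical check by simulating pseudo failure times.
beta = rng.lognormal(mu, sigma, size=100_000)
T = Df / beta
print("simulated 10% quantile :", np.quantile(T, 0.1))
```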

Comparative Study on the Applicability of Point Estimate Methods in Combination with Numerical Analysis for the Probabilistic Reliability Assessment of Underground Structures

  • Park, Do-Hyun; Kim, Hyung-Mok; Ryu, Dong-Woo; Choi, Byung-Hee; Han, Kong-Chang
    • Tunnel and Underground Space, v.22 no.2, pp.86-92, 2012
  • The point estimate method is less accurate than Monte Carlo simulation, which is usually regarded as an exact probabilistic method, but it remains popular for probability-based reliability assessment in geotechnical and rock engineering because it significantly reduces the number of sampling points and produces the statistical moments of a performance function with reasonable accuracy. In the present study, we investigated the accuracy and applicability of the point estimate methods proposed by Rosenblueth and by Zhou & Nowak by comparing their results with those of Monte Carlo simulations. The comparison was carried out for the problem of a lined circular tunnel in an elastic medium, for which a closed-form analytical solution is available. The comparison showed that, despite the non-linearity of the analytical solution, the statistical moments calculated by the point estimate methods and by the Monte Carlo simulations agreed well, with an average error of roughly 1-2%. This level of agreement demonstrates the applicability of the two point estimate methods, in combination with numerical analysis, for the probabilistic reliability assessment of underground structures.
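
Rosenblueth's method is simple enough to sketch. For n independent, symmetrically distributed inputs, it evaluates the performance function at the 2^n combinations of mean ± one standard deviation, each weighted 1/2^n, and reads the moments off those values. The performance function and parameter values below are illustrative assumptions, not the lined-tunnel solution used in the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def rosenblueth_2pem(g, mu, sigma):
    """Rosenblueth's two-point estimate method for independent, symmetric
    inputs: evaluate g at all 2^n combinations of mu +/- sigma with equal
    weights 1/2^n and return the mean and standard deviation of g."""
    n = len(mu)
    vals = np.array([g(mu + np.array(s) * sigma)
                     for s in product((-1.0, 1.0), repeat=n)])
    return vals.mean(), vals.std()

# Illustrative nonlinear performance function (an assumption).
g = lambda x: x[0] ** 2 / x[1]
mu, sigma = np.array([10.0, 5.0]), np.array([1.0, 0.5])

print("PEM (4 evaluations):", rosenblueth_2pem(g, mu, sigma))

# Monte Carlo reference with normal inputs.
X = rng.normal(mu, sigma, size=(200_000, 2))
gx = X[:, 0] ** 2 / X[:, 1]
print("MC (200k samples)  :", gx.mean(), gx.std())
```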

Sire Evaluation of Count Traits with a Poisson-Gamma Hierarchical Generalized Linear Model

  • Lee, C.; Lee, Y.
    • Asian-Australasian Journal of Animal Sciences, v.11 no.6, pp.642-647, 1998
  • A Poisson error model, as a generalized linear mixed model (GLMM), has been suggested for the genetic analysis of counted observations. One of the assumptions in this model is normality of the random effects. Since this assumption is not always appropriate, a more flexible model is needed. For count traits, a Poisson hierarchical generalized linear model (HGLM) that does not require normality of the random effects was proposed. In this paper, a Poisson-Gamma HGLM is examined along with the corresponding analytical methods. While the Poisson GLMM poses a difficulty in making inferences about the expected values of observations, this can be avoided with the Poisson-Gamma HGLM. A numerical example with simulated embryo yield data is presented.
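
A quick way to see the model's structure (with made-up sire parameters, not the authors' simulated embryo yield data): conditional on a gamma sire effect, counts are Poisson, so the marginal distribution is negative binomial and its first two moments identify the parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Poisson-Gamma hierarchy (illustrative assumptions):
# y_ij | u_i ~ Poisson(mu * u_i), with gamma sire effects
# u_i ~ Gamma(shape=alpha, scale=1/alpha) so that E[u_i] = 1.
mu, alpha, n_sires, n_daughters = 3.0, 2.0, 200, 20

u = rng.gamma(alpha, 1.0 / alpha, size=n_sires)   # sire effects
y = rng.poisson(mu * np.repeat(u, n_daughters))   # counts per daughter

# Marginally, y is negative binomial with mean mu and
# variance mu + mu^2 / alpha; method-of-moments recovery:
m, v = y.mean(), y.var()
print("mu_hat   :", m)
print("alpha_hat:", m ** 2 / (v - m))
```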

Deep learning in nickel-based superalloys solvus temperature simulation

  • Tarasov, Dmitry A.; Tyagunov, Andrey G.; Milder, Oleg B.
    • Advances in aircraft and spacecraft science, v.9 no.5, pp.367-375, 2022
  • Modeling the properties of complex alloys such as nickel superalloys is an extremely challenging scientific and engineering task. The model should take into account a large number of uncorrelated factors, for many of which information may be missing or vague. The individual contribution of one or another of a dozen possible alloying elements cannot be determined by traditional methods. Moreover, there are no general analytical models describing the influence of the elements on the characteristics of the alloys. Artificial neural networks are one of the few statistical modeling tools that can account for many implicit correlations and establish correspondences that cannot be identified by other, more familiar mathematical methods. However, such networks require careful tuning to achieve high performance, which is time-consuming. Data preprocessing can make model training much easier and faster. This article focuses on combining a physics-based deep network configuration with input data engineering to simulate the solvus temperature of nickel superalloys. The deep artificial neural network used here shows good simulation results, so this method of numerical simulation can readily be applied to such problems.
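
The paper's architecture and database are not described in the abstract; the sketch below only illustrates the general recipe of input scaling plus a small multilayer perceptron on synthetic composition data. The feature ranges, the target relation, and the network sizes are all invented for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic stand-in data (assumptions, not the authors' database):
# rows are compositions in wt.% of a dozen elements, the target is a
# solvus-temperature-like quantity in K with a made-up dependence.
X = rng.uniform(0.0, 20.0, size=(500, 12))
y = (1400.0 + 8.0 * X[:, 0] - 3.0 * X[:, 1] + 0.5 * X[:, 2] * X[:, 3]
     + rng.normal(0.0, 5.0, size=500))

# Input scaling is the kind of preprocessing the abstract emphasizes.
Xs = StandardScaler().fit_transform(X)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000,
                     random_state=0)
model.fit(Xs[:400], y[:400])
print("R^2 on held-out alloys:", model.score(Xs[400:], y[400:]))
```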

Development of a Criterion for Efficient Numerical Calculation of Structural Vibration Responses

  • Kim, Woonkyung M.; Kim, Jeung-Tae; Kim, Jung-Soo
    • Journal of Mechanical Science and Technology, v.17 no.8, pp.1148-1155, 2003
  • The finite element method is one of the methods widely applied for predicting vibration in mechanical structures. In this paper, the effect of the mesh size of the finite element model on the accuracy of the numerical solutions of structural vibration problems is investigated, with particular focus on obtaining the optimal mesh size with respect to solution accuracy and computational cost. The vibration response parameters discussed are the natural frequency, modal density, and driving-point mobility. For accurate driving-point mobility calculation, the decay method is employed to determine the internal damping experimentally. A uniform plate simply supported at its four corners is examined in detail, with the response parameters calculated from finite element models of different mesh sizes. The accuracy of the finite element solutions is evaluated by comparison with analytical results and with estimates based on statistical energy analysis or, where these are unavailable, by testing numerical convergence. Once the mesh size becomes smaller than one quarter of the wavelength at the highest frequency of interest, the improvement in solution accuracy is found to be negligible, while the computational cost increases rapidly. For mechanical structures, finite element analysis with a mesh size of the order of a quarter wavelength, combined with the decay method for obtaining internal damping, is found to provide satisfactory predictions of vibration responses.
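
The quarter-wavelength criterion is easy to evaluate in advance. For a thin plate, the bending wavelength at the highest frequency of interest follows from the bending stiffness D = Eh^3 / 12(1 - nu^2) and the bending wavenumber k_b = (omega^2 * rho * h / D)^(1/4); the material and plate values below are illustrative (a generic steel plate), not the paper's test specimen.

```python
import numpy as np

# Quarter-wavelength mesh criterion for a thin plate (illustrative
# steel-plate values, assumed for the example).
E, rho, nu = 2.1e11, 7850.0, 0.3      # Pa, kg/m^3, Poisson's ratio
h = 3e-3                              # plate thickness, m
f_max = 2000.0                        # highest frequency of interest, Hz

D = E * h ** 3 / (12.0 * (1.0 - nu ** 2))        # bending stiffness
omega = 2.0 * np.pi * f_max
k_b = (omega ** 2 * rho * h / D) ** 0.25         # bending wavenumber
wavelength = 2.0 * np.pi / k_b

# Element size of about a quarter wavelength at f_max.
print("bending wavelength at f_max: %.3f m" % wavelength)
print("recommended mesh size      : %.3f m" % (wavelength / 4.0))
```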

COSMOLOGY WITH MASSIVE NEUTRINOS: CHALLENGES TO THE STANDARD ΛCDM PARADIGM

  • ROSSI, GRAZIANO
    • Publications of The Korean Astronomical Society, v.30 no.2, pp.321-325, 2015
  • Determining the absolute neutrino mass scale and the neutrino mass hierarchy are central goals in particle physics, with important implications for the Standard Model. However, the final answer may come from cosmology, as laboratory experiments provide measurements for two of the squared mass differences and a stringent lower bound on the total neutrino mass, but the upper bound is still poorly constrained, even when considering forecasted results from future probes. Cosmological tracers are very sensitive to neutrino properties and their total mass, because massive neutrinos produce a specific redshift- and scale-dependent signature in the power spectrum of the matter and galaxy distributions. Stringent upper limits on $\sum m_\nu$ will be essential for understanding the neutrino sector, and will nicely complement particle physics results. To this end, we describe here a series of cosmological hydrodynamical simulations which include massive neutrinos, specifically designed to meet the requirements of the Baryon Oscillation Spectroscopic Survey (BOSS) and focused on the Lyman-$\alpha$ (Ly$\alpha$) forest, which is also a useful theoretical ground for upcoming surveys such as SDSS-IV/eBOSS and DESI. We then briefly highlight the remarkable constraining power of the Ly$\alpha$ forest in terms of the total neutrino mass when combined with other state-of-the-art cosmological probes, leading to a stringent upper bound on $\sum m_\nu$.
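
The scale of the neutrino signature can be estimated with standard linear-theory relations (the cosmological parameter values below are illustrative, roughly Planck-like assumptions): the neutrino density follows from $\Omega_\nu h^2 = \sum m_\nu / 93.14\,\mathrm{eV}$, and the small-scale suppression of the matter power spectrum is roughly $\Delta P / P \approx -8 f_\nu$ with $f_\nu = \Omega_\nu / \Omega_m$.

```python
# Back-of-the-envelope for the massive-neutrino signature (standard
# linear-theory relations; parameter values are assumed, not the paper's).
sum_mnu = 0.3          # total neutrino mass in eV (assumed)
h = 0.67               # dimensionless Hubble parameter
Omega_m = 0.31         # total matter density parameter

Omega_nu = sum_mnu / (93.14 * h ** 2)   # standard mass-density conversion
f_nu = Omega_nu / Omega_m               # neutrino fraction of the matter

# Classic rule of thumb for the small-scale power suppression.
print("f_nu                 : %.4f" % f_nu)
print("Delta P / P ~ -8 f_nu: %.3f" % (-8.0 * f_nu))
```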