• Title/Summary/Keyword: statistical probability distribution functions


Influence of overload on the fatigue crack growth retardation and the statistical variation (강의 피로균열지연거동에 미치는 과대하중의 영향과 통계적 변동에 관한 연구)

  • 김선진;남기우;김종훈;이창용;박은희;서상하
    • Journal of Ocean Engineering and Technology
    • /
    • v.11 no.3
    • /
    • pp.76-88
    • /
    • 1997
  • Constant ΔK fatigue crack growth rate experiments were performed by applying intermediate single and multiple overloads to structural steel, SM45C. The purpose of the present study is to investigate the influence of multiple overloads at various stress intensity factor ranges and the statistical variability of crack retardation behavior. The normalized delayed load cycle, delayed crack length, and minimum crack growth rate increase with increasing baseline stress intensity factor range when the overload ratio and the number of overload applications are constant. The crack retardation under a low baseline stress intensity factor range increases with the number of overload applications, but the minimum crack growth rate decreases with the number of overload applications. A strong linear correlation exists between the minimum crack growth rate and the number of overload applications. Variability in the crack growth retardation behavior was observed: the probability distribution functions of the delayed load cycle, delayed crack length, and crack growth life follow the 2-parameter Weibull distribution. The coefficients of variation of the delayed load cycle and delayed crack length for the 10-overload-application data are 14.8% and 9.2%, respectively.

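
The 2-parameter Weibull fit and coefficient-of-variation summary described in this abstract can be sketched as below; the sample values, shape, and scale are illustrative placeholders, not the paper's SM45C retardation data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical delayed-load-cycle data; shape/scale are illustrative only.
samples = stats.weibull_min.rvs(2.5, scale=1e5, size=200, random_state=rng)

# Fit a 2-parameter Weibull (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(samples, floc=0)

# Coefficient of variation (std / mean), reported in percent as in the abstract.
cv_pct = 100.0 * samples.std(ddof=1) / samples.mean()
print(f"shape={shape:.2f}, scale={scale:.0f}, CV={cv_pct:.1f}%")
```

Fixing the location at zero (`floc=0`) is what makes this the 2-parameter, rather than 3-parameter, Weibull fit.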

Identification of the associations between genes and quantitative traits using entropy-based kernel density estimation

  • Yee, Jaeyong;Park, Taesung;Park, Mira
    • Genomics & Informatics
    • /
    • v.20 no.2
    • /
    • pp.17.1-17.11
    • /
    • 2022
  • Genetic associations have been quantified using a number of statistical measures. Entropy-based mutual information may be one of the more direct ways of estimating the association, in the sense that it does not depend on the parametrization. For this purpose, both the entropy and conditional entropy of the phenotype distribution should be obtained. Quantitative traits, however, do not usually allow an exact evaluation of entropy. The estimation of entropy needs a probability density function, which can be approximated by kernel density estimation. We have investigated the proper sequence of procedures for combining the kernel density estimation and entropy estimation with a probability density function in order to calculate mutual information. Genotypes and their interactions were constructed to set the conditions for conditional entropy. Extensive simulation data created using three types of generating functions were analyzed using two different kernels as well as two types of multifactor dimensionality reduction and another probability density approximation method called m-spacing. The statistical power in terms of correct detection rates was compared. Using kernels was found to be most useful when the trait distributions were more complex than simple normal or gamma distributions. A full-scale genomic dataset was explored to identify associations using the 2-h oral glucose tolerance test results and γ-glutamyl transpeptidase levels as phenotypes. Clearly distinguishable single-nucleotide polymorphisms (SNPs) and interacting SNP pairs associated with these phenotypes were found and listed with empirical p-values.
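
The kernel-density-then-entropy sequence this abstract investigates can be sketched minimally as follows, assuming a single biallelic genotype and a synthetic normal trait; the paper's kernel choices, m-spacing alternative, and MDR comparisons are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def kde_entropy(x):
    """Plug-in entropy estimate: -E[log f_hat(X)] with a Gaussian kernel."""
    kde = gaussian_kde(x)
    return -np.mean(np.log(kde(x)))

# Hypothetical data: the genotype shifts the quantitative trait's mean.
g = rng.integers(0, 2, size=400)          # genotype (0/1)
y = rng.normal(loc=1.5 * g, scale=1.0)    # trait depends on genotype

h_y = kde_entropy(y)                      # marginal entropy H(Y)
h_y_given_g = sum(                        # conditional entropy H(Y|G)
    (g == v).mean() * kde_entropy(y[g == v]) for v in (0, 1)
)
mi = h_y - h_y_given_g                    # mutual information I(Y; G)
print(f"H(Y)={h_y:.3f}, H(Y|G)={h_y_given_g:.3f}, MI={mi:.3f}")
```

A genuinely associated genotype lowers the conditional entropy below the marginal entropy, giving positive mutual information; for an unassociated genotype the estimate hovers near zero.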

A study on the analysis of the failure probability based on the concept of loss probability (결손확률모델에 의한 파손확률 해석에 관한 연구)

  • 신효철
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.15 no.6
    • /
    • pp.2037-2047
    • /
    • 1991
  • Strength is not simply a single given value but rather a statistical one with a certain distribution function. This is because it is affected by many unknown factors such as size, shape, stress distribution, and combined stresses. In this study, a loss probability model is proposed in view of the fact that one of the fundamental configurations of nature is hexagonal, as in the shapes of lattice units, grains, and so on. The model uses the concept of the loss of certain elements in place of Jayatilaka-Trustrum's length and angle of cracks. Using this model, the loss probability due to each loss of certain elements is obtained. Then, the maximum principal stress is calculated by the finite element method at the centroid of the elements under tensile load for the 4,095 analysis models. Finally, the failure probability of brittle materials is obtained by multiplying the loss probability by the ratio of the maximum principal stress to the theoretical tensile strength. Comparison of the results of Jayatilaka-Trustrum's model and the proposed model shows that the failure probabilities obtained by the two methods are in good agreement. Further, it is shown that the parametric relationship of semi-crack lengths for various degrees of brittleness can be determined. Therefore, the analysis of the failure probability using the proposed model is promising as a new method for studying the failure probability of brittle materials.
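
The final step of the proposed model, multiplying the loss probability by the stress ratio, can be illustrated with placeholder numbers; the paper's FE-derived stresses and element-loss probabilities differ.

```python
# All values below are hypothetical, for illustrating the multiplication only.
loss_probability = 0.02        # P(loss) of a hexagonal element
max_principal_stress = 180.0   # MPa, FE result at the element centroid
theoretical_strength = 900.0   # MPa, theoretical tensile strength

# Failure probability per the proposed model:
failure_probability = loss_probability * (max_principal_stress / theoretical_strength)
print(failure_probability)  # 0.004
```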

Adjusted ROC and CAP Curves (조정된 ROC와 CAP 곡선)

  • Hong, Chong-Sun;Kim, Ji-Hun;Choi, Jin-Soo
    • The Korean Journal of Applied Statistics
    • /
    • v.22 no.1
    • /
    • pp.29-39
    • /
    • 2009
  • Among others, ROC and CAP curves are used to explore the discriminatory power between defaults and non-defaults, based on the distribution of the probability of default in credit rating work. ROC and CAP curves are plotted in terms of various ratios of the probability of default. Each point on the ROC and CAP curves is calculated according to the cutting points (scores) for classifying defaults and non-defaults. In this paper, adjusted ROC and CAP curves are proposed by using functions of the ratios of the probability of default. It is possible to recognize the score corresponding to a point on these adjusted curves, and we can identify the best score showing the optimal discriminatory power. Moreover, we discuss the relationships between the best score obtained from the adjusted ROC and CAP curves and the score corresponding to the Kolmogorov-Smirnov statistic for testing the homogeneity of the distribution functions of defaults and non-defaults.
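
The cutoff search behind these curves, choosing the score at which the Kolmogorov-Smirnov gap between the default and non-default distributions is largest, might be sketched as follows with synthetic scores (the paper's adjusted-curve functions are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical credit scores: defaults score lower than non-defaults on average.
defaults = rng.normal(40, 10, size=500)
non_defaults = rng.normal(60, 10, size=2000)

scores = np.sort(np.concatenate([defaults, non_defaults]))
# At each cutting point s, classify score < s as default.
tpr = np.array([(defaults < s).mean() for s in scores])      # hit rate
fpr = np.array([(non_defaults < s).mean() for s in scores])  # false-alarm rate

# Kolmogorov-Smirnov statistic = max vertical gap between the two empirical
# CDFs; the score attaining it has the optimal discriminatory power.
ks = (tpr - fpr).max()
best_score = scores[np.argmax(tpr - fpr)]
print(f"KS={ks:.3f}, best score={best_score:.1f}")
```

The `(tpr, fpr)` pairs are exactly the points of the ROC curve, so the KS statistic is the ROC curve's maximum vertical distance above the diagonal.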

Degradation reliability modeling of plain concrete for pavement under flexural fatigue loading

  • Jia, Yanshun;Liu, Guoqiang;Yang, Yunmeng;Gao, Ying;Yang, Tao;Tang, Fanlong
    • Advances in concrete construction
    • /
    • v.9 no.5
    • /
    • pp.469-478
    • /
    • 2020
  • This study aims to establish a new methodological framework for evaluating the evolution of the reliability of plain concrete for pavement versus the number of cycles under flexural fatigue loading. Within the framework, a new method for calculating the reliability was proposed through probability simulation in order to describe the random accumulation of fatigue damage; it combines reliability theory, the one-to-one probability density function transformation technique, cumulative fatigue damage theory, and Weibull distribution theory. The statistical analysis of the flexural fatigue performance of the tested cement concrete was then carried out utilizing the Weibull distribution. Ultimately, the reliability of the tested cement concrete was obtained by the proposed method. Results indicate that the stochastic evolution behavior of concrete materials under fatigue loading can be captured by the established framework. The flexural fatigue life data of concrete at different stress levels are well described by the two-parameter Weibull distribution. The evolution of reliability for the concrete materials tested in this study develops in three stages and may correspond to the development stages of cracking. The proposed method may also be applicable to the analysis of degradation behaviors under non-fatigue conditions.
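
The Weibull portion of such an analysis, fitting fatigue lives at one stress level and converting the fit into a reliability-versus-cycles curve R(n) = exp(-(n/η)^β), can be sketched as below with synthetic lives; the paper's full framework layers damage accumulation and probability simulation on top of this.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical fatigue lives (cycles) at one stress level; not the paper's data.
lives = stats.weibull_min.rvs(2.0, scale=2e5, size=60, random_state=rng)

# Two-parameter Weibull fit (location fixed at zero).
beta, _, eta = stats.weibull_min.fit(lives, floc=0)

# Reliability versus number of cycles: R(n) = exp(-(n/eta)**beta).
n = np.linspace(0, 4e5, 5)
reliability = np.exp(-(n / eta) ** beta)
print(f"beta={beta:.2f}, eta={eta:.0f}", reliability.round(3))
```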

Determination of horizontal two-phase flow patterns based on statistical analysis of instantaneous pressure drop at an orifice (오리피스 순간압력강하의 통계해석을 통한 수평 2상유동양식의 결정)

  • 이상천;이정표;김중엽
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.11 no.5
    • /
    • pp.810-818
    • /
    • 1987
  • A new method is proposed to identify two-phase flow regimes in horizontal gas-liquid flow, based upon a statistical analysis of instantaneous pressure drop curves at an orifice. The probability density functions of the curves show distinct patterns depending upon the two-phase flow regime. The transition region could also be identified by the shape of the probability density function. The statistical properties of the pressure drop are analyzed for various flow regimes and transitions. Finally, the flow patterns determined by the proposed method are compared with the flow pattern maps suggested by other investigators.
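
The idea that different regimes leave different signatures in the pressure-drop density can be illustrated with toy signals; the distributions below are stand-ins, not real orifice measurements, and the moment summaries are one simple way to quantify the density's shape.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical instantaneous pressure-drop signals for two regimes.
bubbly = rng.normal(5.0, 0.3, size=5000)              # narrow, unimodal density
slug = np.concatenate([rng.normal(3.0, 0.3, 2500),    # bimodal: liquid slugs
                       rng.normal(8.0, 0.5, 2500)])   # vs gas pockets

for name, signal in [("bubbly", bubbly), ("slug", slug)]:
    # Shape of the probability density, summarized by low-order moments.
    print(name, f"std={signal.std():.2f}",
          f"skew={stats.skew(signal):.2f}",
          f"kurtosis={stats.kurtosis(signal):.2f}")
```

A strongly negative excess kurtosis flags the bimodal (slug-like) density, while the unimodal signal stays near zero; a transition region would show intermediate values.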

Modeling of The Learning-Curve Effects on Count Responses (개수형 자료에 대한 학습곡선효과의 모형화)

  • Choi, Minji;Park, Man Sik
    • The Korean Journal of Applied Statistics
    • /
    • v.27 no.3
    • /
    • pp.445-459
    • /
    • 2014
  • As a certain job is repeatedly done by a worker, the outcome relative to the effort required to complete the job becomes more remarkable; the outcome may be, for example, the time required or the fraction defective. This phenomenon is referred to as a learning-curve effect. We focus on the parametric modeling of learning-curve effects on count data using a logistic cumulative distribution function and probability mass functions such as the Poisson and negative binomial. We conduct various simulation scenarios to clarify the characteristics of the proposed model. We also consider a real application to compare the two discrete-type distribution functions.
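
One way such a model can be put together, with the expected count decaying along a logistic cumulative distribution function and counts drawn from a Poisson mass function, is sketched below; the parameter names and values are assumptions for illustration, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(5)

def learning_curve_mean(t, lam_start=20.0, lam_end=4.0, loc=25.0, scale=6.0):
    """Expected count at repetition t: decays from lam_start to lam_end
    following a logistic cumulative distribution function."""
    F = 1.0 / (1.0 + np.exp(-(t - loc) / scale))  # logistic CDF
    return lam_start + (lam_end - lam_start) * F

t = np.arange(1, 51)                 # repetitions of the job
mu = learning_curve_mean(t)          # learning-curve mean
counts = rng.poisson(mu)             # e.g. defects per repetition
print(mu[[0, 24, 49]].round(2), counts[:5])
```

Swapping `rng.poisson(mu)` for a negative binomial draw with the same mean would give the overdispersed variant the abstract compares against.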

Nonstationary Frequency Analysis of Hydrologic Extreme Variables Considering of Seasonality and Trend (계절성과 경향성을 고려한 극치수문자료의 비정상성 빈도해석)

  • Lee, Jeong-Ju;Kwon, Hyun-Han;Moon, Young-Il
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2010.05a
    • /
    • pp.581-585
    • /
    • 2010
  • This study introduced a Bayesian frequency analysis in which statistical trend and seasonal analyses for hydrologic extreme series are incorporated. The proposed model employed the Gumbel and GEV extreme distributions to characterize extreme events, and a fully coupled Bayesian frequency model was finally utilized to estimate design rainfalls in Seoul. Posterior distributions of the model parameters in both the trend and seasonal analyses were updated through Markov chain Monte Carlo simulation, mainly utilizing a Gibbs sampler. This study proposed a way to use a nonstationary frequency model for dynamic risk analysis, and showed an increase of hydrologic risk with time-varying probability density functions. In addition, the full annual cycle of the design rainfall obtained through the seasonal model could be applied to annual controls such as dam operation, flood control, irrigation water management, and so on. The proposed study showed an advantage in assessing the statistical significance of parameters associated with trend analysis through statistical inference utilizing the derived posterior distributions.

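
The non-Bayesian core of such a frequency analysis, fitting a Gumbel distribution to annual maxima and reading off a design rainfall as a return-period quantile, can be sketched as below with synthetic data; the paper instead places posteriors on these parameters via Gibbs sampling and adds trend and seasonal terms, none of which is reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical annual-maximum daily rainfall (mm); not the Seoul record.
annual_max = stats.gumbel_r.rvs(loc=120, scale=35, size=40, random_state=rng)

# Point fit of the Gumbel distribution (a Bayesian version would sample
# the posterior of loc/scale with MCMC instead of maximizing likelihood).
loc, scale = stats.gumbel_r.fit(annual_max)

# Design rainfall for a T-year return period: the (1 - 1/T) quantile.
T = 100
design = stats.gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
print(f"loc={loc:.1f}, scale={scale:.1f}, {T}-yr design={design:.1f} mm")
```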

Statistical comparison of morphological dilation with its equivalent linear shift-invariant system: case of memoryless uniform sources (무기억 균일 신호원에 대한 수리 형태론적인 불림과 등가 시스템의 통계적 비교)

  • 김주명;최상신;최태영
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.34S no.2
    • /
    • pp.79-93
    • /
    • 1997
  • This paper presents a linear shift-invariant (LSI) system equivalent to morphological dilation for a memoryless uniform source in the sense of the power spectral density function, and compares it with dilation. This equivalent LSI system is found through spectral decomposition and, for dilation with window size L, it is shown to be a finite impulse response filter composed of L-1 delays, L multipliers, and three adders. The coefficients of the equivalent systems are tabulated. The comparisons of dilation and its equivalent LSI system show that the probability density functions of the output sequences of the two systems are quite different. In particular, the probability density function from dilation of an independent and identically distributed uniform source over the unit interval (0, 1) shows heavy probability mass around 1, while that from the equivalent LSI system shows probability concentrated around the mean value and symmetric about it. This difference is due to the fact that dilation is a non-linear process while the equivalent system is linear and shift-invariant. In cases where dilation is favored over LSI filters in subjective performance tests, one of the factors can be traced to this difference in the probability distribution.

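
The contrast in output densities described above is easy to reproduce: dilation of an i.i.d. U(0, 1) source is a sliding-window maximum, whose output density L·x^(L-1) piles up near 1, while any order-L linear filter concentrates symmetrically around its mean. The moving-average filter below is a stand-in for the paper's spectrally derived FIR coefficients, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
L = 4                                  # window size
x = rng.uniform(0.0, 1.0, size=20000)  # memoryless uniform source on (0, 1)

# Morphological dilation = sliding-window maximum (non-linear).
dilated = np.array([x[i:i + L].max() for i in range(len(x) - L + 1)])

# Stand-in LSI system: an order-L FIR filter (simple moving average here;
# the paper derives its specific coefficients via spectral decomposition).
fir = np.convolve(x, np.ones(L) / L, mode="valid")

# Max of L iid U(0,1) has density L*x**(L-1), so its mean is L/(L+1),
# well above the linear filter's symmetric concentration around 0.5.
print(f"dilation mean={dilated.mean():.3f} (theory {L / (L + 1):.3f}), "
      f"FIR mean={fir.mean():.3f}")
```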

Statistical Characteristics and Stochastic Modeling of Water Quality Data at the Influent of Daejeon Wastewater Treatment Plant (대전시 공공하수처리시설 유입수 수질자료의 통계적 특성 및 추계학적 모의)

  • Pak, Gijung;Jung, Minjae;Lee, Hansaem;Kim, Deokwoo;Yoon, Jaeyong;Paik, Kyungrock
    • Journal of Korean Society on Water Environment
    • /
    • v.28 no.1
    • /
    • pp.38-49
    • /
    • 2012
  • In this study, we analyze the statistical characteristics of influent water quality at the Daejeon wastewater treatment plant and apply a stochastic model for data generation. In the analysis, the influent water quality data from 2003 to 2008, except for 2006, are used. Among the water quality variables, we find strong correlations between BOD and T-N; T-N and T-P; BOD and T-P; $COD_{Mn}$ and T-P; and BOD and $COD_{Mn}$. We also find that different water quality variables follow different theoretical probability distribution functions, which also depends on whether the seasonal cycle is removed. Finally, we generate the influent water quality data using the multi-season lag-1 Markov model (Thomas-Fiering model). With model parameters calibrated for the period 2003~2005, the generated data for 2007~2008 compare well with the observed data, showing good agreement in general. BOD and T-N are underestimated by the stochastic model, mainly due to statistical differences in the observed data between the two periods 2003~2005 and 2007~2008. Therefore, we expect that the stochastic model can be applied with more confidence when the data follow a stationary pattern.
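
The Thomas-Fiering generation step can be sketched as below; the monthly means, standard deviations, and lag-1 correlations are assumed illustrative values, not the statistics calibrated from the 2003~2005 Daejeon record.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical monthly statistics of one influent water-quality variable
# (e.g. BOD, mg/L); the paper calibrates these from observed data.
mean = np.array([150, 148, 155, 160, 170, 180, 185, 178, 168, 160, 152, 149.0])
std = np.full(12, 12.0)
rho = np.full(12, 0.6)   # month-to-month lag-1 correlation

def thomas_fiering(n_years, x0=150.0):
    """Multi-season lag-1 Markov (Thomas-Fiering) generation."""
    x, out = x0, []
    for year in range(n_years):
        for m in range(12):
            prev = (m - 1) % 12
            x = (mean[m] + rho[m] * (std[m] / std[prev]) * (x - mean[prev])
                 + std[m] * np.sqrt(1 - rho[m] ** 2) * rng.normal())
            out.append(x)
    return np.array(out)

series = thomas_fiering(50)
print(f"mean={series.mean():.1f}, std={series.std():.1f}")
```

Each generated month regresses toward that month's mean while carrying over a correlated deviation from the previous month, which is how the model preserves both the seasonal cycle and the lag-1 dependence.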