• Title/Summary/Keyword: parametric distribution (모수적 분포)

Flood Frequency Analysis Considering Probability Distribution and Return Period under Non-stationary Condition (비정상성 확률분포 및 재현기간을 고려한 홍수빈도분석)

  • Lee, Sang-Ho; Kim, Sang Ug; Lee, Yeong Seob; Kim, Hyeong Bae
    • Proceedings of the Korea Water Resources Association Conference / 2015.05a / pp.610-610 / 2015
  • In the design of hydraulic structures, the probabilistic flood quantile for a specified return period, estimated through flood frequency analysis, is used as the design criterion. However, as extreme weather events have intensified due to recent climate change, conventional flood frequency analysis, which assumes stationarity of hydro-meteorological data, often fails to represent the changing hydrologic conditions adequately. In this study, a non-stationary frequency analysis technique in which the parameters of the probability distribution vary with time was applied, and the distribution parameters were estimated by maximum likelihood. Non-stationarity was also considered in quantile estimation, and the resulting return periods and risks were compared with those obtained under the stationarity assumption. Using the GEV distribution, four stationary and non-stationary models were constructed; the non-stationary models covered three cases: a linear trend in the location parameter only, a linear trend in the scale parameter only, and linear trends in both the location and scale parameters. To select the appropriate model among the four, the likelihood ratio test and the Akaike information criterion were used within a systematically constructed model selection procedure. The non-stationary flood frequency analysis developed in this study was applied to historical observed inflows from eight multipurpose dams in Korea (Chungju, Soyanggang, Andong, Imha, Hapcheon, Daecheong, Seomjingang, and Juam). The model selection results showed that the Hapcheon and Seomjingang dams fit the non-stationary GEV model, while the remaining multipurpose dams fit the stationary model. For the Hapcheon and Seomjingang dams, the return periods estimated under the non-stationarity assumption were much smaller than those under the stationarity assumption, while the flood quantiles and risks were larger. Among the six dams for which the stationary model was selected, the Soyanggang dam showed a relatively large, though not statistically significant, linear trend according to the Mann-Kendall non-parametric trend test. Although the non-stationary model was not selected for it, analyzing the return period, flood quantile, and risk for the Soyanggang dam under the non-stationary assumption produced results considerably different from those under the stationary assumption. These results suggest that flood frequency analysis that accounts for both stationarity and non-stationarity of hydrologic data will help determine reliable design flood quantiles for future hydraulic structures.

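To make the modeling idea above concrete, the following is a minimal sketch, not the authors' code, of fitting a stationary GEV and a non-stationary GEV whose location parameter drifts linearly in time by maximum likelihood, then comparing them with the likelihood ratio statistic and AIC; the inflow series and parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

# Hypothetical annual maximum inflow series with an upward trend (50 years)
rng = np.random.default_rng(0)
years = np.arange(50)
flows = genextreme.rvs(c=-0.1, loc=100 + 0.8 * years, scale=20, random_state=rng)

# Note: SciPy's shape parameter c corresponds to -xi in the usual GEV convention.
def nll_stationary(theta):
    mu, log_sigma, xi = theta
    return -np.sum(genextreme.logpdf(flows, c=-xi, loc=mu, scale=np.exp(log_sigma)))

def nll_trend_location(theta):
    mu0, mu1, log_sigma, xi = theta
    mu_t = mu0 + mu1 * years            # location parameter varies linearly with time
    return -np.sum(genextreme.logpdf(flows, c=-xi, loc=mu_t, scale=np.exp(log_sigma)))

fit0 = minimize(nll_stationary, x0=[flows.mean(), np.log(flows.std()), 0.1],
                method="Nelder-Mead")
fit1 = minimize(nll_trend_location, x0=[flows.mean(), 0.0, np.log(flows.std()), 0.1],
                method="Nelder-Mead")

aic0, aic1 = 2 * 3 + 2 * fit0.fun, 2 * 4 + 2 * fit1.fun
lr = 2 * (fit0.fun - fit1.fun)          # likelihood ratio statistic, approx. chi2(1) under H0
print(f"AIC stationary={aic0:.1f}, AIC non-stationary={aic1:.1f}, LR={lr:.2f}")
```

The same pattern extends to a trend in the scale parameter or in both parameters, giving the four candidate models described in the abstract.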

Reliability analysis methods to one-shot device (일회용품의 신뢰성분석 방안)

  • Baik, Jaiwook
    • Industry Promotion Research / v.7 no.4 / pp.1-8 / 2022
  • Many one-shot devices are used once and then discarded. Firecrackers and ammunition are typical examples: they are stored for a period after manufacture and then consumed when needed. Unlike continuously operating systems, however, the reliability of such one-shot devices has not been properly evaluated. This study first examines what the government does to secure reliability in the case of ammunition through the ammunition stockpile reliability program. Next, in terms of statistical analysis, we show which reliability analysis methods are applicable to one-shot devices such as ammunition. Specifically, we show that the level of reliability can be assessed if a sampling inspection plan such as KS Q 0001, an acceptance sampling plan by attributes, is used. Non-parametric and parametric methods are then introduced as ways to determine the storage reliability of ammunition: among non-parametric methods, the Kaplan-Meier method can be used since it also handles censored data, and among parametric methods, the Weibull distribution can be used.
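
As a rough illustration of the two analysis routes mentioned above, the sketch below computes a non-parametric Kaplan-Meier estimate by hand and a parametric Weibull fit; the storage times and censoring indicators are made up, and the Weibull fit here ignores censoring for brevity.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical storage times (years) at which lots were tested; event=1 means failure,
# event=0 means the lot was still functional (right-censored).
time = np.array([2.0, 3.5, 4.0, 5.0, 5.5, 6.0, 7.5, 8.0, 9.0, 10.0])
event = np.array([1,   0,   1,   1,   0,   1,   1,   0,   1,   1])

# Kaplan-Meier: S(t) = product over failure times <= t of (1 - d_i / n_i)
order = np.argsort(time)
t_sorted, e_sorted = time[order], event[order]
at_risk = len(time)
times, surv, s = [], [], 1.0
for t, e in zip(t_sorted, e_sorted):
    if e == 1:
        s *= 1.0 - 1.0 / at_risk
        times.append(t)
        surv.append(s)
    at_risk -= 1
print("Kaplan-Meier S(t):", dict(zip(times, np.round(surv, 3))))

# Crude Weibull fit on observed failures only (a full treatment would
# maximize the censored likelihood instead).
shape, loc, scale = weibull_min.fit(time[event == 1], floc=0)
print("Weibull reliability at 8 years:",
      1 - weibull_min.cdf(8.0, shape, loc=0, scale=scale))
```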

Power Comparison between Methods of Empirical Process and a Kernel Density Estimator for the Test of Distribution Change (분포변화 검정에서 경험확률과정과 커널밀도함수추정량의 검정력 비교)

  • Na, Seong-Ryong; Park, Hyeon-Ah
    • Communications for Statistical Applications and Methods / v.18 no.2 / pp.245-255 / 2011
  • There are two nonparametric methods, based on empirical distribution functions and on probability density estimators, for testing a change in the distribution of data. In this paper we examine the two methods in detail and summarize the results of previous research. We assume several probability models to carry out a simulation study of change point analysis and to examine the finite-sample behavior of the two methods. Empirical powers are compared to determine which method performs better under each model.
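
For readers who want to see the two competing statistics side by side, here is a rough sketch (my own illustration under assumed Gaussian data, not the paper's code): a KS-type statistic built from the two empirical CDFs around a candidate change point, and an integrated squared difference of two kernel density estimates.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(0.8, 1, 100)])  # change after n=100
k = 100                                   # candidate change point
x1, x2 = x[:k], x[k:]

# Empirical-process statistic: max difference of the two empirical CDFs on a grid
grid = np.sort(x)
F1 = np.searchsorted(np.sort(x1), grid, side="right") / len(x1)
F2 = np.searchsorted(np.sort(x2), grid, side="right") / len(x2)
ks_stat = np.max(np.abs(F1 - F2))

# Kernel-density statistic: integrated squared difference of the two KDEs
kde1, kde2 = gaussian_kde(x1), gaussian_kde(x2)
t = np.linspace(x.min() - 1, x.max() + 1, 512)
kde_stat = np.sum((kde1(t) - kde2(t)) ** 2) * (t[1] - t[0])

print(f"KS-type statistic: {ks_stat:.3f}, KDE L2 statistic: {kde_stat:.4f}")
# In a power study one would simulate many datasets under each model, calibrate
# critical values under H0 (no change), and compare empirical rejection rates.
```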

A Comparative Study of Parametric Methods for Significant Gene Set Identification Depending on Various Expression Metrics (유전자 발현 메트릭에 기반한 모수적 방식의 유의 유전자 집합 검출 비교 연구)

  • Kim, Jae-Young; Shin, Mi-Young
    • Journal of KIISE:Software and Applications / v.37 no.1 / pp.1-8 / 2010
  • Recently, much attention has been paid to gene set analysis for identifying differentially expressed gene sets between two sample groups. Unlike earlier approaches, gene set analysis enables us to find significant gene sets along with their functional characteristics. For this reason, various novel approaches to gene set analysis have been suggested. As one such approach, PAGE is a parametric method that employs the average difference (AD) as an expression metric to quantify expression differences between two sample groups and assumes that the distribution of gene scores is normal. This approach is preferred to non-parametric approaches because of its more effective performance. However, the AD metric reflects neither gene expression intensities nor their variances over samples when calculating gene scores. Thus, in this paper, we investigate the usefulness of several other expression metrics for parametric gene set analysis that consider the actual expression intensities of genes or their expression variances over samples. For this purpose, we examined three expression metrics, WAD (weighted average difference), FC (Fisher's criterion), and Abs_SNR (absolute value of the signal-to-noise ratio), for parametric gene set analysis and evaluated their experimental results.
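
The sketch below, built on simulated expression data rather than the paper's datasets, shows how a PAGE-style z-score can be computed under each of the metrics discussed above (AD, WAD, FC, Abs_SNR); the exact metric definitions here follow common usage and should be treated as assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_genes, n1, n2 = 1000, 10, 10
grp1 = rng.normal(5, 1, size=(n_genes, n1))
grp2 = rng.normal(5, 1, size=(n_genes, n2))
grp2[:50] += 1.0                                      # first 50 genes differentially expressed

m1, m2 = grp1.mean(axis=1), grp2.mean(axis=1)
s1, s2 = grp1.std(axis=1, ddof=1), grp2.std(axis=1, ddof=1)

ad = m2 - m1                                          # average difference (as in PAGE)
avg = (m1 + m2) / 2
w = (avg - avg.min()) / (avg.max() - avg.min())
wad = ad * w                                          # weighted by average expression intensity
fc = ad ** 2 / (s1 ** 2 + s2 ** 2)                    # Fisher's criterion
abs_snr = np.abs(ad) / (s1 + s2)                      # |signal-to-noise ratio|

def page_z(score, gene_set_idx):
    """PAGE-style z-score: mean score of a gene set vs. the distribution of all gene scores."""
    mu, sigma = score.mean(), score.std(ddof=1)
    m = len(gene_set_idx)
    return (score[gene_set_idx].mean() - mu) * np.sqrt(m) / sigma

gene_set = np.arange(50)                              # hypothetical gene set (the perturbed genes)
for name, score in [("AD", ad), ("WAD", wad), ("FC", fc), ("Abs_SNR", abs_snr)]:
    print(name, round(page_z(score, gene_set), 2))
```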

Selection of Probability Distribution of Pavement Life Based on Reliability Method (신뢰성 개념을 이용한 적정 포장 수명분포 선정)

  • Do, Myung-Sik; Kwon, Soo-Ahn
    • International Journal of Highway Engineering / v.12 no.1 / pp.61-69 / 2010
  • In this paper, we present a methodology for selecting an optimal probability distribution and estimating survival rates using the national highway database from 1999 to 2008. Probability paper methods are adopted to estimate the parameters of each hazard model, and goodness-of-fit tests such as the Anderson-Darling statistic were performed. As a result, we found that the lognormal distribution is appropriate for both newly constructed and overlaid sections. We also ascertained that the survival rates for pavement life obtained by the proposed method are similar to those of the observed data. Such a selection methodology and reliability-based measures can provide useful information for maintenance planning in pavement management systems as additional life data on pavement sections are accumulated.
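
As a small worked example of the workflow described above, the sketch below fits a lognormal distribution by the probability-paper (least-squares) method and checks the fit with the Anderson-Darling statistic; the pavement life data are synthetic, not the paper's highway database.

```python
import numpy as np
from scipy.stats import norm, anderson

rng = np.random.default_rng(3)
life = rng.lognormal(mean=2.3, sigma=0.4, size=60)       # hypothetical pavement lives (years)

# Probability paper: plot ln(life) against standard normal quantiles of the
# median-rank plotting positions; a straight-line fit gives the lognormal parameters.
x = np.sort(np.log(life))
n = len(x)
p = (np.arange(1, n + 1) - 0.3) / (n + 0.4)              # median-rank plotting positions
z = norm.ppf(p)
sigma_hat, mu_hat = np.polyfit(z, x, 1)                  # slope = sigma, intercept = mu

# Goodness of fit: Anderson-Darling test for normality of ln(life)
ad_result = anderson(np.log(life), dist="norm")
print(f"mu={mu_hat:.3f}, sigma={sigma_hat:.3f}, A2={ad_result.statistic:.3f}")

# Survival rate at t years under the fitted lognormal
t = 10.0
print("S(10) =", 1 - norm.cdf((np.log(t) - mu_hat) / sigma_hat))
```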

Better Nonparametric Bootstrap Confidence Intervals for Capability Index $C_{pk}$ (공정능력지수 $C_{pk}$에 대한 보다 나은 비모수적 붓스트랩 신뢰구간에 관한 연구)

  • 조중재; 김주성; 박병선
    • The Korean Journal of Applied Statistics / v.12 no.1 / pp.45-65 / 1999
  • The process capability index $C_{pk}$ is a widely used measure for evaluating whether a manufacturing process is producing products properly. Estimation problems for $C_{pk}$ have been studied extensively, and most of these studies assumed a normal process distribution. In practice, however, the characteristics obtained from processes on the quality control floor often do not follow a normal distribution, and this can be difficult to detect. Therefore, to propose a desirable interval estimation method for $C_{pk}$, this paper constructs six types of nonparametric bootstrap confidence intervals and compares their efficiency through extensive and comprehensive simulation studies under three process distributions.

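A minimal sketch of the basic idea, assuming made-up specification limits and process data: a nonparametric percentile bootstrap interval for $C_{pk}$ (the paper itself compares six interval types, which are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(10.0, 0.5, size=50)         # observed process data (assumed)
LSL, USL = 8.0, 12.0                       # hypothetical specification limits

def cpk(sample):
    mu, sigma = sample.mean(), sample.std(ddof=1)
    return min(USL - mu, mu - LSL) / (3 * sigma)

B = 2000
boot = np.array([cpk(rng.choice(x, size=len(x), replace=True)) for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile bootstrap interval
print(f"Cpk = {cpk(x):.3f}, 95% percentile CI = ({lo:.3f}, {hi:.3f})")
```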

The Study of Infinite NHPP Software Reliability Model from the Intercept Parameter using Linear Hazard Rate Distribution (선형위험률분포의 절편모수에 근거한 무한고장 NHPP 소프트웨어 신뢰모형에 관한 연구)

  • Kim, Hee-Cheul; Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.9 no.3 / pp.278-284 / 2016
  • Software reliability is an important issue in the software development process. In infinite-failure NHPP software reliability models, the fault occurrence rate may follow a constant, monotonically increasing, or monotonically decreasing pattern. In this paper, infinite-failure NHPP models reflecting the situation in which faults occur during repair time were presented and their properties compared. For this comparison, an infinite-failure software reliability model based on the intercept parameter of the linear hazard rate distribution, a distribution commonly used in business economics and actuarial modeling, was presented. The results show that a relatively large intercept parameter was effective. Parameter estimation was carried out by maximum likelihood, and model selection was performed using the mean square error and the coefficient of determination. The linear hazard rate distribution model was also confirmed to be efficient in terms of reliability (coefficient of determination of 90% or more), so it can be used as an alternative to conventional models in this field. The findings suggest that software developers should consider the intercept parameter of the life distribution, informed by prior knowledge of the software, to help identify failure modes.
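
The following is a hedged sketch of my reading of this model class, not the authors' code: an infinite-failure NHPP whose intensity is the linear hazard rate lambda(t) = a + b*t, with a the intercept parameter, fitted by maximum likelihood to hypothetical failure times and summarized with MSE and the coefficient of determination.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical cumulative failure times observed up to time T
t = np.array([3.0, 7.5, 12.0, 18.0, 21.5, 27.0, 33.0, 40.0, 44.5, 50.0])
T = 55.0

def neg_loglik(theta):
    a, b = np.exp(theta)                       # keep both parameters positive
    m_T = a * T + 0.5 * b * T ** 2             # mean value function m(T) for lambda(t) = a + b*t
    return -(np.sum(np.log(a + b * t)) - m_T)  # NHPP log-likelihood

fit = minimize(neg_loglik, x0=np.log([0.1, 0.01]), method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
m_hat = a_hat * t + 0.5 * b_hat * t ** 2       # fitted cumulative number of failures

# Crude fit measures in the spirit of the paper: MSE and R^2 against observed counts
observed = np.arange(1, len(t) + 1)
mse = np.mean((observed - m_hat) ** 2)
r2 = 1 - np.sum((observed - m_hat) ** 2) / np.sum((observed - observed.mean()) ** 2)
print(f"a={a_hat:.4f}, b={b_hat:.4f}, MSE={mse:.3f}, R^2={r2:.3f}")
```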

A parametric bootstrap test for comparing differentially private histograms (모수적 부트스트랩을 이용한 차등정보보호 히스토그램의 동질성 검정)

  • Son, Juhee; Park, Min-Jeong; Jung, Sungkyu
    • The Korean Journal of Applied Statistics / v.35 no.1 / pp.1-17 / 2022
  • We propose a test of homogeneity for two differentially private histograms using a parametric bootstrap. The test can be applied when the original raw histograms are not available but only the differentially private histograms and the privacy level α are given. We also extend the test to the case where the privacy levels differ between histograms. The resident population data of Korea and the U.S. in 2020 are used to demonstrate the efficacy of the proposed test procedure. The proposed test controls the type I error rate at the nominal level and has high power, while a conventional test procedure fails. While the differential privacy framework formally controls the risk of privacy leakage, the utility of such a framework is questionable; this work also suggests that the power of a carefully designed test may be a viable measure of utility.
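
Here is a rough sketch of the general recipe, constructed from the abstract rather than the authors' implementation: two histograms are released with Laplace noise at privacy level ε (the abstract's α), and a homogeneity statistic is calibrated by a parametric bootstrap that re-simulates multinomial counts and fresh noise under the pooled estimate.

```python
import numpy as np

rng = np.random.default_rng(5)
eps = 1.0
true_p = np.array([0.2, 0.3, 0.3, 0.2])
n1, n2 = 500, 600
h1 = rng.multinomial(n1, true_p) + rng.laplace(0, 1 / eps, size=4)   # DP histogram 1
h2 = rng.multinomial(n2, true_p) + rng.laplace(0, 1 / eps, size=4)   # DP histogram 2

def homogeneity_stat(a, b):
    pa, pb = a / a.sum(), b / b.sum()
    return np.sum((pa - pb) ** 2)

stat_obs = homogeneity_stat(h1, h2)

# Parametric bootstrap under H0: common cell probabilities estimated from the
# combined (clipped) noisy counts; fresh multinomial counts plus fresh Laplace noise.
pooled = np.clip(h1 + h2, 0, None)
p0 = pooled / pooled.sum()
B = 2000
stats = np.empty(B)
for b in range(B):
    s1 = rng.multinomial(n1, p0) + rng.laplace(0, 1 / eps, size=4)
    s2 = rng.multinomial(n2, p0) + rng.laplace(0, 1 / eps, size=4)
    stats[b] = homogeneity_stat(s1, s2)
p_value = np.mean(stats >= stat_obs)
print(f"observed statistic={stat_obs:.5f}, bootstrap p-value={p_value:.3f}")
```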

Sample size determination based on placements for non-inferiority trials (비열등성 시험에서 위치 방법에 기초한 표본 수 결정)

  • Kim, Jiyeon; Kim, Dongjae
    • Journal of the Korean Data and Information Science Society / v.24 no.6 / pp.1349-1357 / 2013
  • In clinical research, sample size determination is one of the most important issues. For determining sample size in non-inferiority trials, there is a parametric method using the t-test and a non-parametric method suggested by Kim and Kim (2007) based on Wilcoxon's rank sum test. In this paper, we propose a sample size calculation method for non-inferiority trials based on the placements method suggested by Orban and Wolfe (1982), using the power calculated by Kim (1994). We also compare the proposed sample size with that obtained from Kim and Kim (2007)'s formula and with that of the parametric t-test. As a result, the sample size calculated by the proposed placement-based method is the smallest. Therefore, the proposed placement-based method is preferable to parametric methods when it is hard to assume a specific distribution for the population, and it is also more efficient in terms of time and cost than the method based on Wilcoxon's rank sum test.
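
For context only, the sketch below implements the standard parametric (normal-approximation, t-test-style) sample-size formula for non-inferiority of two means, the baseline the placement-based proposal is compared against; it is not the Orban-Wolfe placement formula, and all inputs are hypothetical.

```python
import math
from scipy.stats import norm

def n_per_group(sigma, margin, true_diff=0.0, alpha=0.025, power=0.8):
    """Per-group sample size for non-inferiority of two means (normal approximation)."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    return math.ceil(2 * sigma ** 2 * (z_a + z_b) ** 2 / (margin - true_diff) ** 2)

print(n_per_group(sigma=1.0, margin=0.5))   # about 63 per group with these inputs
```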

Development of MKDE-ebd for Estimation of Multivariate Probabilistic Distribution Functions (다변량 확률분포함수의 추정을 위한 MKDE-ebd 개발)

  • Kang, Young-Jin; Noh, Yoojeong; Lim, O-Kaung
    • Journal of the Computational Structural Engineering Institute of Korea / v.32 no.1 / pp.55-63 / 2019
  • In engineering problems, many random variables are correlated, and the correlation among input random variables has a great influence on the reliability analysis results of mechanical systems. However, correlated variables are often treated as independent or modeled by specific parametric joint distributions because modeling joint distributions is difficult. In particular, when the correlated data are insufficient, it becomes even more difficult to model the joint distribution correctly. In this study, multivariate kernel density estimation with bounded data is proposed to estimate various types of joint distributions with high nonlinearity. Since it combines the given data with bounded data generated from confidence intervals of uniform distribution parameters for the given data, it is less sensitive to data quality and sample size. Thus, it yields conservative statistical modeling and reliability analysis results, and its performance is verified through statistical simulations and engineering examples.
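
As a point of reference, the sketch below estimates a bivariate joint density with a plain multivariate Gaussian KDE in SciPy; MKDE-ebd itself additionally augments the sample with bounded data drawn from confidence intervals of uniform distribution parameters, which is not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
n = 30                                            # deliberately small sample
cov = np.array([[1.0, 0.7], [0.7, 1.0]])          # correlated input variables (assumed)
data = rng.multivariate_normal([0, 0], cov, size=n)

kde = gaussian_kde(data.T)                        # rows = dimensions, columns = samples
grid = np.array([[0.0, 0.5, 1.0],                 # evaluation points (x1 coordinates)
                 [0.0, 0.5, 1.0]])                # evaluation points (x2 coordinates)
print("estimated joint density at grid points:", kde(grid))
```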