• Title/Summary/Keyword: 이항 모수 (binomial parameter)

Evaluation for usefulness of Chukwookee Data in Rainfall Frequency Analysis (강우빈도해석에서의 측우기자료의 유용성 평가)

  • Kim, Kee-Wook; Yoo, Chul-Sang; Park, Min-Kyu; Kim, Hyeon-Jun
    • Journal of Korea Water Resources Association / v.40 no.11 / pp.851-859 / 2007
  • In this study, the chukwookee data were evaluated by applying them to historical rainfall frequency analysis. To fit a two-parameter log-normal distribution using both historical and modern data, the censored-data MLE and the binomial censored-data MLE were applied. We found that both the mean and the standard deviation were estimated to be smaller with the chukwookee data than with the modern data alone, indicating that large events occurred more rarely during the chukwookee period than during the modern period. The frequency analysis results based on the estimated parameters were likewise consistent with expectations. Notably, the rainfall quantiles estimated by the two methods were similar. This indicates that historical documentary records such as the annals of the Chosun dynasty can be valuable and effective for frequency analysis, effectively extending the data available for it.
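
The idea of combining fully observed modern data with historical records that only report threshold exceedances can be sketched as a single likelihood: a lognormal density term for the modern series plus a binomial term for the historical exceedance count. The sketch below is an illustration only (the threshold, counts, and scipy-based optimization are assumptions, not the paper's code):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
# modern period: fully observed annual maxima (two-parameter lognormal)
modern = rng.lognormal(mean=4.0, sigma=0.5, size=100)
# historical period: records only say whether a year exceeded a threshold
threshold = np.exp(4.5)
n_hist, k_exceed = 200, 40          # historical years and exceedance count

def neg_loglik(theta):
    """Censored-data likelihood: exact lognormal density for modern data
    plus a binomial term for the historical exceedance count."""
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    ll = stats.lognorm.logpdf(modern, s=sigma, scale=np.exp(mu)).sum()
    p_exceed = stats.lognorm.sf(threshold, s=sigma, scale=np.exp(mu))
    ll += stats.binom.logpmf(k_exceed, n_hist, p_exceed)
    return -ll

res = optimize.minimize(neg_loglik, x0=[4.0, 0.5], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
```

Because the binomial term uses only the exceedance probability, the historical years enter the fit without requiring their exact rainfall values.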

Determination of Sample Sizes of Bivariate Efficacy and Safety Outcomes (이변량 효능과 안전성 이항변수의 표본수 결정방법)

  • Lee, Hyun-Hak; Song, Hae-Hiang
    • The Korean Journal of Applied Statistics / v.22 no.2 / pp.341-353 / 2009
  • We consider a sample-size determination problem motivated by comparative clinical trials in which patient outcomes are characterized by a bivariate outcome of efficacy and safety. Thall and Cheng (1999) presented a sample-size methodology for the case of bivariate binary outcomes. We propose a bivariate Wilcoxon-Mann-Whitney (WMW) statistic for sample-size determination with binary outcomes; this nonparametric method can equally be used to determine sample sizes for ordinal outcomes. The two methods rely on the same testing strategy for the target parameters but differ in the test statistic: an asymptotic bivariate normal statistic of transformed proportions in Thall and Cheng (1999), versus the nonparametric bivariate WMW statistic in the proposed method. Sample sizes are calculated for the two experimental oncology trials described in Thall and Cheng (1999); for the first trial the sample sizes under the bivariate WMW statistic are smaller than those of Thall and Cheng (1999), while for the second trial the reverse is true.
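
The bivariate WMW calculation itself is involved, but the flavor of WMW-based sample-size determination can be shown with Noether's classical approximation for a single outcome (the effect size, error rates, and one-sided test below are illustrative assumptions, not values from the paper):

```python
import numpy as np
from scipy import stats

def wmw_sample_size(p, alpha=0.05, beta=0.20, c=0.5):
    """Noether's approximate total sample size for a one-sided
    Wilcoxon-Mann-Whitney test.
    p : P(X < Y), the WMW effect size (0.5 under the null)
    c : fraction of subjects allocated to the first group
    """
    z_a = stats.norm.ppf(1 - alpha)
    z_b = stats.norm.ppf(1 - beta)
    return int(np.ceil((z_a + z_b) ** 2
                       / (12 * c * (1 - c) * (p - 0.5) ** 2)))

n_total = wmw_sample_size(p=0.6)
```

Because the formula depends on the data only through P(X < Y), the same calculation applies unchanged to ordinal outcomes, which is the property the abstract highlights for the WMW approach.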

A simulation study for the approximate confidence intervals of hypergeometric parameter by using actual coverage probability (실제포함확률을 이용한 초기하분포 모수의 근사신뢰구간 추정에 관한 모의실험 연구)

  • Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society / v.22 no.6 / pp.1175-1182 / 2011
  • In this paper, the properties of the exact confidence interval and of several approximate confidence intervals for the hypergeometric parameter, that is, the probability of success p in the population, are discussed. The binomial distribution is a well-known discrete distribution with abundant applications, and the hypergeometric distribution frequently replaces it when it is desirable to allow for the finiteness of the population size. For example, the hypergeometric distribution arises as a probability model for the number of children attacked by an infectious disease when a fixed number of them are exposed to it. Exact confidence interval estimation for the hypergeometric parameter is reviewed, and we consider approximating the hypergeometric distribution by the binomial and normal distributions, respectively. Approximate confidence intervals based on these approximations are also discussed. The performance of the exact and approximate confidence intervals is compared in terms of actual coverage probability by a small-sample Monte Carlo simulation.
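
For a finite population the actual coverage probability of an approximate interval can even be computed exactly, by summing hypergeometric probabilities over all outcomes for which the interval covers the true p. A minimal sketch (the Wald-type interval and the population sizes are illustrative choices, not the paper's):

```python
import numpy as np
from scipy import stats

def actual_coverage(N, n, M, alpha=0.05):
    """Actual coverage probability of the binomial-approximation Wald
    interval for the hypergeometric parameter p = M/N, computed exactly
    by summing hypergeometric probabilities over all possible outcomes."""
    p_true = M / N
    z = stats.norm.ppf(1 - alpha / 2)
    cover = 0.0
    for x in range(max(0, n - (N - M)), min(n, M) + 1):
        phat = x / n
        half = z * np.sqrt(phat * (1 - phat) / n)
        if phat - half <= p_true <= phat + half:
            # scipy's hypergeom is parametrized as (M=pop size, n=successes, N=draws)
            cover += stats.hypergeom.pmf(x, N, M, n)
    return cover

cov = actual_coverage(N=100, n=20, M=30)
```

Running this over a grid of (N, n, M) values reproduces the kind of coverage comparison the abstract describes, without any Monte Carlo noise for small populations.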

Comparing the efficiency of dispersion parameter estimators in gamma generalized linear models (감마 일반화 선형 모형에서의 산포 모수 추정량에 대한 효율성 연구)

  • Jo, Seongil; Lee, Woojoo
    • The Korean Journal of Applied Statistics / v.30 no.1 / pp.95-102 / 2017
  • Gamma generalized linear models have received less attention than Poisson and binomial generalized linear models. Therefore, many old-established statistical techniques are still used in gamma generalized linear models. In particular, existing literature and textbooks still use approximate estimates for the dispersion parameter. In this paper we study the efficiency of various dispersion parameter estimators in gamma generalized linear models and perform numerical simulations. Numerical studies show that the maximum likelihood estimator and Cox-Reid adjusted maximum likelihood estimator are recommended and that approximate estimates should be avoided in practice.
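
The contrast between an approximate (moment/Pearson-type) dispersion estimate and the maximum likelihood estimate can be illustrated on i.i.d. gamma data, a simplified stand-in for the GLM setting (sample size and parameter values below are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
shape_true = 5.0                      # dispersion phi = 1/shape = 0.2
y = rng.gamma(shape_true, scale=2.0, size=5000)

# Pearson/moment-type approximate dispersion estimate: Var(y) / mean(y)^2
phi_pearson = y.var(ddof=1) / y.mean() ** 2

# maximum likelihood estimate via scipy's gamma fit (location fixed at 0)
shape_ml, _, _ = stats.gamma.fit(y, floc=0)
phi_ml = 1.0 / shape_ml
```

With covariates, the same comparison is made on GLM residuals; the paper's point is that the ML (or Cox-Reid adjusted ML) estimate should be preferred over such moment-type approximations.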

Adjustments of dispersion statistics in extended quasi-likelihood models (준우도 함수의 분산치 교정)

  • 김충락; 서한손
    • The Korean Journal of Applied Statistics / v.6 no.1 / pp.41-52 / 1993
  • In this paper we study the numerical behavior of the adjustments for the variances of the Pearson and deviance type dispersion statistics in two overdispersed mixture models: the negative binomial and beta-binomial distributions. These are important families within the extended quasi-likelihood model, which is very useful for the joint modelling of mean and dispersion. The two types of dispersion statistics are compared for various mean and dispersion parameters through simulation studies.
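
As a small illustration of a Pearson-type dispersion statistic in an overdispersed model, the sketch below checks that the statistic is close to its degrees of freedom when the negative binomial variance function is correct (parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
mu, k = 5.0, 2.0                      # NB mean and dispersion (size) parameter
# numpy parametrization: negative_binomial(n, p) with p = k / (k + mu)
y = rng.negative_binomial(k, k / (k + mu), size=2000)

# Pearson-type dispersion statistic under the NB variance V(mu) = mu + mu^2/k
X2 = np.sum((y - mu) ** 2 / (mu + mu ** 2 / k))
ratio = X2 / len(y)                   # near 1 when the variance function is correct
```

The adjustments studied in the paper concern the variance of statistics like X2 itself; simulating `ratio` across many replications is how such behavior is examined numerically.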

Posterior density estimation of Kappa via Gibbs sampler in the beta-binomial model (베타-이항 분포에서 Gibbs sampler를 이용한 평가 일치도의 사후 분포 추정)

  • 엄종석; 최일수; 안윤기
    • The Korean Journal of Applied Statistics / v.7 no.2 / pp.9-19 / 1994
  • The beta-binomial model, reparametrized in terms of the mean probability $\mu$ of a positive diagnosis and the agreement parameter $\kappa$, is widely used in psychology. When $\mu$ is close to 0, inference about $\kappa$ becomes difficult because the likelihood function becomes nearly constant. We consider a Bayesian approach for this case. To carry out the Bayesian analysis, the Gibbs sampler is used to overcome the difficulties of integration. Marginal posterior density functions are estimated and Bayesian estimates are derived via the Gibbs sampler, and the results are compared with those obtained by numerical integration.
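
A minimal data-augmentation sketch of this idea: latent success probabilities are drawn in a conjugate Gibbs step, and (μ, κ) is updated with a Metropolis step under a flat prior. The toy data, proposal scale, and chain lengths below are assumptions for illustration, not the paper's setup:

```python
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(2)
x = np.array([8, 9, 7, 10, 8, 9])     # toy agreement counts
n = np.full_like(x, 10)               # trials per unit

def ab(mu, kappa):
    # beta-binomial reparametrization: a = mu(1-k)/k, b = (1-mu)(1-k)/k
    t = (1.0 - kappa) / kappa
    return mu * t, (1.0 - mu) * t

def log_beta_lik(mu, kappa, p):
    if not (0.0 < mu < 1.0 and 0.0 < kappa < 1.0):
        return -np.inf
    a, b = ab(mu, kappa)
    return np.sum((a - 1) * np.log(p) + (b - 1) * np.log1p(-p) - betaln(a, b))

mu, kappa, keep = 0.5, 0.5, []
for it in range(4000):
    a, b = ab(mu, kappa)
    # Gibbs step: latent p_i | rest ~ Beta(a + x_i, b + n_i - x_i)
    p = rng.beta(a + x, b + n - x)
    # Metropolis step for (mu, kappa) given the latent p (flat prior)
    mu_p = mu + 0.05 * rng.standard_normal()
    ka_p = kappa + 0.05 * rng.standard_normal()
    if np.log(rng.uniform()) < log_beta_lik(mu_p, ka_p, p) - log_beta_lik(mu, kappa, p):
        mu, kappa = mu_p, ka_p
    if it >= 2000:                     # discard burn-in
        keep.append((mu, kappa))

mu_post, kappa_post = np.mean(keep, axis=0)
```

The kept draws approximate the joint posterior, so marginal posterior densities of κ can be estimated directly from them even when the likelihood in κ alone is nearly flat.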

Statistical Modeling of Learning Curves with Binary Response Data (이항 반응 자료에 대한 학습곡선의 모형화)

  • Lee, Seul-Ji; Park, Man-Sik
    • Communications for Statistical Applications and Methods / v.19 no.3 / pp.433-450 / 2012
  • As a worker performs a certain operation repeatedly, he becomes familiar with the job and completes it in a shorter time; efficiency improves through the knowledge, experience, and skill accumulated for the operation, so the time invested per unit of output decreases with repetition. This phenomenon is referred to as the learning curve effect, and a learning curve is a graphical representation of the changing rate of learning. In previous literature, learning curve effects are determined by subjectively pre-assigned factors. In this study, we propose a new statistical model that captures the learning curve effect by means of a basic cumulative distribution function, focusing mainly on the statistical modeling of binary data. We employ the Newton-Raphson method for estimation and the delta method for the construction of confidence intervals, and we also perform a real data analysis.
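
Newton-Raphson estimation for a binary-response learning curve can be sketched with a logistic CDF linking the success probability to the trial index (the simulated data and the specific logistic form are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)
trials = np.arange(1, 201)
# success probability rises with practice: logistic CDF in the trial index
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 0.05 * trials)))
y = rng.binomial(1, p_true)

# Newton-Raphson for the logistic model eta = b0 + b1 * trial
X = np.column_stack([np.ones_like(trials, dtype=float), trials.astype(float)])
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    mu = 1.0 / (1.0 + np.exp(-eta))
    W = mu * (1.0 - mu)                     # logistic variance weights
    grad = X.T @ (y - mu)                   # score vector
    hess = X.T @ (X * W[:, None])           # observed information
    beta = beta + np.linalg.solve(hess, grad)

p_last = 1.0 / (1.0 + np.exp(-(X @ beta)))[-1]
```

The inverse of `hess` at convergence also gives the covariance matrix needed for the delta-method confidence intervals mentioned in the abstract.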

A Development of Traffic Accident Model by Random Parameter : Focus on Capital Area and Busan 4-legs Signalized Intersections (확률모수를 이용한 교통사고예측모형 개발 -수도권 및 부산광역시 4지 교차로를 대상으로-)

  • Lee, Geun-Hee; Rho, Jeong-Hyun
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.14 no.6 / pp.91-99 / 2015
  • This study builds a traffic accident prediction model that considers road geometry, traffic, and environmental characteristics, and identifies their relationship with accidents at four-leg signalized intersections in the Seoul metropolitan area and Busan. The RPNB (Random Parameter Negative Binomial) model shows improvement over the fixed NB (Negative Binomial) model, and out of 53 variables, 10 (main road number of lanes, main road vehicle traffic volume (left), minor road vehicle traffic volume (right), main road drive restriction, minor road sight distance, minor road median strip, minor road speed limit, minor road speed restriction) were found to significantly affect traffic accident occurrence at four-leg signalized intersections. Among these significant variables, two (minor road sight distance, minor road speed restriction) were found to be random parameters.

CUSUM charts for monitoring type I right-censored lognormal lifetime data (제1형 우측중도절단된 로그정규 수명 자료를 모니터링하는 누적합 관리도)

  • Choi, Minjae; Lee, Jaeheon
    • The Korean Journal of Applied Statistics / v.34 no.5 / pp.735-744 / 2021
  • Maintaining the lifetime of a product is one of the objectives of quality control. In real processes, most samples contain censored data because, in many situations, we cannot measure the lifetime of every item due to time or cost constraints. In this paper, we propose two cumulative sum (CUSUM) control charting procedures to monitor the mean of type I right-censored lognormal lifetime data: one based on the likelihood ratio and the other on the binomial distribution. Through simulations, we evaluate the performance of the two proposed procedures by comparing their average run lengths (ARL). The likelihood ratio CUSUM chart performs better overall, particularly when the censoring rate is low and the shape parameter is small. Conversely, the binomial CUSUM chart performs better when the censoring rate is high, the shape parameter is large, and the change in the mean is small.
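
The binomial CUSUM idea (monitor the count of censored items per sample with log-likelihood-ratio weights) can be sketched as follows; the parameter values, shift size, and control limit are illustrative assumptions, not the paper's design:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
mu0, sigma, n = 2.0, 0.5, 20          # in-control log-mean, shape, sample size
tc = np.exp(2.3)                      # type I censoring time
shift = 0.5                           # downward shift in the log-mean to detect

# P(item is censored), i.e., survives past tc, in and out of control
p0 = stats.norm.sf((np.log(tc) - mu0) / sigma)
p1 = stats.norm.sf((np.log(tc) - (mu0 - shift)) / sigma)

# log-likelihood-ratio weights for the binomial CUSUM on the censored count
w1, w0 = np.log(p1 / p0), np.log((1 - p1) / (1 - p0))

def run_length(mu, h=3.0, maxlen=100000):
    """Periods until the CUSUM on the censored count exceeds the limit h."""
    c = 0.0
    for t in range(1, maxlen + 1):
        x = (rng.lognormal(mu, sigma, n) > tc).sum()   # censored items this period
        c = max(0.0, c + x * w1 + (n - x) * w0)
        if c > h:
            return t
    return maxlen

arl_in = np.mean([run_length(mu0) for _ in range(30)])
arl_out = np.mean([run_length(mu0 - shift) for _ in range(30)])
```

Comparing `arl_in` (should be long) with `arl_out` (should be short) over many replications is the ARL comparison the abstract describes.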

Parameter estimation for the imbalanced credit scoring data using AUC maximization (AUC 최적화를 이용한 낮은 부도율 자료의 모수추정)

  • Hong, C.S.; Won, C.H.
    • The Korean Journal of Applied Statistics / v.29 no.2 / pp.309-319 / 2016
  • For binary classification models, we consider a risk score that is a function of linear scores and estimate the coefficients of the linear scores. There are two estimation methods: obtaining MLEs from logistic models, and estimating the coefficients by maximizing the AUC. The AUC-based estimates are better than the MLEs in general situations where the logistic assumptions do not hold. This paper considers imbalanced data, which contain far fewer observations in the default class than in the non-default class, as in credit assessment models, and the AUC approach is therefore applied to such data. Various logit link functions are used to generate the imbalanced data. We find that the coefficients obtained by the AUC approach are equivalent to, or better than, those from logistic models for such low-default-probability imbalanced data.
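
Estimating linear-score coefficients by AUC maximization can be sketched by optimizing a smoothed version of the empirical AUC, since the empirical AUC itself is a step function of the coefficients. The toy imbalanced data, the sigmoid smoothing, and the optimizer choice below are assumptions for illustration:

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(5)
# imbalanced toy data: few defaults (class 1), many non-defaults (class 0)
n1, n0 = 30, 970
x1 = rng.normal(1.0, 1.0, (n1, 2))    # defaults
x0 = rng.normal(0.0, 1.0, (n0, 2))    # non-defaults

def neg_smooth_auc(beta, tau=0.1):
    """Smoothed empirical AUC: sigmoid of all pairwise score differences."""
    d = (x1 @ beta)[:, None] - (x0 @ beta)[None, :]
    return -np.mean(1.0 / (1.0 + np.exp(-d / tau)))

res = optimize.minimize(neg_smooth_auc, x0=np.array([0.5, 0.5]),
                        method="Nelder-Mead")
beta_hat = res.x

def empirical_auc(beta):
    s1 = (x1 @ beta)[:, None]
    s0 = (x0 @ beta)[None, :]
    return (s1 > s0).mean() + 0.5 * (s1 == s0).mean()

auc = empirical_auc(beta_hat)
```

Because the AUC depends only on the ranking of scores, the estimated direction of `beta_hat` is what matters; its overall scale is not identified, which is one practical difference from logistic MLEs.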