• Title/Summary/Keyword: Poisson's hypothesis

11 search results

On the Extension of Test Statistics for Detecting Negative Binomial Departures from the Poisson Assumption (포아송으로부터 부의 이항분포로의 이탈에 대한 검정통계량의 확장)

  • 이선호
    • Journal of the Korean Statistical Society
    • /
    • v.22 no.2
    • /
    • pp.171-190
    • /
    • 1993
  • Several statistics for detecting departures from the Poisson distribution toward the negative binomial distribution have been proposed, depending on the form of the data. It is known, however, that the variance-mean structure, and hence the locally optimal test statistic, changes according to how the negative binomial alternative is parameterized. This paper extends the alternative hypothesis to general Poisson mixture distributions and introduces a new statistic L that can test a general variance-mean structure. We also show that the L statistic generalizes several existing statistics for detecting departures from the Poisson toward the negative binomial. Comparisons with the existing statistics via asymptotic relative efficiency and simulation demonstrate that the L statistic performs well regardless of the variance-mean structure.

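
The departure tested above hinges on how the negative binomial variance relates to the mean under different parameterizations. As a minimal sketch (the function name and the NB1/NB2 labels are illustrative conventions, not notation from the paper), the two common variance-mean structures can be written as:

```python
def nb_variance(mean, c, kind="NB2"):
    """Variance of a negative binomial count with dispersion c >= 0.

    NB1: Var = mean * (1 + c)       (variance proportional to the mean)
    NB2: Var = mean * (1 + c*mean)  (variance quadratic in the mean)
    Both reduce to the Poisson variance (Var = mean) when c == 0,
    which is why a locally optimal test depends on the parameterization.
    """
    if kind == "NB1":
        return mean * (1.0 + c)
    if kind == "NB2":
        return mean * (1.0 + c * mean)
    raise ValueError("kind must be 'NB1' or 'NB2'")

# At c = 0 both parameterizations collapse to the Poisson null.
for kind in ("NB1", "NB2"):
    assert nb_variance(3.0, 0.0, kind) == 3.0

# For c > 0 they depart from the Poisson differently as the mean grows,
# which is the structural difference the extended L statistic accommodates.
print(nb_variance(10.0, 0.5, "NB1"))  # 15.0
print(nb_variance(10.0, 0.5, "NB2"))  # 60.0
```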

Likelihood Ratio Test for the Epidemic Alternatives on the Zero-Inflated Poisson Model (변화시점이 있는 영과잉-포아송모형에서 돌출대립가설에 대한 우도비검정)

  • Kim, Kyung-Moo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.9 no.2
    • /
    • pp.247-253
    • /
    • 1998
  • In the case of the epidemic Zero-Inflated Poisson model, the likelihood ratio test was used for testing epidemic alternatives. Epidemic change-points were estimated by the method of least squares and used as starting points for computing the maximum likelihood estimators. Several parameters were then compared through Monte Carlo simulations. As a result, the maximum likelihood estimators for the epidemic change-points and the other parameters outperform the least squares and moment estimators.


Zero-Inflated Poisson Model with a Change-point (변화시점이 있는 영과잉-포아송모형)

  • Kim, Kyung-Moo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.9 no.1
    • /
    • pp.1-9
    • /
    • 1998
  • In the case of the Zero-Inflated Poisson model with a change-point, the likelihood ratio test statistic was used for testing the hypothesis of a change-point. The change-point and several parameters of interest were estimated by the method of moments and by maximum likelihood, and the estimators were compared via empirical mean squared error. Real data were examined under both the Zero-Inflated Poisson model with a change-point and the Poisson model without one.

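
The zero-inflated Poisson model underlying the two change-point papers above mixes a point mass at zero with a Poisson count. A minimal sketch of its probability mass function and log-likelihood (function names are mine; the papers' LRT compares the maximized log-likelihood with and without a split at a candidate change-point):

```python
import math

def zip_pmf(k, lam, pi):
    """P(X = k) under a zero-inflated Poisson: with probability pi the
    count is a structural zero, otherwise it is Poisson(lam)."""
    pois = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1.0 - pi) * pois if k == 0 else (1.0 - pi) * pois

def zip_loglik(data, lam, pi):
    """Log-likelihood of a ZIP sample. A change-point likelihood ratio
    test compares the maximized log-likelihood of a single (lam, pi) for
    the whole series against the sum of the maximized log-likelihoods
    before and after each candidate change-point."""
    return sum(math.log(zip_pmf(k, lam, pi)) for k in data)
```

With pi = 0 the model reduces to a plain Poisson, which is the boundary case that motivates testing for zero inflation separately from the change-point.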

The Effects of Dispersion Parameters and Test for Equality of Dispersion Parameters in Zero-Truncated Bivariate Generalized Poisson Models (제로절단된 이변량 일반화 포아송 분포에서 산포모수의 효과 및 산포의 동일성에 대한 검정)

  • Lee, Dong-Hee;Jung, Byoung-Cheol
    • The Korean Journal of Applied Statistics
    • /
    • v.23 no.3
    • /
    • pp.585-594
    • /
    • 2010
  • This study, investigates the effects of dispersion parameters between two response variables in zero-truncated bivariate generalized Poisson distributions. A Monte Carlo study shows that the zero-truncated bivariate Poisson and negative binomial models fit poorly wherein the zero-truncated bivariate count data has heterogeneous dispersion parameters on dependent variables. In addition, we derive the score test for testing the equality of the dispersion parameters and compare its efficiency with the likelihood ratio test.

Analysis of Three-Dimensional Rigid-Body Collisions with Friction -Collisions between Ellipsoids- (마찰력이 개재된 3차원 강체충돌 해석 - 타원체간 충돌 -)

  • Han, In-Hwan;Jo, Jeong-Ho
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.20 no.5
    • /
    • pp.1486-1497
    • /
    • 1996
  • The problem of determining the three-dimensional motion of any two rough bodies after a collision involves rather lengthy analysis, and in some respects it differs essentially from the corresponding problem in two dimensions. We consider the special problem in which two rough ellipsoids moving in any manner collide, and analyze the three-dimensional impact process with Coulomb friction and Poisson's hypothesis. The differential equations that describe the impact process induce a flow in the tangent-velocity space, and the flow patterns characterize the possible impact cases. Using a graphical method in impulse space and numerical integration, we analyzed the impact process in all possible cases and present an algorithm for determining the post-impact motion. The principles can be applied to the general problem in three dimensions. We verified the effectiveness of the analysis by simulating numerous significant examples.
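
Poisson's hypothesis, as used above, defines restitution in terms of impulse rather than velocity: the restitution-phase impulse is e times the compression-phase impulse. A minimal frictionless, central-impact sketch (my own illustration; in this special case Poisson's and Newton's hypotheses coincide, whereas with Coulomb friction, the regime the paper analyzes, they can differ):

```python
def central_impact_poisson(m1, m2, v1, v2, e):
    """Post-impact velocities for a central (collinear) impact using
    Poisson's hypothesis: the restitution impulse is e times the
    compression impulse Pc that brings the relative velocity to zero."""
    m_eff = m1 * m2 / (m1 + m2)
    Pc = m_eff * (v2 - v1)   # impulse on body 1 that ends compression
    P = (1.0 + e) * Pc       # total impulse under Poisson's hypothesis
    return v1 + P / m1, v2 - P / m2

# Equal masses, elastic impact (e = 1): the bodies exchange velocities.
print(central_impact_poisson(1.0, 1.0, 1.0, 0.0, 1.0))  # (0.0, 1.0)
```

Momentum is conserved by construction, since the same impulse P acts with opposite signs on the two bodies.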

Fuzzy Hypothesis Test by Poisson Test for Most Powerful Test (최강력 검정을 위한 퍼지 포아송 가설의 검정)

  • Kang, Man-Ki;Seo, Hyun-A
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.6
    • /
    • pp.809-813
    • /
    • 2009
  • We construct most powerful fuzzy tests for certain fuzzy situations of the Poisson distribution. By the Neyman-Pearson theorem, if ${\theta}_0$ and ${\theta}_1$ are distinct fuzzy values of ${\Omega}=\{{\theta}\;:\;{\theta}={\theta}_0,\;{\theta}_1\}$ such that $L({\theta}_0\;:\;X)/L({\theta}_1\;:\;X) < k$, then $k$ is a fuzzy number. For each fuzzy random sample point $X\;{\subset}\;C$, we obtain a most powerful test for the fuzzy critical region $C$ by means of an agreement index.
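
The crisp (non-fuzzy) core of the construction above is the Neyman-Pearson likelihood ratio for a Poisson sample, which the paper fuzzifies. A minimal sketch of that ratio (the function name is mine; the fuzzy agreement-index step is not reproduced here):

```python
import math

def poisson_lr(x, lam0, lam1):
    """Likelihood ratio L(lam0; x) / L(lam1; x) for an i.i.d. Poisson
    sample. By the Neyman-Pearson lemma, the most powerful test of
    lam0 versus lam1 rejects when this ratio falls below a cutoff k."""
    s, n = sum(x), len(x)
    # L(lam; x) proportional to exp(-n*lam) * lam**s, so the ratio is:
    return math.exp(n * (lam1 - lam0)) * (lam0 / lam1) ** s

# With lam1 > lam0 the ratio is decreasing in sum(x): a large total
# count favors the alternative, so the rejection region is one-sided.
print(poisson_lr([2, 3, 1, 4], 1.0, 2.0) < poisson_lr([0, 1, 0, 1], 1.0, 2.0))
```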

Overdispersion in count data - a review (가산자료(count data)의 과산포 검색: 일반화 과정)

  • 김병수;오경주;박철용
    • The Korean Journal of Applied Statistics
    • /
    • v.8 no.2
    • /
    • pp.147-161
    • /
    • 1995
  • The primary objective of this paper is to review parametric models and test statistics related to overdispersion of count data. The Poisson or binomial assumption often fails to explain overdispersion. We reviewed real examples of overdispersion in count data from toxicological and teratological experiments, along with several models suggested for accommodating extra-binomial variation or hyper-Poisson variability, noting how these models have been generalized and further developed. The approaches suggested for overdispersion fall into two broad categories: one develops a parametric model for it, while the other assumes a particular relationship between the variance and the mean of the response variable and derives a score test statistic for detecting the overdispersion. Recently, Dean (1992) derived a general score test statistic for detecting overdispersion within the exponential family.

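
For the second category of approaches reviewed above, a common form of the score statistic for overdispersion in an i.i.d. sample (a Dean-and-Lawless-type statistic; this particular formula is a standard textbook version, not necessarily the exact one derived in the review) is:

```python
import math

def overdispersion_score(x):
    """Score-type statistic against the Poisson null:
        T = sum((x_i - xbar)^2 - x_i) / (xbar * sqrt(2n)),
    approximately standard normal under the Poisson assumption.
    Large positive values indicate overdispersion."""
    n = len(x)
    xbar = sum(x) / n
    num = sum((xi - xbar) ** 2 - xi for xi in x)
    return num / (xbar * math.sqrt(2 * n))

# Strongly overdispersed counts give a large positive statistic,
# while underdispersed counts push it negative.
print(overdispersion_score([0, 0, 0, 10, 10, 10]))
print(overdispersion_score([5, 5, 5, 5]))
```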

Text Filtering using Iterative Boosting Algorithms (반복적 부스팅 학습을 이용한 문서 여과)

  • Hahn, Sang-Youn;Zang, Byoung-Tak
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.4
    • /
    • pp.270-277
    • /
    • 2002
  • Text filtering is the task of deciding whether a document is relevant to a specified topic. As the Internet and the Web become widespread and the number of documents delivered by e-mail grows explosively, the importance of text filtering increases as well. The aim of this paper is to improve the accuracy of text filtering systems by using machine learning techniques. We apply AdaBoost algorithms to the filtering task. An AdaBoost algorithm generates and combines a series of simple hypotheses, each of which decides the relevance of a document to a topic on the basis of whether or not the document includes a certain word. We begin with an existing AdaBoost algorithm whose weak hypotheses output 1 or -1, and then extend it to use weak hypotheses with real-valued outputs, as recently proposed to improve error-reduction rates and final filtering performance. Next, we attempt further improvement by setting the weights randomly according to the continuous Poisson distribution, executing AdaBoost, repeating these steps several times, and then combining all the hypotheses learned. This mitigates the overfitting problem that may occur when learning from a small amount of data. Experiments were performed on the real document collections used in TREC-8, a well-established text retrieval contest; the dataset includes Financial Times articles from 1992 to 1994. The experimental results show that AdaBoost with real-valued hypotheses outperforms AdaBoost with binary-valued hypotheses, and that AdaBoost iterated with random weights further improves filtering accuracy. Comparison results for all participants in the TREC-8 filtering task are also provided.
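
The binary-valued variant described above can be sketched as follows. This is a minimal illustration with my own names and a toy corpus; the random initialization uses exponential draws as a stand-in for the paper's continuous-Poisson scheme, and the real-valued-hypothesis extension is not reproduced:

```python
import math
import random

def adaboost(docs, labels, vocab, rounds=10, rng=None):
    """Minimal AdaBoost for text filtering. Each weak hypothesis is a
    word-presence stump predicting +1 (relevant) or -1 (irrelevant)."""
    rng = rng or random.Random(0)
    w = [rng.expovariate(1.0) for _ in docs]        # randomized init (assumption)
    total = sum(w)
    w = [wi / total for wi in w]
    ensemble = []
    for _ in range(rounds):
        # choose the stump (word, sign) with the smallest weighted error
        best = None
        for word in vocab:
            for sign in (1, -1):
                err = sum(wi for wi, d, y in zip(w, docs, labels)
                          if (sign if word in d else -sign) != y)
                if best is None or err < best[0]:
                    best = (err, word, sign)
        err, word, sign = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)      # guard log(0)
        alpha = 0.5 * math.log((1.0 - err) / err)
        ensemble.append((alpha, word, sign))
        # reweight: misclassified documents gain weight
        w = [wi * math.exp(-alpha * y * (sign if word in d else -sign))
             for wi, d, y in zip(w, docs, labels)]
        total = sum(w)
        w = [wi / total for wi in w]

    def predict(doc):
        score = sum(a * (s if wd in doc else -s) for a, wd, s in ensemble)
        return 1 if score >= 0 else -1
    return predict

# Tiny hypothetical corpus: documents as word sets, labels +1/-1.
docs = [{"poisson", "test"}, {"poisson"}, {"cat"}, {"dog", "cat"}]
labels = [1, 1, -1, -1]
predict = adaboost(docs, labels, vocab={"poisson", "cat", "dog", "test"})
```

Repeating the whole run with fresh random initial weights and averaging the resulting ensembles is what gives the paper's iterated variant its regularizing effect.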

Testing for Overdispersion in a Bivariate Negative Binomial Distribution Using Bootstrap Method (이변량 음이항 모형에서 붓스트랩 방법을 이용한 과대산포에 대한 검정)

  • Jhun, Myoung-Shic;Jung, Byoung-Cheol
    • The Korean Journal of Applied Statistics
    • /
    • v.21 no.2
    • /
    • pp.341-353
    • /
    • 2008
  • The bootstrap method for the score test statistic is proposed in a bivariate negative binomial distribution. The Monte Carlo study shows that the score test for testing overdispersion underestimates the nominal significance level, while the score test for "intrinsic correlation" overestimates the nominal one. To overcome this problem, we propose a bootstrap method for the score test. We find that bootstrap methods keep the significance level close to the nominal significance level for testing the hypothesis. An empirical example is provided to illustrate the results.
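
The calibration problem reported above (an asymptotic score test whose actual size drifts from the nominal level) is exactly what a parametric bootstrap addresses. A univariate sketch under the Poisson null (my own simplified statistic and function names, not the paper's bivariate negative binomial score test):

```python
import math
import random

def dispersion_stat(x):
    """Index-of-dispersion style statistic; large values indicate
    overdispersion relative to the Poisson."""
    n = len(x)
    xbar = sum(x) / n
    return sum((xi - xbar) ** 2 for xi in x) / max(xbar, 1e-12)

def poisson_draw(lam, rng):
    """Poisson variate via Knuth's method (adequate for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def bootstrap_pvalue(x, B=500, seed=1):
    """Parametric bootstrap: refer the observed statistic to its
    distribution under the fitted Poisson null instead of the asymptotic
    reference, keeping the test's size near the nominal level."""
    rng = random.Random(seed)
    lam = sum(x) / len(x)
    t_obs = dispersion_stat(x)
    hits = sum(dispersion_stat([poisson_draw(lam, rng) for _ in x]) >= t_obs
               for _ in range(B))
    return (1 + hits) / (B + 1)
```

The `(1 + hits) / (B + 1)` form avoids a bootstrap p-value of exactly zero, a standard Monte Carlo correction.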

Impact of Heterogeneous Dispersion Parameter on the Expected Crash Frequency (이질적 과분산계수가 기대 교통사고건수 추정에 미치는 영향)

  • Shin, Kangwon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.9
    • /
    • pp.5585-5593
    • /
    • 2014
  • This study tested the hypothesis that the significance of the heterogeneous dispersion parameter in the safety performance function (SPF) used to estimate expected crashes is affected by the endogenous heterogeneous prior distributions, and analyzed the impact of a mis-specified dispersion parameter on the evaluation results for traffic safety countermeasures. In particular, this study simulated Poisson means based on heterogeneous dispersion parameters and estimated SPFs using both the negative binomial (NB) model and the heterogeneous negative binomial (HNB) model, in order to analyze the impact of model mis-specification on the mean and dispersion functions of the SPF. In addition, this study analyzed the errors in the crash reduction factors (CRFs) obtained when the two models are used to estimate the posterior means and variances, which are in turn estimated through the hyper-parameters of the heterogeneous prior distributions. The simulation results showed that mis-estimating the heterogeneous dispersion parameters through the NB model does not affect the coefficients of the mean function, but the variances of the prior distribution are seriously mis-estimated when the NB model is used to develop SPFs without accounting for heterogeneity in dispersion. Consequently, when the NB model is erroneously used to estimate prior distributions with heterogeneous dispersion parameters, the mis-estimated posterior mean can produce errors in CRFs of up to 120%.
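
The mechanism behind the CRF errors described above can be seen in the standard gamma-Poisson setup for SPFs. A minimal sketch (function names and the numerical values are my own illustration; the variance formula and empirical-Bayes weight are the standard negative binomial forms):

```python
def nb_prior_variance(mu, phi):
    """Prior variance of the gamma-Poisson (negative binomial) crash
    model in a safety performance function: Var = mu + mu^2 / phi,
    where the shape phi is site-specific in the heterogeneous NB model
    but pooled across sites in the plain NB model."""
    return mu + mu * mu / phi

def eb_posterior_mean(mu, phi, observed):
    """Standard empirical-Bayes posterior mean of the expected crash
    frequency: the weight w = phi / (phi + mu) shrinks the observed
    count toward the SPF prediction mu. A mis-estimated phi shifts this
    weight, which is how dispersion mis-specification propagates into
    the posterior mean and hence into the CRF."""
    w = phi / (phi + mu)
    return w * mu + (1.0 - w) * observed

# Hypothetical site: SPF predicts 5 crashes, 9 are observed.
mu, observed = 5.0, 9.0
print(eb_posterior_mean(mu, 1.0, observed))  # low phi: little shrinkage
print(eb_posterior_mean(mu, 8.0, observed))  # high phi: strong shrinkage
```

The two printed values bracket the observed count differently: pooling a single phi over sites whose true phi differs therefore biases the posterior means in opposite directions at the two kinds of sites, consistent with the abstract's finding.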