• Title/Summary/Keyword: Conditional likelihood


Investigation on Exact Tests (정확검정들에 대한 고찰)

  • 강승호
    • The Korean Journal of Applied Statistics
    • /
    • v.15 no.1
    • /
    • pp.187-199
    • /
    • 2002
  • When the sample size is small, exact tests are often employed because the asymptotic distribution of the test statistic is in doubt. The advantage of exact tests is that they are guaranteed to bound the type I error probability at the nominal level. In this paper we review methods of constructing exact tests, the underlying algorithms, and commercial software. We also examine the difference between exact p-values obtained from exact tests and true p-values obtained from the true underlying distribution.

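To make the contrast described in the abstract above concrete, here is a minimal sketch, assuming a made-up 2x2 table, that compares an exact p-value with the asymptotic chi-square p-value using SciPy. It illustrates the general idea only; it is not the paper's own algorithm or software.

```python
# A minimal sketch (not the paper's method): with small counts, the exact test
# controls the type I error by construction, while the chi-square p-value
# relies on an asymptotic approximation that may be unreliable.
from scipy.stats import fisher_exact, chi2_contingency

table = [[3, 7],
         [8, 2]]          # hypothetical small 2x2 contingency table

odds_ratio, p_exact = fisher_exact(table, alternative="two-sided")
chi2, p_asymp, dof, expected = chi2_contingency(table, correction=False)

print(f"Fisher exact p-value      : {p_exact:.4f}")
print(f"Pearson chi-square p-value: {p_asymp:.4f} (asymptotic, df={dof})")
```
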
Bayesian Approach for Software Reliability Models (소프트웨어 신뢰모형에 대한 베이지안 접근)

  • Choi, Ki-Heon
    • Journal of the Korean Data and Information Science Society
    • /
    • v.10 no.1
    • /
    • pp.119-133
    • /
    • 1999
  • A Markov chain Monte Carlo method is developed to compute the software reliability model. We consider the computational problem of determining the posterior distribution in Bayesian inference. Metropolis algorithms along with Gibbs sampling are proposed to perform Bayesian inference for the mixed model with record value statistics, which relaxes the monotonic intensity function assumption. For model determination, we explore the prequential conditional predictive ordinate criterion, which selects the best model with the largest posterior likelihood among models using all possible subsets of the component intensity functions. A numerical example with a simulated data set is given.

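The abstract above combines Metropolis steps with Gibbs sampling. The following is a minimal, generic Metropolis-within-Gibbs sketch for a toy normal model with simulated data; the model, prior, and tuning constants are assumptions for illustration and do not reproduce the paper's mixed software-reliability model.

```python
# Minimal Metropolis-within-Gibbs sketch for a toy Normal(mu, sigma^2) model
# (a generic illustration, not the mixed software-reliability model of the paper).
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=50)      # simulated data
n, ybar = len(y), y.mean()

mu, log_sig2 = 0.0, 0.0                          # initial values
draws = []
for it in range(5000):
    sig2 = np.exp(log_sig2)
    # Gibbs step: mu | sigma^2, y with a flat prior -> Normal(ybar, sigma^2/n)
    mu = rng.normal(ybar, np.sqrt(sig2 / n))

    # Metropolis step: random-walk proposal on log(sigma^2), prior p(sigma^2) ∝ 1/sigma^2
    # (log-target written on the log-sigma^2 scale, Jacobian included).
    def log_post(ls2):
        s2 = np.exp(ls2)
        return -0.5 * n * ls2 - np.sum((y - mu) ** 2) / (2 * s2)

    prop = log_sig2 + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(log_sig2):
        log_sig2 = prop
    draws.append((mu, np.exp(log_sig2)))

draws = np.array(draws)[1000:]                   # drop burn-in
print("posterior means (mu, sigma^2):", draws.mean(axis=0))
```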

ECM and GLR Based Multiuser Detection with I-CSI

  • Maio Antonio De;Episcopo Roberto;Lops Marco
    • Journal of Communications and Networks
    • /
    • v.7 no.1
    • /
    • pp.29-35
    • /
    • 2005
  • This paper deals with the problem of multiuser detection over a direct-sequence code-division multiple access (DS-CDMA) channel with incomplete channel state information (I-CSI). We devise and assess two novel recursive detectors based on the expectation conditional maximization (ECM) algorithm and the generalized likelihood ratio (GLR) principle, respectively. Both receivers entail an affordable computational complexity. Moreover, the performance assessment, conducted via Monte Carlo techniques, shows that they achieve satisfactory performance levels and outperform linear detectors.

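As a hedged illustration of the GLR principle invoked above (not the DS-CDMA receiver itself), the sketch below tests for a known waveform of unknown amplitude in white Gaussian noise; the waveform, noise level, and threshold are invented for the example.

```python
# Toy GLRT: H0: y = n  vs  H1: y = a*s + n, with a unknown and n ~ N(0, sigma^2 I).
# Maximizing over a gives a_hat = s^T y / ||s||^2, and the GLR statistic becomes
# 2*log(Lambda) = (s^T y)^2 / (sigma^2 * ||s||^2), which is chi^2_1 under H0.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
N, sigma = 64, 1.0
s = np.cos(2 * np.pi * 0.1 * np.arange(N))      # known signature waveform
y = 0.4 * s + rng.normal(0.0, sigma, size=N)    # observation generated under H1

glr = (s @ y) ** 2 / (sigma ** 2 * (s @ s))     # GLR test statistic
p_value = chi2.sf(glr, df=1)
threshold = chi2.ppf(0.99, df=1)                # 1% false-alarm level under H0
print(f"GLR = {glr:.2f}, p-value = {p_value:.4f}, decide H1: {glr > threshold}")
```
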
Hierarchical Bayesian Network Learning for Large-scale Data Analysis (대규모 데이터 분석을 위한 계층적 베이지안망 학습)

  • Hwang Kyu-Baek;Kim Byoung-Hee;Zhang Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07b
    • /
    • pp.724-726
    • /
    • 2005
  • A Bayesian network is a model that represents the probabilistic relationships (conditional independence) among a large number of variables as a graph structure. Such Bayesian networks are well suited to data mining via unsupervised learning, for which the structure and parameters of the Bayesian network are learned from data. Since finding the Bayesian network structure that maximizes the likelihood of the given data is known to be NP-hard, approximate solutions obtained by greedy search are commonly used. However, even these approximate learning methods become practically infeasible when the data comprise thousands to tens of thousands of variables, owing to the enormous amount of computation required. In this paper, we propose a hierarchical Bayesian network model that can be learned from such large-scale data, together with a learning method for it, and demonstrate its feasibility through experiments.

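Because the abstract notes that likelihood-maximizing structure search is NP-hard and greedy search is the usual workaround, the sketch below shows the kind of per-node BIC score such a greedy learner repeatedly tries to improve, on made-up binary data. It is a generic illustration of score-based structure learning, not the hierarchical model or learning method proposed in the paper.

```python
# Minimal BIC family score for one discrete node given a candidate parent set.
# A greedy structure learner repeatedly picks the edge addition/removal/reversal
# that most improves the sum of these per-node scores while keeping the graph acyclic.
import numpy as np
from itertools import product

def bic_node_score(data, child, parents, arities):
    """data: dict name -> integer-coded array; arities: dict name -> number of states."""
    n = len(data[child])
    r = arities[child]
    parent_states = list(product(*(range(arities[p]) for p in parents))) or [()]
    score = 0.0
    for config in parent_states:
        mask = np.ones(n, dtype=bool)
        for p, v in zip(parents, config):
            mask &= (data[p] == v)
        counts = np.bincount(data[child][mask], minlength=r).astype(float)
        total = counts.sum()
        if total > 0:
            probs = counts / total
            nz = counts > 0
            score += np.sum(counts[nz] * np.log(probs[nz]))   # multinomial log-likelihood
    penalty = 0.5 * np.log(n) * (r - 1) * len(parent_states)  # BIC complexity term
    return score - penalty

# Toy usage: does adding A as a parent of B improve B's score?
rng = np.random.default_rng(2)
A = rng.integers(0, 2, size=500)
B = (A ^ (rng.random(500) < 0.2)).astype(int)                 # B depends on A with 20% noise
data, arities = {"A": A, "B": B}, {"A": 2, "B": 2}
print("score(B | {})  =", bic_node_score(data, "B", (), arities))
print("score(B | {A}) =", bic_node_score(data, "B", ("A",), arities))
```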

Influence diagnostics for skew-t censored linear regression models

  • Marcos S Oliveira;Daniela CR Oliveira;Victor H Lachos
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.6
    • /
    • pp.605-629
    • /
    • 2023
  • This paper proposes some diagnostic procedures for the skew-t linear regression model with censored response. The skew-t distribution is an attractive family of asymmetrical heavy-tailed densities that includes the normal, skew-normal, and Student's t distributions as special cases. Inspired by the power and wide applicability of the EM-type algorithm, local and global influence analyses, based on the conditional expectation of the complete-data log-likelihood function, are developed following Zhu and Lee's approach. For the local influence analysis, four specific perturbation schemes are discussed. Two real data sets, from education and economics, which are right- and left-censored, respectively, are analyzed in order to illustrate the usefulness of the proposed methodology.

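Local influence under the Q-function of an EM-type algorithm is model-specific, but the flavor of a global, case-deletion influence analysis can be sketched generically. The code below computes the likelihood displacement for an ordinary normal linear regression with simulated data; it is a stand-in illustration, not the skew-t censored model or the Zhu and Lee machinery of the paper.

```python
# Likelihood displacement LD_i = 2 * [ l(theta_hat) - l(theta_hat_(i)) ]:
# refit without case i and measure how far the full-data log-likelihood drops
# when evaluated at the deleted-case estimates. Large LD_i flags influential cases.
import numpy as np

def gauss_loglik(y, X, beta, sigma2):
    resid = y - X @ beta
    n = len(y)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * np.sum(resid ** 2) / sigma2

def mle(y, X):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = np.mean((y - X @ beta) ** 2)       # ML (not unbiased) variance estimate
    return beta, sigma2

rng = np.random.default_rng(3)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)
y[0] += 5.0                                     # plant an outlier in case 0

beta_hat, s2_hat = mle(y, X)
full_ll = gauss_loglik(y, X, beta_hat, s2_hat)
ld = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i, s2_i = mle(y[keep], X[keep])
    ld[i] = 2.0 * (full_ll - gauss_loglik(y, X, b_i, s2_i))
print("most influential cases:", np.argsort(ld)[-3:][::-1])
```
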
Predicting depth value of the future depth-based multivariate record

  • Samaneh Tata;Mohammad Reza Faridrohani
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.5
    • /
    • pp.453-465
    • /
    • 2023
  • The prediction problem for univariate records has been discussed by many authors on the basis of record values, though it has not been addressed for multivariate records. There are various definitions of multivariate records, among which depth-based records are adopted in this paper. By means of the maximum likelihood and conditional median methods, point and interval predictions of the depth values associated with future depth-based multivariate records are considered on the basis of the observed ones. Observations drawn from members of the family of elliptical distributions form the main setting in which this problem is studied. Finally, the satisfactory performance of the prediction methods is illustrated via some simulation studies and a real dataset on drought in Kermanshah city.

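As a rough univariate analogue of the conditional median prediction mentioned above (explicitly not the depth-based multivariate construction of the paper): for upper records from an iid continuous sequence, the next record given the current record value r follows the parent distribution truncated to (r, ∞), so its conditional median is F^{-1}(1 - S(r)/2). The parent distribution and record value below are assumptions made for illustration.

```python
# Conditional median predictor of the next upper record, univariate analogue.
# Given current record r, the next record ~ F truncated to (r, inf), so the
# conditional median is F^{-1}(1 - S(r)/2), with S = 1 - F the survival function.
import numpy as np
from scipy import stats

def next_record_conditional_median(r, dist):
    s_r = dist.sf(r)                       # survival at the current record
    return dist.ppf(1.0 - s_r / 2.0)       # quantile where survival equals S(r)/2

dist = stats.norm(loc=0.0, scale=1.0)      # assumed parent distribution (illustrative)
r = 1.8                                    # current observed record value
print("conditional median of next record:", next_record_conditional_median(r, dist))

# Quick Monte Carlo check: the first observation exceeding r has this distribution.
rng = np.random.default_rng(4)
draws = dist.rvs(size=2_000_000, random_state=rng)
exceed = draws[draws > r]
print("simulated median of exceedances :", np.median(exceed))
```
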
Non-stationary Frequency Analysis with Climate Variability using Conditional Generalized Extreme Value Distribution (기후변동을 고려한 조건부 GEV 분포를 이용한 비정상성 빈도분석)

  • Kim, Byung-Sik;Lee, Jung-Ki;Kim, Hung-Soo;Lee, Jin-Won
    • Journal of Wetlands Research
    • /
    • v.13 no.3
    • /
    • pp.499-514
    • /
    • 2011
  • An underlying assumption of traditional hydrologic frequency analysis is that climate, and hence the frequency of hydrologic events, is stationary, or unchanging over time. Under stationary conditions, the distribution of the variable of interest is invariant to temporal translation. Water resources infrastructure planning and design, such as dams, levees, canals, bridges, and culverts, relies on an understanding of past conditions and projection of future conditions. But water managers have always known that our world is inherently non-stationary, and they routinely deal with this in management and planning. The aim of this paper is to give a brief introduction to non-stationary extreme value analysis methods. In this paper, a non-stationary hydrologic frequency analysis approach is introduced in order to determine probability rainfall under a changing climate. The non-stationary statistical approach is based on the conditional generalized extreme value (GEV) distribution and maximum likelihood parameter estimation. The method is applied to annual maximum 24-hour rainfall. The results show that the non-stationary GEV approach is suitable for determining probability rainfall under a changing climate, such as one exhibiting a trend. Moreover, a non-stationary frequency analysis is carried out using the SOI (Southern Oscillation Index) of ENSO (El Nino-Southern Oscillation).

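A minimal sketch of the conditional (non-stationary) GEV idea described above: the location parameter varies linearly with a covariate and all parameters are estimated by maximum likelihood. The covariate, synthetic annual maxima, and starting values are invented for illustration, and SciPy's genextreme parameterization (c = -ξ) is assumed; this is not the authors' implementation.

```python
# Non-stationary GEV: mu_t = b0 + b1 * z_t with constant scale and shape,
# fitted by maximizing the GEV log-likelihood (scipy's genextreme uses c = -xi).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(5)
years = np.arange(50)
z = (years - years.mean()) / years.std()            # standardized covariate (e.g., time or SOI)
true_mu = 100.0 + 8.0 * z
y = genextreme.rvs(c=-0.1, loc=true_mu, scale=20.0, random_state=rng)  # synthetic annual maxima

def neg_loglik(theta):
    b0, b1, log_scale, shape_c = theta
    mu = b0 + b1 * z
    return -np.sum(genextreme.logpdf(y, c=shape_c, loc=mu, scale=np.exp(log_scale)))

start = np.array([y.mean(), 0.0, np.log(y.std()), 0.0])
fit = minimize(neg_loglik, start, method="Nelder-Mead")
b0, b1, log_scale, c = fit.x
print(f"location: {b0:.1f} + {b1:.1f} * z,  scale: {np.exp(log_scale):.1f},  xi: {-c:.2f}")
```
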
An overview of Hawkes processes and their applications (혹스 과정의 개요 및 응용)

  • Mijeong Kim
    • The Korean Journal of Applied Statistics
    • /
    • v.36 no.4
    • /
    • pp.309-322
    • /
    • 2023
  • The Hawkes process is a point process with self-exciting characteristics. It has been mainly used to describe seismic phenomena in which aftershocks occur due to the main earthquake. Recently, it has been used to explain various phenomena with self-exciting properties, such as the spread of infectious diseases and the spread of news on SNS. The Hawkes process can be flexibly modified according to the characteristics of events by using various types of excitation functions. Since the maximum likelihood estimator is difficult to implement numerically, estimation methods have continued to be improved up to the present. In this paper, the conditional intensity function and excitation function are explained to describe the Hawkes process. Then, existing examples of Hawkes processes used in the seismic, epidemiological, criminal, and financial fields are described and estimation methods are introduced. I analyze earthquakes that occurred in Gyeongsang-do, Korea from November 2017 to December 2022, using the R package ETAS.

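For concreteness, here is a minimal sketch of the conditional intensity with an exponential excitation function and the corresponding log-likelihood on [0, T]. The parameter values and event times are made up, and this generic kernel is not the ETAS specification used in the paper's earthquake analysis.

```python
# Hawkes process with exponential excitation:
#   lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))
# Log-likelihood on [0, T]:
#   sum_i log lambda(t_i) - mu*T - (alpha/beta) * sum_i (1 - exp(-beta*(T - t_i)))
import numpy as np

def hawkes_loglik(times, T, mu, alpha, beta):
    times = np.asarray(times)
    ll = 0.0
    for i, t in enumerate(times):
        excitation = np.sum(np.exp(-beta * (t - times[:i])))
        ll += np.log(mu + alpha * excitation)
    compensator = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
    return ll - compensator

# Toy usage with made-up event times; stationarity requires alpha/beta < 1.
events = [0.5, 1.2, 1.3, 3.7, 4.0, 4.1, 8.9]
print(hawkes_loglik(events, T=10.0, mu=0.5, alpha=0.8, beta=1.5))
```
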
Developing statistical models and constructing clinical systems for analyzing semi-competing risks data produced from medicine, public health, and epidemiology (의료, 보건, 역학 분야에서 생산되는 준경쟁적 위험자료를 분석하기 위한 통계적 모형의 개발과 임상분석시스템 구축을 위한 연구)

  • Kim, Jinheum
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.4
    • /
    • pp.379-393
    • /
    • 2020
  • A terminal event such as death may censor an intermediate event such as relapse, but not vice versa in semi-competing risks data, which are often seen in medicine, public health, and epidemiology. We propose a Weibull regression model with a normal frailty to analyze semi-competing risks data when all three transition times of the illness-death model are possibly interval-censored. We construct the conditional likelihood separately depending on the types of subjects: still alive with or without the intermediate event, dead with or without the intermediate event, and dead with the intermediate event missing. Optimal parameter estimates are obtained by an iterative quasi-Newton algorithm after marginalizing the full likelihood using adaptive importance sampling. We illustrate the proposed method with extensive simulation studies and PAQUID (Personnes Agées Quid) data.

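A minimal sketch of one building block described above: the Weibull log-likelihood when an event time is exactly observed, right-censored, or only known to lie in an interval. It covers a single transition with made-up data and omits the normal frailty and the illness-death structure of the proposed model.

```python
# Weibull log-likelihood with exact, right-censored, and interval-censored times.
# Contributions: exact -> log f(t); right-censored at c -> log S(c);
# interval-censored in (L, R] -> log( S(L) - S(R) ).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_loglik(theta, exact, right, intervals):
    log_shape, log_scale = theta
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    ll = np.sum(weibull_min.logpdf(exact, shape, scale=scale))
    ll += np.sum(weibull_min.logsf(right, shape, scale=scale))
    L, R = intervals[:, 0], intervals[:, 1]
    ll += np.sum(np.log(weibull_min.sf(L, shape, scale=scale)
                        - weibull_min.sf(R, shape, scale=scale)))
    return -ll

# Made-up data for illustration only.
exact = np.array([1.2, 2.5, 0.8, 3.1])
right = np.array([4.0, 4.0, 2.2])                 # still event-free at these times
intervals = np.array([[1.0, 2.0], [0.5, 1.5]])    # event known to fall in (L, R]
fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(exact, right, intervals), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(fit.x)
print(f"Weibull shape = {shape_hat:.2f}, scale = {scale_hat:.2f}")
```
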
A Monte Carlo Comparison of the Small Sample Behavior of Disparity Measures (소표본에서 차이측도 통계량의 비교연구)

  • 홍종선;정동빈;박용석
    • The Korean Journal of Applied Statistics
    • /
    • v.16 no.2
    • /
    • pp.455-467
    • /
    • 2003
  • There has been a long debate on the applicability of the chi-square approximation to statistics based on small sample sizes. Extending the comparison results of Rudas (1986) among the Pearson chi-square Χ$^2$, generalized likelihood ratio G$^2$, and power divergence Ι(2/3) statistics, recently developed disparity statistics (BWHD(1/9), BWCS(1/3), NED(4/3)) are compared and analyzed in this paper. Through Monte Carlo studies of the independence model for two-dimensional contingency tables and the conditional independence and one-variable independence models for three-dimensional tables, simulated 90 and 95 percentage points and approximate 95% confidence intervals for the true percentage points are obtained. It is found that the Χ$^2$, Ι(2/3), and BWHD(1/9) test statistics behave very similarly and appear more applicable for small sample sizes than the others.
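For reference, a minimal sketch of the power divergence family to which the compared statistics belong: λ = 1 recovers Pearson's Χ$^2$, λ → 0 gives G$^2$, and λ = 2/3 is the Ι(2/3) statistic mentioned above. The 2x2 table is invented for illustration, and the BWHD, BWCS, and NED disparity statistics are not implemented here.

```python
# Power divergence statistic (Cressie and Read):
#   2*n*I(lambda) = (2 / (lambda*(lambda+1))) * sum O_ij * [ (O_ij/E_ij)^lambda - 1 ]
# lambda = 1 -> Pearson X^2; lambda -> 0 -> likelihood ratio G^2; lambda = 2/3 -> I(2/3).
import numpy as np

def power_divergence(observed, expected, lam):
    O, E = np.asarray(observed, float), np.asarray(expected, float)
    if abs(lam) < 1e-12:                                   # limit lambda -> 0: G^2
        return 2.0 * np.sum(O * np.log(O / E))
    return (2.0 / (lam * (lam + 1.0))) * np.sum(O * ((O / E) ** lam - 1.0))

O = np.array([[12.0, 5.0],
              [7.0, 9.0]])                                 # hypothetical 2x2 table
E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()       # expected counts under independence

for lam, name in [(1.0, "Pearson X^2"), (0.0, "G^2"), (2.0 / 3.0, "I(2/3)")]:
    print(f"{name:12s}: {power_divergence(O, E, lam):.3f}")
```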