• Title/Abstract/Keyword: likelihood interval

194 search results

Initial Value Selection in Applying an EM Algorithm for Recursive Models of Categorical Variables

  • Jeong, Mi-Sook; Kim, Sung-Ho; Jeong, Kwang-Mo
    • Journal of the Korean Statistical Society / Vol. 27, No. 1 / pp. 25-55 / 1998
  • Maximum likelihood estimates (MLEs) for recursive models of categorical variables are discussed under an EM framework. Since the MLEs obtained by EM often depend on the choice of initial values, we explore reasonable rules for selecting them. Simulation results strongly support the proposed rules.
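The sensitivity of EM to its starting point is easy to demonstrate. The sketch below runs EM from several random initial values and keeps the run with the highest log-likelihood, one simple version of the kind of selection rule discussed above. It uses a hypothetical two-component binomial mixture, not the paper's recursive categorical models; all names and data here are invented for illustration.

```python
import numpy as np

def em_binom_mixture(x, m, pi, p, q, n_iter=300):
    """EM for a two-component binomial mixture (m trials per unit).
    Returns the fitted (pi, p, q) and the final log-likelihood
    (up to an additive constant: binomial coefficients are dropped)."""
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        a = pi * p**x * (1 - p)**(m - x)
        b = (1 - pi) * q**x * (1 - q)**(m - x)
        r = a / (a + b)
        # M-step: weighted mixing proportion and success probabilities
        pi = r.mean()
        p = (r * x).sum() / (m * r.sum())
        q = ((1 - r) * x).sum() / (m * (1 - r).sum())
    ll = np.log(pi * p**x * (1 - p)**(m - x)
                + (1 - pi) * q**x * (1 - q)**(m - x)).sum()
    return (pi, p, q), ll

rng = np.random.default_rng(1)
m = 10
comp = rng.random(1000) < 0.6                   # latent component labels
x = rng.binomial(m, np.where(comp, 0.8, 0.2))   # observed success counts

# run EM from several random initial values; keep the best-likelihood run
runs = [em_binom_mixture(x, m, 0.5, rng.uniform(0.1, 0.9), rng.uniform(0.1, 0.9))
        for _ in range(10)]
(pi_hat, p_hat, q_hat), _ = max(runs, key=lambda t: t[1])
```

With well-separated components most restarts agree, but a poor start can stall near a degenerate point where both components coincide; comparing final log-likelihoods filters such runs out.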


Using a Bayesian Estimation Technique to Analyze Dichotomous Choice Contingent Valuation Data

  • 유승훈
    • Environmental and Resource Economics Review / Vol. 11, No. 1 / pp. 99-119 / 2002
  • As an alternative to the classical maximum likelihood approach for analyzing dichotomous choice contingent valuation (DCCV) data, this paper develops a Bayesian approach. By using the ideas of Gibbs sampling and data augmentation, the approach enables one to perform exact inference for DCCV models. A by-product of the approach is a welfare measure, such as the mean willingness to pay, together with its confidence interval, which can be used for policy analysis. The efficacy of the approach relative to the classical one is discussed in the context of empirical DCCV studies. It is concluded that there appears to be considerable scope for the use of Bayesian analysis in dealing with DCCV data.
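Data augmentation for a binary response can be illustrated with the Albert-Chib Gibbs sampler for a probit model, a standard building block for Bayesian analysis of yes/no bid responses. This is a minimal sketch, not the paper's exact model: the flat prior, bid design, and coefficients below are assumptions, and numpy/scipy are assumed available.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_probit(X, y, n_draws=2000, burn=500, seed=0):
    """Albert-Chib Gibbs sampler for probit regression with a flat prior,
    using latent-utility data augmentation: z_i ~ N(x_i'beta, 1), truncated
    to be positive when y_i = 1 and negative when y_i = 0."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    V = np.linalg.inv(X.T @ X)            # posterior covariance of beta | z
    beta = np.zeros(k)
    draws = []
    for it in range(n_draws):
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)   # truncation bounds for z - mu
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        beta = rng.multivariate_normal(V @ X.T @ z, V)
        if it >= burn:
            draws.append(beta)
    return np.array(draws)

rng = np.random.default_rng(42)
bid = rng.uniform(0, 10, 800)                    # hypothetical bid design
X = np.column_stack([np.ones(800), bid])
y = (X @ np.array([2.0, -0.4]) + rng.standard_normal(800) > 0).astype(int)

draws = gibbs_probit(X, y)
wtp = -draws[:, 0] / draws[:, 1]                 # mean WTP implied by each draw
lo95, hi95 = np.percentile(wtp, [2.5, 97.5])     # 95% credible interval
```

The posterior draws of mean willingness to pay come for free as a transformation of the coefficient draws, which is exactly the kind of welfare by-product the abstract mentions.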


A Study of an NHPP Software Reliability Growth Model Based on a Polynomial Hazard Function

  • 김희철
    • Journal of the Korea Society of Digital Industry and Information Management / Vol. 7, No. 4 / pp. 7-14 / 2011
  • Infinite-failure NHPP models in the literature exhibit a constant, monotonically increasing, or monotonically decreasing failure occurrence rate per fault (hazard function). Such infinite non-homogeneous Poisson process models reflect the possibility of introducing new faults when correcting or modifying the software. In this paper, a polynomial hazard function is proposed that can be applied efficiently to software reliability. The parameters are estimated by the maximum likelihood method combined with the bisection method, and model selection is based on the mean squared error and the coefficient of determination. In a numerical example using failure interval times, the proposed polynomial hazard function model is compared with the log-power time model established in this field; because the polynomial model is more efficient in terms of reliability, it can be confirmed as a usable alternative to the existing model.
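The estimation step described above, maximum likelihood solved by bisection, can be sketched for the simplest power-law NHPP, where the score equation for the shape parameter is one-dimensional. The failure times below are illustrative, not the paper's data, and the power-law hazard stands in for its polynomial hazard.

```python
import math

def bisect(f, lo, hi, tol=1e-10, max_iter=200):
    """Simple bisection root finder; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(fmid) < tol or hi - lo < tol:
            return mid
        if (flo > 0) == (fmid > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# failure times (hours) observed on (0, T]; illustrative data only
times = [5.2, 40.3, 110.0, 190.5, 280.1, 410.7, 520.0, 660.2, 810.9, 950.4]
T, n = 1000.0, len(times)
sum_log = sum(math.log(t) for t in times)

# score equation dlogL/db = n/b + sum(log t_i) - n*log(T) = 0 for the
# power-law NHPP with mean value function m(t) = a * t^b
score = lambda b: n / b + sum_log - n * math.log(T)
b_hat = bisect(score, 1e-6, 50.0)
a_hat = n / T**b_hat      # the other score equation gives a in closed form
```

For this particular model the score equation also has a closed-form root, which makes a convenient check on the bisection output; a polynomial hazard generally has no such closed form, which is where bisection earns its keep.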

An HGLM Framework for Meta-Analysis of Clinical Trials with Binary Outcomes

  • Ha, Il-Do
    • Journal of the Korean Data and Information Science Society / Vol. 19, No. 4 / pp. 1429-1440 / 2008
  • In a meta-analysis combining the results from different clinical trials, it is important to consider the possible heterogeneity in outcomes between trials. Such variations can be regarded as random effects, so random-effect models such as HGLMs (hierarchical generalized linear models) are very useful. In this paper, we propose an HGLM framework for analyzing binomial response data that may show variation in the odds ratios between clinical trials. We also present prediction intervals for the random effects, which are useful in practice for investigating the heterogeneity of the trial effects. The proposed method is illustrated with a real data set on 22 trials on respiratory tract infections. We further demonstrate that an appropriate HGLM can be confirmed via model-selection criteria.
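As a point of comparison with the HGLM approach, the classical DerSimonian-Laird random-effects meta-analysis below pools log odds ratios and forms an approximate prediction interval for the effect in a new trial, playing a role similar to the random-effect prediction intervals discussed above. This is a sketch of the moment-based method, not the paper's h-likelihood machinery, and the six trials are invented.

```python
import math

def dl_meta(yi, vi):
    """DerSimonian-Laird random-effects meta-analysis of effect sizes yi
    with within-trial variances vi. Returns pooled effect, its SE, and tau^2."""
    k = len(yi)
    w = [1 / v for v in vi]
    y_fe = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    Q = sum(wi * (y - y_fe) ** 2 for wi, y in zip(w, yi))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)          # between-trial variance
    w_re = [1 / (v + tau2) for v in vi]
    mu = sum(wi * y for wi, y in zip(w_re, yi)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return mu, se, tau2

# log odds ratios and their variances from six hypothetical trials
yi = [-0.6, -0.2, -0.9, -0.4, 0.1, -0.5]
vi = [0.04, 0.09, 0.06, 0.05, 0.12, 0.07]

mu, se, tau2 = dl_meta(yi, vi)
# approximate 95% prediction interval for the effect in a new trial:
# it widens the confidence interval by the between-trial variance tau^2
pi_lo = mu - 1.96 * math.sqrt(se**2 + tau2)
pi_hi = mu + 1.96 * math.sqrt(se**2 + tau2)
```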


Empirical Fragility Curves for Bridges

  • 이종헌; 김운학; 최정호
    • Journal of the Korea Institute for Structural Maintenance and Inspection / Vol. 6, No. 1 / pp. 255-262 / 2002
  • This paper presents a statistical analysis of empirical fragility curves for bridges. The empirical fragility curves are developed using bridge damage data obtained from the 1995 Hyogoken-Nanbu (Kobe) earthquake. Two-parameter lognormal distribution functions are used to represent the fragility curves, with the parameters estimated by the maximum likelihood method. This paper also presents methods for testing the goodness of fit of the fragility curves and for estimating the confidence intervals of the two parameters (median and log-standard deviation) of the distribution. An analytical interpretation of the randomness and uncertainty associated with the median is provided.
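Fitting a two-parameter lognormal fragility curve by maximum likelihood amounts to maximizing a Bernoulli likelihood over binary damage indicators. A minimal sketch on synthetic data follows (invented ground-motion values and damage states, not the Kobe dataset), assuming numpy/scipy are available.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, pga, damaged):
    """Negative Bernoulli log-likelihood for a lognormal fragility curve
    P(damage | a) = Phi((ln a - ln c) / zeta), parameterized by (ln c, zeta)."""
    log_c, zeta = params
    p = norm.cdf((np.log(pga) - log_c) / zeta)
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard the log against exact 0 or 1
    return -np.sum(damaged * np.log(p) + (1 - damaged) * np.log(1 - p))

rng = np.random.default_rng(7)
pga = rng.uniform(0.05, 1.2, 500)      # hypothetical peak ground accelerations (g)
p_true = norm.cdf((np.log(pga) - np.log(0.4)) / 0.5)
damaged = (rng.random(500) < p_true).astype(float)   # simulated damage indicators

res = minimize(neg_loglik, x0=[np.log(0.3), 0.6], args=(pga, damaged),
               method="Nelder-Mead")
c_hat, zeta_hat = np.exp(res.x[0]), res.x[1]   # estimated median and log-std dev
```

Confidence intervals for the median and log-standard deviation, as in the paper, would then follow from the curvature of this same likelihood at the optimum.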

Association between Maternal Feeding Practices and Excessive Weight Gain in Infants

  • Ra, Jin Suk
    • Journal of Korean Academy of Community Health Nursing / Vol. 30, No. 1 / pp. 90-98 / 2019
  • Purpose: The purpose of this study is to identify the association between maternal feeding practices and excessive weight gain in infants. Methods: This study adopted a cross-sectional design and included 240 pairs of mothers and their infants (129 boys and 111 girls) at public healthcare centers in the Daejeon area of South Korea. The association between maternal feeding practices and excessive weight gain in infants was identified via multivariate analyses. Results: Among the 240 infants in this study, 39 (16.3%) gained excessive weight during the 12 months after birth. In multivariate logistic regression with adjustment for covariates, more than 7 months of exclusive breastfeeding was associated with a reduced likelihood of excessive weight gain in infants during the 12 months after birth (adjusted odds ratio: 0.39, 95% confidence interval: 0.02~0.81, p=.029). Conclusion: Based on these results, nurses in communities and clinics should educate mothers on the importance of longer durations of exclusive breastfeeding and develop strategies for encouraging such behavior. Furthermore, support for exclusive breastfeeding should be provided in various settings.
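An adjusted odds ratio and its confidence interval come from a fitted logistic coefficient, and the same log-scale Wald construction applies to a raw 2x2 table, as in the sketch below. The counts are hypothetical, chosen only to illustrate the arithmetic; they are not the study's data.

```python
import math

def odds_ratio_woolf(a, b, c, d):
    """Odds ratio for a 2x2 table (rows: exposed/unexposed; columns:
    outcome yes/no) with a Woolf (log-scale Wald) 95% confidence interval."""
    or_hat = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_hat) - 1.96 * se)
    hi = math.exp(math.log(or_hat) + 1.96 * se)
    return or_hat, lo, hi

# hypothetical counts: excessive weight gain in 4 of 80 infants exclusively
# breastfed >= 7 months vs. 35 of 160 breastfed for less (not the study's data)
or_hat, lo, hi = odds_ratio_woolf(4, 76, 35, 125)
```

An interval that stays below 1, as here, is what supports an "associated with reduced likelihood" reading; the study's adjusted estimate additionally controls for covariates in the regression.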

Different estimation methods for the unit inverse exponentiated Weibull distribution

  • Amal S. Hassan; Reem S. Alharbi
    • Communications for Statistical Applications and Methods / Vol. 30, No. 2 / pp. 191-213 / 2023
  • Unit distributions are frequently used in probability theory and statistics to describe meaningful variables taking values between zero and one. Using a convenient transformation, this study proposes the unit inverse exponentiated Weibull (UIEW) distribution, which is equally useful for modelling data on the unit interval. Statistical properties provided for this distribution include the quantile function, moments, incomplete moments, uncertainty measures, stochastic ordering, and stress-strength reliability. To estimate the parameters of the recommended distribution, well-known estimation techniques are utilised, including maximum likelihood, maximum product of spacings, least squares, weighted least squares, Cramer-von Mises, Anderson-Darling, and Bayesian estimation. Using simulated data, we compare how well the various estimators perform. According to the simulation outputs, the maximum product of spacings estimates attain lower values of the accuracy measures than the alternative estimates in the majority of situations. For two real datasets, the proposed model outperforms the beta, Kumaraswamy, unit Gompertz, unit Lomax, and complementary unit Weibull distributions on various comparative indicators.
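The maximum product of spacings method maximizes the product of successive CDF increments rather than the product of densities. The sketch below applies it to the Kumaraswamy distribution as a simple stand-in for the UIEW model; the distribution choice, sample, and starting values are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_mps(params, x_sorted):
    """Negative log maximum-product-of-spacings objective for the
    Kumaraswamy distribution, CDF F(x) = 1 - (1 - x^a)^b on (0, 1)."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    F = 1 - (1 - x_sorted**a) ** b
    # spacings include F(x_(1)) - 0 at the left and 1 - F(x_(n)) at the right
    spacings = np.diff(np.concatenate([[0.0], F, [1.0]]))
    spacings = np.clip(spacings, 1e-300, None)   # guard against underflow
    return -np.sum(np.log(spacings))

rng = np.random.default_rng(3)
# sample from Kumaraswamy(a=2, b=3) by inverting the CDF
u = rng.random(400)
x = (1 - (1 - u) ** (1 / 3)) ** (1 / 2)
x.sort()

res = minimize(neg_log_mps, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
a_hat, b_hat = res.x
```

Swapping in the UIEW CDF would only change the `F` line; the spacings construction is the same for any fully specified CDF on the unit interval.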

Performance Improvement of the Combined Convolutional Coding and Binary CPFSK Modulation

  • 최양호; 백제인; 김재균
    • Journal of the Institute of Electronics Engineers of Korea / Vol. 23, No. 5 / pp. 591-596 / 1986
  • Binary continuous phase frequency shift keying (CPFSK), whose phase is a continuous function of time and whose instantaneous frequency is constant, is a bandwidth-efficient constant-envelope signalling scheme. The transmitted signal is formed by the combined coding of a convolutional encoder and a binary CPFSK modulator, and it is sent through an additive white Gaussian noise (AWGN) channel. If the received signal is detected by a coherent maximum likelihood (ML) receiver, the error probability can be expressed approximately in terms of the minimum Euclidean distance. We propose rate 2/4 codes that improve error performance without increasing the data rate per bandwidth or the receiver complexity. Their minimum Euclidean distances are compared with those of rate 1/2 codes as a function of the modulation index and the observation interval.


Predicting depth value of the future depth-based multivariate record

  • Samaneh Tata; Mohammad Reza Faridrohani
    • Communications for Statistical Applications and Methods / Vol. 30, No. 5 / pp. 453-465 / 2023
  • The prediction problem for univariate records has been discussed by many authors on the basis of record values, but it has not been addressed for multivariate records. Among the various definitions of multivariate records, depth-based records are adopted for the purposes of this paper. By means of the maximum likelihood and conditional median methods, point and interval predictions of the depth values associated with future depth-based multivariate records are made on the basis of the observed ones. Observations drawn from members of the family of elliptical distributions are the main motivation for studying this problem. Finally, the satisfactory performance of the prediction methods is illustrated via simulation studies and a real dataset on drought in the city of Kermanshah.

Discriminative validity of the timed up and go test for community ambulation in persons with chronic stroke

  • An, Seung Heon; Park, Dae-Sung; Lim, Ji Young
    • Physical Therapy Rehabilitation Science / Vol. 6, No. 4 / pp. 176-181 / 2017
  • Objective: The timed up and go (TUG) test is a method used to determine the functional mobility of persons with stroke. Its reliability, validity, responsiveness, fall prediction, and psychometric characteristics concerning ambulation ability have been validated. However, the relationship between TUG performance and community ambulation ability is unclear. The purpose of this study was to investigate whether the TUG performance time can differentiate community ambulation levels (CAL) in persons with chronic stroke. Design: Cross-sectional study. Methods: Eighty-seven stroke patients participated in this study. Based on self-reported difficulties experienced when walking outdoors, the subjects were divided into an independent community ambulation (ICA) group (n=35) and a dependent community ambulation group (n=52). The discriminative validity of the TUG performance time for classifying CAL was assessed using the area under the curve (AUC). A binomial logistic regression model was used to produce the likelihood ratio of the selected TUG cut-off value for distinguishing community ambulation ability. Results: The selected TUG cut-off value was <14.87 seconds (AUC=0.871, 95% confidence interval=0.797-0.945), representing mid-level accuracy. Concerning the likelihood ratio of the selected cut-off, the group with TUG performance times shorter than 14.87 seconds had a 2.889 times higher probability of ICA than those with times of 14.87 seconds or longer (p<0.05). Conclusions: The TUG can be viewed as an assessment tool capable of classifying CAL.
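The ROC machinery behind a cut-off like the one reported above can be sketched directly: AUC via the Mann-Whitney statistic, and a cut-off chosen by Youden's index. The TUG times below are hypothetical draws, not the study's data, and the group means are invented.

```python
import numpy as np

def auc_mann_whitney(pos, neg):
    """AUC as the Mann-Whitney probability that a random positive case
    scores higher than a random negative case (ties count one half)."""
    diff = pos[:, None] - neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

def best_cutoff(pos, neg):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1,
    scanning every observed score; classify as positive when score >= c."""
    best = (None, -1.0, 0.0, 0.0)
    for c in np.unique(np.concatenate([pos, neg])):
        sens = np.mean(pos >= c)
        spec = np.mean(neg < c)
        j = sens + spec - 1
        if j > best[1]:
            best = (c, j, sens, spec)
    return best[0], best[2], best[3]

rng = np.random.default_rng(5)
tug_ica = rng.normal(12.0, 2.0, 35)   # hypothetical times, independent group
tug_dca = rng.normal(19.0, 3.5, 52)   # hypothetical times, dependent group

# shorter times indicate independence, so score with the negated time
auc = auc_mann_whitney(-tug_ica, -tug_dca)
cut, sens, spec = best_cutoff(-tug_ica, -tug_dca)
cutoff_seconds = -cut                 # classify as ICA when TUG time < this
```

The positive likelihood ratio reported in such studies is then sensitivity / (1 - specificity) at the chosen cut-off.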