• Title/Summary/Keyword: Likelihood function


Bandwidth selection for discontinuity point estimation in density (확률밀도함수의 불연속점 추정을 위한 띠폭 선택)

  • Huh, Jib / Journal of the Korean Data and Information Science Society / v.23 no.1 / pp.79-87 / 2012
  • In the case that the probability density function has a discontinuity point, Huh (2002) estimated the location and jump size of the discontinuity point based on the difference between the right and left kernel density estimators obtained with one-sided kernel functions. In this paper, we consider a cross-validation criterion, built from the right and left maximum likelihood cross-validations, for selecting the bandwidth used to estimate the location and jump size of the discontinuity point. The method is motivated by the one-sided cross-validation of Hart and Yi (1998). The finite-sample performance is illustrated by a simulated example.
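
As a rough illustration of the estimator described above, the sketch below contrasts right- and left-sided kernel density estimates on a grid and reads the discontinuity location and jump size off their largest difference. It is a minimal sketch assuming a uniform one-sided kernel and an illustrative grid and bandwidth; the paper's one-sided cross-validation bandwidth choice is not implemented.

```python
import numpy as np

def one_sided_kde_diff(data, grid, h):
    """Right-sided minus left-sided kernel density estimates on a grid,
    using a simple uniform (boxcar) one-sided kernel of bandwidth h."""
    n = len(data)
    diffs = []
    for x in grid:
        right = np.sum((data > x) & (data <= x + h)) / (n * h)   # f_hat_plus(x)
        left = np.sum((data >= x - h) & (data < x)) / (n * h)    # f_hat_minus(x)
        diffs.append(right - left)
    return np.asarray(diffs)

# Illustrative data: a density on (0, 1) with a jump of size 1 at x = 0.5.
rng = np.random.default_rng(0)
u = rng.random(2000)
data = np.where(u < 0.25, rng.uniform(0.0, 0.5, 2000), rng.uniform(0.5, 1.0, 2000))
grid = np.linspace(0.05, 0.95, 181)
d = one_sided_kde_diff(data, grid, h=0.05)
idx = np.argmax(np.abs(d))
print(grid[idx], d[idx])   # estimated location and jump size of the discontinuity
```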

Review of Classification Models for Reliability Distributions from the Perspective of Practical Implementation (실무적 적용 관점에서 신뢰성 분포의 유형화 모형의 고찰)

  • Choi, Sung-Woon / Journal of the Korea Safety Management & Science / v.13 no.1 / pp.195-202 / 2011
  • The study interprets three classification models based on the Bathtub Failure Rate (BTFR), the Extreme Value Distribution (EVD), and the Conjugate Bayesian Distribution (CBD). The classification model based on the BTFR is analyzed through three failure patterns, decreasing, constant, or increasing, which support systematic management strategies for reliability over time. The distribution model based on the BTFR is identified using an individual factor for each of the three corresponding cases. First, when the shape parameter is used, the distribution based on the BTFR is analyzed with the number of components or parts as a factor. When the scale parameter is used, the distribution model based on the BTFR is analyzed with time precision as a factor. Meanwhile, when the location parameter is used, the distribution model based on the BTFR is analyzed with the guarantee time as a factor. The classification model based on the EVD is sorted into long-tailed, medium-tailed, and short-tailed distributions by the length of the right tail, and depends on the asymptotic reliability property, which reflects the skewness and kurtosis of the distribution curve. Furthermore, the classification model based on the CBD relies on the conjugate relations among the prior, likelihood, and posterior functions, which provide dimension reduction and easy tractability in Bayesian posterior updating.
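
The CBD classification above hinges on conjugacy between the prior and the likelihood. As a generic illustration of that idea (not the paper's own model), the sketch below shows the closed-form Beta-Binomial update that makes Bayesian posterior updating tractable.

```python
# Generic conjugate update, shown only to illustrate the conjugacy idea behind
# the CBD model: a Beta(a, b) prior combined with a binomial likelihood
# (k failures observed in n trials) gives a Beta(a + k, b + n - k) posterior.
def beta_binomial_update(a, b, k, n):
    return a + k, b + (n - k)

a_post, b_post = beta_binomial_update(a=1.0, b=1.0, k=3, n=20)
print(a_post, b_post, a_post / (a_post + b_post))   # posterior mean ~ 0.18
```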

Parametric survival model based on the Lévy distribution

  • Valencia-Orozco, Andrea; Tovar-Cuevas, Jose R. / Communications for Statistical Applications and Methods / v.26 no.5 / pp.445-461 / 2019
  • It is possible that data are not always fitted with sufficient precision by the existing distributions; therefore, this article presents a methodology that enables the use of families of asymmetric distributions as alternative probabilistic models for survival analysis with right censoring, different from those usually studied (the Exponential, Gamma, Weibull, and Lognormal distributions). We use a more flexible parametric model in terms of density behavior, assuming that the data can be fitted by a member of the family of stable distributions, which is unconventional in the analysis of survival data but appropriate when extreme values occur with small probabilities that should not be ignored. The methodology includes the determination of the analytical expression of the risk function h(t) of the Lévy distribution, as it is not usually reported in the literature. A simulation was conducted to evaluate the performance of the candidate distribution when modeling survival times, including the estimation of parameters via the maximum likelihood method, the survival function Ŝ(t), and the Kaplan-Meier estimator. The obtained estimates did not exhibit significant changes across different sample sizes and censoring fractions in the sample. To illustrate the usefulness of the proposed methodology, an application to real data on the survival times of patients with colon cancer was considered.
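
Since the article centers on the risk (hazard) function h(t) = f(t)/S(t) of the Lévy distribution, the sketch below computes it numerically with scipy and fits the scale parameter by maximum likelihood on uncensored, illustrative data. The analytical expression derived in the article and its handling of right censoring are not reproduced here.

```python
import numpy as np
from scipy.stats import levy

def levy_hazard(t, loc=0.0, scale=1.0):
    """Numerical hazard h(t) = f(t) / S(t) of the Lévy distribution."""
    return levy.pdf(t, loc=loc, scale=scale) / levy.sf(t, loc=loc, scale=scale)

print(levy_hazard(np.array([0.5, 1.0, 2.0, 5.0]), scale=1.0))

# Maximum likelihood fit of the scale (location fixed at 0); the fitted survival
# function is then S_hat(t) = levy.sf(t, 0, scale_hat).
rng = np.random.default_rng(1)
sample = levy.rvs(loc=0.0, scale=2.0, size=500, random_state=rng)
loc_hat, scale_hat = levy.fit(sample, floc=0.0)
print(scale_hat)
```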

A study on the uncertainty analysis of LENS-GRM using formal and informal likelihood measure (정형·비정형 우도를 이용한 LENS-GRM 불확실성 해석)

  • Lee, Sang Hyup; Choo, Inn Kyo; Yu, Yeong Uk; Jung, Younghun / Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.317-317 / 2020
  • Water-related disasters have many contributing causes, such as insufficient water resource infrastructure and inadequate management, but the occurrence and magnitude of rainfall are among the most fundamental factors. Accurate estimation of rainfall amounts and prediction of rainfall timing make it possible to prevent damage from water-related disasters and to respond quickly. However, rainfall forecasting involves considerable uncertainty, so it is necessary to understand and reduce that uncertainty. Recently, rainfall forecast products have improved steadily in step with advances in computing performance, and when they are applied to rainfall-runoff models, the accuracy of runoff prediction can be expected to improve correspondingly. A runoff model driven by low-reliability input data, however, will carry substantial uncertainty. Therefore, this study examined the applicability of LENS (Limited area ENsemble prediction System) rainfall ensemble forecasts for the Wicheon basin and applied them to the grid-based rainfall-runoff model GRM (Grid based Rainfall-runoff Model) to evaluate the uncertainty of runoff prediction. In addition, both rainfall and runoff prediction involve numerous parameters, so the final prediction can span an even wider range of uncertainty. Accordingly, LENS data construction and calibration of the GRM parameters were carried out with Python3-based code, with 2,000 runs performed in each of two rounds, to constrain the uncertainty range associated with hydrological and geomorphological factors. The parameters were calibrated by identifying behavioral models according to an informal likelihood, the NSE, and a formal likelihood, the lognormal (log-likelihood) function. The uncertainty analysis then used thresholds for the formal and informal likelihoods taken from previous studies, which is expected to help reduce the uncertainty arising from the user's choice of threshold range for selecting behavioral models.
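
The two likelihood measures named in the abstract can be written compactly. The sketch below is a minimal, generic version of the NSE (informal likelihood), a normal-error log-likelihood (formal likelihood), and a GLUE-style behavioral screen with an illustrative threshold; the LENS/GRM interface and the thresholds actually used in the study are not reproduced.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, used here as the informal likelihood."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def normal_loglik(obs, sim):
    """Formal likelihood: log-likelihood of a normal error model with the
    error variance estimated from the residuals."""
    res = np.asarray(obs, float) - np.asarray(sim, float)
    sigma2 = np.mean(res ** 2)
    return -0.5 * len(res) * (np.log(2.0 * np.pi * sigma2) + 1.0)

def is_behavioral(obs, sim, nse_threshold=0.5):
    """GLUE-style screen of a parameter set against an illustrative NSE threshold."""
    return nse(obs, sim) >= nse_threshold

# Illustrative observed and simulated discharge series.
obs = np.array([2.0, 3.5, 10.0, 7.5, 4.0, 3.0])
sim = np.array([2.2, 3.1, 9.0, 8.0, 4.5, 2.8])
print(nse(obs, sim), normal_loglik(obs, sim), is_behavioral(obs, sim))
```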

Bayesian Estimation of the Reliability Function of the Burr Type XII Model under Asymmetric Loss Function

  • Kim, Chan-Soo / Communications for Statistical Applications and Methods / v.14 no.2 / pp.389-399 / 2007
  • In this paper, Bayes estimates for the parameters k and c and the reliability function of the Burr type XII model, based on type II censored samples, are obtained under asymmetric loss functions, viz. the LINEX and SQUAREX loss functions. An approximation based on the Laplace approximation method (Tierney and Kadane, 1986) is used to obtain the Bayes estimators of the parameters and the reliability function. Monte Carlo simulations are used to compare the Bayes estimators under the squared error, LINEX, and SQUAREX loss functions with the maximum likelihood estimators of the parameters and the reliability function.
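
Under the LINEX loss L(Δ) = exp(aΔ) − aΔ − 1 with Δ = θ̂ − θ, the Bayes estimator is θ̂ = −(1/a) log E[exp(−aθ) | data]. The sketch below evaluates that posterior expectation by plain Monte Carlo on illustrative posterior draws; the paper instead uses the Tierney-Kadane Laplace approximation for the Burr XII posterior, and the SQUAREX case is not shown.

```python
import numpy as np

def linex_bayes_estimate(posterior_draws, a):
    """Bayes estimate under LINEX loss, -(1/a) * log E[exp(-a*theta) | data],
    with the posterior expectation approximated by Monte Carlo."""
    draws = np.asarray(posterior_draws, float)
    return -np.log(np.mean(np.exp(-a * draws))) / a

# Illustrative posterior draws for a positive parameter (not the Burr XII posterior).
rng = np.random.default_rng(2)
draws = rng.gamma(shape=5.0, scale=0.4, size=10_000)
print(draws.mean())                        # Bayes estimate under squared error loss
print(linex_bayes_estimate(draws, a=1.0))  # LINEX estimate, pulled downward when a > 0
```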

An Empirical Characteristic Function Approach to Selecting a Transformation to Normality

  • Yeo, In-Kwon; Johnson, Richard A.; Deng, XinWei / Communications for Statistical Applications and Methods / v.21 no.3 / pp.213-224 / 2014
  • In this paper, we study the problem of transforming data to normality. We propose to estimate the transformation parameter by minimizing a weighted squared distance between the empirical characteristic function of the transformed data and the characteristic function of the normal distribution. Our approach also allows for other symmetric target characteristic functions. Asymptotics are established for a random sample selected from an unknown distribution. The proofs show that the weight function t^(-2) needs to be modified to have thinner tails. We also propose a method to compute the influence function for M-equations taking the form of U-statistics. The influence function calculations and a small Monte Carlo simulation show that our estimates are less sensitive to a few outliers than the maximum likelihood estimates.
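
The estimating criterion can be sketched directly: choose the transformation parameter minimizing a weighted squared distance between the empirical characteristic function of the standardized transformed data and the N(0,1) characteristic function exp(−t²/2). In the sketch below, the Yeo-Johnson family, the Gaussian weight, and the grid approximation of the integral are illustrative assumptions rather than the paper's exact choices (the paper argues that the t^(-2) weight must be thinned in the tails).

```python
import numpy as np
from scipy.stats import yeojohnson
from scipy.optimize import minimize_scalar

def ecf_distance(lmbda, x, t_grid, weights):
    """Weighted squared distance between the empirical characteristic function of
    the standardized transformed data and the N(0,1) characteristic function."""
    y = yeojohnson(x, lmbda=lmbda)                 # illustrative transformation family
    z = (y - y.mean()) / y.std(ddof=1)
    ecf = np.exp(1j * np.outer(t_grid, z)).mean(axis=1)
    target = np.exp(-0.5 * t_grid ** 2)
    return np.sum(weights * np.abs(ecf - target) ** 2)

rng = np.random.default_rng(3)
x = rng.lognormal(mean=0.0, sigma=0.7, size=300)   # skewed sample
t_grid = np.linspace(-3.0, 3.0, 61)
weights = np.exp(-t_grid ** 2)                     # illustrative thin-tailed weight
res = minimize_scalar(ecf_distance, bounds=(-2.0, 2.0), method="bounded",
                      args=(x, t_grid, weights))
print(res.x)   # estimated transformation parameter
```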

A Study for NHPP software Reliability Growth Model based on polynomial hazard function (다항 위험함수에 근거한 NHPP 소프트웨어 신뢰성장모형에 관한 연구)

  • Kim, Hee Cheul / Journal of Korea Society of Digital Industry and Information Management / v.7 no.4 / pp.7-14 / 2011
  • Infinite-failure NHPP models presented in the literature exhibit either a constant, monotonically increasing, or monotonically decreasing failure occurrence rate per fault (hazard function). The infinite-failure non-homogeneous Poisson process is a model that reflects the possibility of introducing new faults when correcting or modifying the software. In this paper, a polynomial hazard function is proposed, which can be applied efficiently to software reliability. The parameters are estimated using the maximum likelihood method together with the bisection method. Model selection based on the mean squared error and the coefficient of determination was employed to identify an efficient model. In a numerical example, the log-power time model, an existing model in this area, and the polynomial hazard function model were compared using failure interval times. Because the polynomial hazard function model is more efficient in terms of reliability, it was confirmed that the model can be used in this area as an alternative to the existing model.
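
A minimal sketch of the likelihood machinery for an NHPP with a polynomial intensity: for failure times t_1 < … < t_n observed up to time T, the log-likelihood is Σ log λ(t_i) − m(T), with m(t) the integral of λ. The quadratic form of the polynomial, the illustrative failure times, and the use of a generic numerical optimizer (rather than the paper's bisection method) are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def poly_intensity(t, coef):
    """Polynomial intensity lambda(t) = c0 + c1*t + c2*t**2 (illustrative degree)."""
    return coef[0] + coef[1] * t + coef[2] * t ** 2

def poly_mean_value(t, coef):
    """m(t) = integral of lambda(u) du from 0 to t for the polynomial above."""
    return coef[0] * t + coef[1] * t ** 2 / 2.0 + coef[2] * t ** 3 / 3.0

def neg_loglik(coef, times, T):
    """Negative NHPP log-likelihood: -(sum_i log lambda(t_i) - m(T))."""
    lam = poly_intensity(times, coef)
    if np.any(lam <= 0.0):
        return np.inf                      # keep the intensity positive
    return -(np.sum(np.log(lam)) - poly_mean_value(T, coef))

# Illustrative failure times (hours) observed up to T = 100; not data from the paper.
times = np.array([4.0, 9.0, 21.0, 30.0, 44.0, 58.0, 75.0, 97.0])
fit = minimize(neg_loglik, x0=np.array([0.1, 1e-3, 1e-5]),
               args=(times, 100.0), method="Nelder-Mead")
print(fit.x)   # maximum likelihood estimates of the polynomial coefficients
```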

Estimations of the skew parameter in a skewed double power function distribution

  • Kang, Jun-Ho; Lee, Chang-Soo / Journal of the Korean Data and Information Science Society / v.24 no.4 / pp.901-909 / 2013
  • A skewed double power function distribution is defined from a double power function distribution. We evaluate the coefficient of skewness of the skewed double power function distribution. We obtain an approximate maximum likelihood estimator (MLE) and a moment estimator (MME) of the skew parameter in the skewed double power function distribution, and compare the simulated mean squared errors of these estimators. We also compare the simulated MSEs of two proposed reliability estimators in two independent skewed double power function distributions with different skew parameters.
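
The simulated-MSE comparison can be organized as in the sketch below. Because the skewed double power function distribution itself is not defined in the abstract, scipy's skew-normal is used purely as a stand-in to show the harness: draw samples at a known skew parameter, compute the MLE numerically and a moment estimator by inverting the skewness formula, and average the squared errors.

```python
import numpy as np
from scipy.stats import skewnorm, skew
from scipy.optimize import brentq

def mme_skewnorm(x):
    """Moment estimator of the skew-normal shape: invert the skewness formula
    at the sample skewness (clipped, since skew-normal skewness is bounded)."""
    g1 = np.clip(skew(x), -0.95, 0.95)
    def eq(delta):
        return (4.0 - np.pi) / 2.0 * (delta * np.sqrt(2.0 / np.pi)) ** 3 \
               / (1.0 - 2.0 * delta ** 2 / np.pi) ** 1.5 - g1
    delta = brentq(eq, -0.999, 0.999)
    return delta / np.sqrt(1.0 - delta ** 2)

def simulated_mse(true_a, n, reps, rng):
    """Monte Carlo MSE of the MLE and the moment estimator of the skew parameter."""
    mle_err, mme_err = [], []
    for _ in range(reps):
        x = skewnorm.rvs(true_a, size=n, random_state=rng)
        a_mle, _, _ = skewnorm.fit(x, floc=0, fscale=1)   # location/scale held fixed
        mle_err.append((a_mle - true_a) ** 2)
        mme_err.append((mme_skewnorm(x) - true_a) ** 2)
    return np.mean(mle_err), np.mean(mme_err)

rng = np.random.default_rng(4)
print(simulated_mse(true_a=2.0, n=200, reps=200, rng=rng))
```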

A Study on Blind Nonlinear Channel Equalization using Modified Fuzzy C-Means (개선된 퍼지 클러스터 알고리즘을 이용한 블라인드 비선형 채널등화에 관한 연구)

  • Park, Sung-Dae; Han, Soo-Whan / Journal of Korea Multimedia Society / v.10 no.10 / pp.1284-1294 / 2007
  • In this paper, blind nonlinear channel equalization is implemented using a Modified Fuzzy C-Means (MFCM) algorithm. The proposed MFCM searches for the optimal channel output states of a nonlinear channel from the received symbols, based on a Bayesian likelihood fitness function instead of a conventional Euclidean distance measure. Next, the desired channel states of the nonlinear channel are constructed from the elements of the estimated channel output states and placed at the centers of a Radial Basis Function (RBF) equalizer to reconstruct the transmitted symbols. In the simulations, binary signals are generated at random with Gaussian noise. The performance of the proposed method is compared with that of a hybrid genetic algorithm (a GA merged with simulated annealing (SA): GASA), and relatively high accuracy and fast search speed are achieved.
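
The idea of ranking candidate channel output states by a likelihood rather than a Euclidean distance can be sketched as follows: score a candidate set of output states by the log-likelihood of the received symbols under an equal-weight Gaussian mixture centered at those states. The mixture form, the toy memoryless nonlinearity, and the noise variance below are assumptions for illustration; the exact fitness used in the MFCM and the RBF equalizer stage are not reproduced.

```python
import numpy as np

def bayesian_likelihood_fitness(received, centers, sigma2):
    """Log-likelihood of the received symbols under an equal-weight Gaussian
    mixture centered at the candidate channel output states (assumed form)."""
    y = np.asarray(received, float)[:, None]      # shape (N, 1)
    c = np.asarray(centers, float)[None, :]       # shape (1, K)
    comp = np.exp(-(y - c) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
    return np.sum(np.log(comp.mean(axis=1)))

# Toy memoryless nonlinear channel driven by random binary (BPSK) symbols.
rng = np.random.default_rng(5)
s = rng.choice([-1.0, 1.0], size=1000)
received = s + 0.2 * s ** 3 + rng.normal(0.0, 0.3, size=1000)   # outputs near +/-1.2
print(bayesian_likelihood_fitness(received, np.array([-1.2, 1.2]), 0.09))  # good states
print(bayesian_likelihood_fitness(received, np.array([-0.4, 0.4]), 0.09))  # lower fitness
```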

Application of Conjugate Distribution using Deductive and Inductive Reasoning in Quality and Reliability Tools (품질 및 신뢰성 기법에서 연역 및 귀납 추론에 의한 Conjugate 분포의 적용)

  • Choi, Sung-Woon / Proceedings of the Safety Management and Science Conference / 2010.11a / pp.27-33 / 2010
  • The paper proposes guidelines for the application and interpretation of quality and reliability methodologies that use deductive or inductive reasoning. The research also reviews Bayesian quality and reliability tools in terms of the deductive prior function and the inductive posterior function.
