• Title/Summary/Keyword: Maximum likelihood model


A Study on Estimation of Parameters in Bivariate Exponential Distribution

  • Kim, Jae Joo; Park, Byung-Gu
    • Journal of Korean Society for Quality Management / v.15 no.1 / pp.20-32 / 1987
  • Estimation of the parameters of the bivariate exponential (BVE) model of Marshall and Olkin (1967) is investigated for the cases of complete sampling and time-truncated parallel sampling. Maximum likelihood estimators, method of moments estimators, and Bayes estimators of the BVE parameters are obtained and compared with each other. A Monte Carlo simulation study for moderate-sized samples indicates that the Bayes estimators perform better than the maximum likelihood and method of moments estimators.

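The Marshall-Olkin BVE referred to above has survival function S(x, y) = exp(-λ₁x - λ₂y - λ₁₂ max(x, y)) and places positive probability on the diagonal x = y. As a hedged illustration of the complete-sampling case only (the time-truncated parallel sampling, moment, and Bayes estimators of the paper are not reproduced), the sketch below simulates Marshall-Olkin data and maximizes the standard three-part log-likelihood numerically; all names and starting values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_mo_bve(n, lam1, lam2, lam12):
    """Marshall-Olkin BVE via independent shocks: X = min(U1, U12), Y = min(U2, U12)."""
    u1 = rng.exponential(1.0 / lam1, n)
    u2 = rng.exponential(1.0 / lam2, n)
    u12 = rng.exponential(1.0 / lam12, n)
    return np.minimum(u1, u12), np.minimum(u2, u12)

def neg_loglik(theta, x, y):
    """Complete-sample log-likelihood with the three parts x < y, x > y and x = y."""
    l1, l2, l12 = theta
    if l1 <= 0 or l2 <= 0 or l12 <= 0:
        return np.inf
    lt, gt, eq = x < y, x > y, x == y
    ll = np.sum(np.log(l1 * (l2 + l12)) - l1 * x[lt] - (l2 + l12) * y[lt])
    ll += np.sum(np.log(l2 * (l1 + l12)) - (l1 + l12) * x[gt] - l2 * y[gt])
    ll += np.sum(np.log(l12) - (l1 + l2 + l12) * x[eq])
    return -ll

x, y = simulate_mo_bve(500, lam1=1.0, lam2=0.5, lam12=0.3)
fit = minimize(neg_loglik, x0=[1.0, 1.0, 1.0], args=(x, y), method="Nelder-Mead")
print("MLE of (lambda1, lambda2, lambda12):", fit.x)
```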

Estimation of the Generalized Rayleigh Distribution Parameters

  • Al-khedhairi, A.; Sarhan, Ammar M.; Tadj, L.
    • International Journal of Reliability and Applications / v.8 no.2 / pp.199-210 / 2007
  • This paper presents estimation of the generalized Rayleigh distribution based on grouped and censored data. The maximum likelihood method is used to derive point estimates and asymptotic confidence intervals for the unknown parameters. The results obtained in this paper generalize some of those available in the literature. Finally, we test whether the current model fits a set of real data better than other models.

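The generalized Rayleigh (Burr Type X) distribution has cdf F(x; α, λ) = (1 − exp(−(λx)²))^α for x > 0. With grouped and censored data the log-likelihood is built from interval probabilities; the sketch below maximizes it numerically for hypothetical inspection times and counts (not the data analysed in the paper), as a rough illustration of the approach.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical grouped data: inspection times, failures per interval, and survivors.
t = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # interval endpoints (0, t1], (t1, t2], ...
counts = np.array([12, 25, 30, 18, 9])    # failures observed in each interval
survivors = 6                             # units still running after the last inspection

def cdf(x, alpha, lam):
    """Generalized Rayleigh (Burr Type X) cdf."""
    return (1.0 - np.exp(-(lam * x) ** 2)) ** alpha

def neg_loglik(theta):
    alpha, lam = theta
    if alpha <= 0 or lam <= 0:
        return np.inf
    F = cdf(t, alpha, lam)
    probs = np.diff(np.concatenate(([0.0], F)))          # P(t_{j-1} < X <= t_j)
    ll = np.sum(counts * np.log(probs)) + survivors * np.log(1.0 - F[-1])
    return -ll

fit = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
print("MLE of (alpha, lambda):", fit.x)
```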

Estimation of Freund model under censored data

  • Cho, Kil-Ho
    • Journal of the Korean Data and Information Science Society / v.23 no.2 / pp.403-409 / 2012
  • We consider a life-testing experiment in which several two-component shared parallel systems are put on test and the test is terminated at a predesigned experiment time. In this paper, the maximum likelihood estimators of the parameters of Freund's bivariate exponential distribution under system-level life testing are obtained. Results of comparative studies based on Monte Carlo simulation are presented.
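
For reference, Freund's (1961) bivariate exponential model underlying the likelihood above assumes the two components start with failure rates α and β, and that the surviving component's rate changes to α′ or β′ once the other component has failed. In the standard parameterization the joint density is

```latex
f(x,y)=
\begin{cases}
\alpha\beta'\exp\!\bigl[-\beta'y-(\alpha+\beta-\beta')x\bigr], & 0<x<y,\\[4pt]
\beta\alpha'\exp\!\bigl[-\alpha'x-(\alpha+\beta-\alpha')y\bigr], & 0<y<x,
\end{cases}
```

and under time-truncated (Type-I censored) system-level testing, systems still running at the truncation time contribute survival probabilities rather than density terms to the likelihood.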

Analysis Approaches to Data of Both Age and Usage Attributes (시간과 사용량의 속성을 지닌 데이터의 분석방안)

  • Jo, Jin-Nam; Baik, Jai-Wook
    • Journal of Korean Society for Quality Management / v.35 no.1 / pp.136-141 / 2007
  • For many products, failures depend on both age and usage; in this case, failures are random points in a two-dimensional plane whose axes represent age and usage. Models play an important role in decision-making. In this research, an accelerated failure time (AFT) model is proposed to deal with such two-dimensional data, and its parameters are estimated by maximum likelihood.
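
The abstract does not spell out the model form. One common way to handle two-dimensional (age, usage) failure data within an AFT framework, offered here only as a hypothetical illustration, is to let the usage rate act as a covariate that accelerates ageing in calendar time:

```latex
\log T_i = \beta_0 + \beta_1 \log r_i + \sigma\,\varepsilon_i,
\qquad r_i = \frac{\text{usage}_i}{\text{age}_i},
```

where T_i is the failure age, r_i the usage rate, and ε_i a standard extreme-value error (so T_i is Weibull given r_i); the coefficients and scale are then estimated by maximum likelihood from possibly censored observations.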

Estimating multiplicative competitive interaction model using kernel machine technique

  • Shim, Joo-Yong; Kim, Mal-Suk; Park, Hye-Jung
    • Journal of the Korean Data and Information Science Society / v.23 no.4 / pp.825-832 / 2012
  • We propose a novel way of forecasting the market shares of several brands simultaneously in a multiplicative competitive interaction model, using kernel regression combined with the kernel machine techniques employed in support vector machines and other machine learning methods. Traditionally, estimation of the market share attraction model is performed via a maximum likelihood procedure under the assumption that the data are drawn from a normal distribution. The proposed method is shown to be a good candidate for forecasting with the market share attraction model when normality is not assumed. We apply it to forecast the market shares of four Korean car brands simultaneously, and it performs better than the maximum likelihood estimation procedure.
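
For context, the multiplicative competitive interaction (market share attraction) model mentioned above expresses brand i's share through its attraction. A standard formulation is

```latex
A_{it} = e^{\beta_{0i}}\prod_{k=1}^{K} x_{kit}^{\beta_{k}},
\qquad
m_{it} = \frac{A_{it}}{\sum_{j=1}^{I} A_{jt}},
```

where x_{kit} are marketing variables and m_{it} the market shares; after a log-centering transformation the model is linear in the parameters, which is where the normal-likelihood estimation comes in, whereas the paper estimates this relationship with kernel machines instead.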

Maximum Likelihood Estimation of Continuous-time Diffusion Models for Exchange Rates

  • Choi, Seungmoon; Lee, Jaebum
    • East Asian Economic Review / v.24 no.1 / pp.61-87 / 2020
  • Five diffusion models are estimated using three different foreign exchange rates to find an appropriate model for each. Daily spot exchange rates expressed as the prices of 1 euro, 1 British pound, and 100 Japanese yen in US dollars, denoted by USD/EUR, USD/GBP, and USD/100JPY respectively, are used. The maximum likelihood estimation method is implemented after deriving an approximate log-transition density function (log-TDF) of the diffusion processes, because the true log-TDF is unknown. Of the five models, the most general model is the best fit for the USD/GBP and USD/100JPY exchange rates, but not for USD/EUR. Although we could not find any evidence of mean reversion for the USD/EUR exchange rate, the USD/GBP and USD/100JPY exchange rates show mean-reverting behavior. Interestingly, the volatility function of the USD/EUR exchange rate is increasing in the exchange rate, while the volatility functions of the USD/GBP and USD/100JPY exchange rates have a U-shape. Our results reveal that care has to be taken when determining a diffusion model for an exchange rate. They also imply that we may have to use a more general diffusion model than those proposed in the literature when developing economic theories for the behavior of exchange rates and pricing foreign currency options or derivatives.
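
The paper derives closed-form approximations to the log-transition density. As a much simpler, hedged illustration of the same maximum likelihood idea, the sketch below fits a mean-reverting CKLS-type diffusion dX_t = κ(μ − X_t)dt + σX_t^γ dW_t to a daily series using a first-order Euler (Gaussian) approximation of the transition density; the synthetic data and names are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

dt = 1.0 / 252.0  # daily observation interval in years

# Synthetic stand-in for a daily exchange-rate series.
rng = np.random.default_rng(1)
x = np.empty(1000)
x[0] = 1.20
for i in range(1, x.size):
    x[i] = x[i-1] + 0.5 * (1.25 - x[i-1]) * dt + 0.10 * x[i-1] * np.sqrt(dt) * rng.standard_normal()

def neg_loglik(theta):
    """Euler (Gaussian) approximation to the transition density of
    dX = kappa*(mu - X) dt + sigma * X**gamma dW."""
    kappa, mu, sigma, gamma = theta
    if sigma <= 0:
        return np.inf
    x0, x1 = x[:-1], x[1:]
    mean = x0 + kappa * (mu - x0) * dt
    var = (sigma * x0 ** gamma) ** 2 * dt
    return 0.5 * np.sum(np.log(2.0 * np.pi * var) + (x1 - mean) ** 2 / var)

fit = minimize(neg_loglik, x0=[0.5, float(np.mean(x)), 0.1, 1.0], method="Nelder-Mead")
print("MLE of (kappa, mu, sigma, gamma):", fit.x)
```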

Improvement of Basis-Screening-Based Dynamic Kriging Model Using Penalized Maximum Likelihood Estimation (페널티 적용 최대 우도 평가를 통한 기저 스크리닝 기반 크리깅 모델 개선)

  • Min-Geun Kim; Jaeseung Kim; Jeongwoo Han; Geun-Ho Lee
    • Journal of the Computational Structural Engineering Institute of Korea / v.36 no.6 / pp.391-398 / 2023
  • In this paper, a penalized maximum likelihood estimation (PMLE) method that applies a penalty to increase the accuracy of a basis-screening-based Kriging model (BSKM) is introduced. The maximum order and the set of basis functions used in the BSKM are determined according to their importance, with the cross-validation error (CVE) of the basis functions employed as the indicator of importance. When constructing the Kriging model (KM), the maximum order of basis functions is fixed, the importance of each basis function up to that order is evaluated, and the optimal set of basis functions is then determined by adding basis functions one by one in order of importance until the CVE of the KM is minimized. In this process the KM must be generated repeatedly, and at the same time the hyper-parameters representing correlations between data points must be calculated by maximum likelihood estimation. Because the optimal set of basis functions depends on these hyper-parameters, they have a significant impact on the accuracy of the KM. The PMLE method is applied to calculate the hyper-parameters accurately, and it was confirmed on the Branin-Hoo problem that the accuracy of a BSKM can be improved in this way.
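
The exact penalty used in the paper is not reproduced here. Schematically, for an ordinary Kriging model with correlation matrix R(θ), PMLE replaces the usual concentrated log-likelihood for the hyper-parameters θ with a penalized version

```latex
\ell_{p}(\boldsymbol{\theta})
= -\frac{n}{2}\ln\hat{\sigma}^{2}(\boldsymbol{\theta})
  -\frac{1}{2}\ln\bigl|\mathbf{R}(\boldsymbol{\theta})\bigr|
  - p_{\lambda}(\boldsymbol{\theta}),
```

where σ̂²(θ) is the profiled process variance and p_λ is a penalty that discourages ill-conditioned or overfitted hyper-parameter values; the hyper-parameters are chosen to maximize ℓ_p.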

Target Detection Performance in a Clutter Environment Based on the Generalized Likelihood Ratio Test (클러터 환경에서의 GLRT 기반 표적 탐지성능)

  • Suh, Jin-Bae; Chun, Joo-Hwan; Jung, Ji-Hyun; Kim, Jin-Uk
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.30 no.5 / pp.365-372 / 2019
  • We propose a method to estimate unknown parameters (e.g., target amplitude and clutter parameters) in the generalized likelihood ratio test (GLRT) using maximum likelihood estimation and the Newton-Raphson method. When detecting targets in a clutter environment, it is important to establish a clutter model similar to the actual environment. Such correlated clutter models can be generated using spherically invariant random vectors. We obtain the GLRT for the generated clutter model and check its detection probability using the estimated parameters.
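
In generic form, the GLRT used above replaces the unknown parameters under each hypothesis by their maximum likelihood estimates and compares the resulting ratio with a threshold γ,

```latex
\Lambda(\mathbf{x})
= \frac{\max_{\boldsymbol{\theta}_{1}} p(\mathbf{x}\mid H_{1},\boldsymbol{\theta}_{1})}
       {\max_{\boldsymbol{\theta}_{0}} p(\mathbf{x}\mid H_{0},\boldsymbol{\theta}_{0})}
\;\underset{H_{0}}{\overset{H_{1}}{\gtrless}}\;\gamma,
\qquad
\boldsymbol{\theta}^{(k+1)}
= \boldsymbol{\theta}^{(k)}
- \bigl[\nabla^{2}\ell(\boldsymbol{\theta}^{(k)})\bigr]^{-1}\nabla\ell(\boldsymbol{\theta}^{(k)}),
```

where the Newton-Raphson iteration on the log-likelihood ℓ performs the inner maximizations (e.g., over the target amplitude and clutter parameters) when no closed form is available.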

Subsidiary Maximum Likelihood Iterative Decoding Based on Extrinsic Information

  • Yang, Fengfan; Le-Ngoc, Tho
    • Journal of Communications and Networks / v.9 no.1 / pp.1-10 / 2007
  • This paper proposes a multimodal generalized Gaussian distribution (MGGD) to effectively model the varying statistical properties of the extrinsic information. A subsidiary maximum likelihood decoding (MLD) algorithm is subsequently developed to dynamically select the most suitable MGGD parameters to be used in the component maximum a posteriori (MAP) decoders at each decoding iteration, deriving more reliable metrics for performance enhancement. Simulation results show that, for a wide range of block lengths, the proposed approach can enhance the overall turbo decoding performance for both parallel and serially concatenated codes in additive white Gaussian noise (AWGN), Rician, and Rayleigh fading channels.
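
The building block of the MGGD above is the generalized Gaussian density; in one common parameterization

```latex
p(x;\mu,\alpha,\beta)
= \frac{\beta}{2\alpha\,\Gamma(1/\beta)}
  \exp\!\left[-\left(\frac{|x-\mu|}{\alpha}\right)^{\beta}\right],
```

which reduces to the Laplacian for β = 1 and the Gaussian for β = 2. A multimodal variant can be formed, for example, as a mixture of such components; the subsidiary MLD step then selects the parameter values that best fit the observed extrinsic information at each iteration. (The paper's exact construction of the MGGD may differ from this sketch.)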

Application of the Weibull-Poisson long-term survival model

  • Vigas, Valdemiro Piedade; Mazucheli, Josmar; Louzada, Francisco
    • Communications for Statistical Applications and Methods / v.24 no.4 / pp.325-337 / 2017
  • In this paper, we propose a new four-parameter long-term lifetime distribution for a competing-risks scenario with decreasing, increasing, and unimodal hazard rate functions, namely the Weibull-Poisson long-term distribution. This distribution arises from a scenario of competing latent risks, in which the lifetime associated with each particular risk is not observable and only the minimum lifetime over all risks is observed, in a long-term (cure-fraction) context. However, it can also be used in any other situation as long as it fits the data well. The exponential-Poisson long-term distribution and the Weibull long-term distribution are obtained as particular cases of the proposed distribution. The properties of the proposed distribution are discussed, including its probability density, survival, and hazard functions and explicit algebraic formulas for its order statistics. Assuming censored data, we consider the maximum likelihood approach for parameter estimation. For different parameter settings, sample sizes, and censoring percentages, simulation studies were performed to study the mean squared error of the maximum likelihood estimates and to compare the performance of the proposed model with its particular cases. The Akaike information criterion, the Bayesian information criterion, and the likelihood ratio test were used for model selection. The relevance of the approach is illustrated on two real datasets, where the new model is compared with its particular cases, showing its potential and competitiveness.
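
In the mixture-cure formulation commonly used for long-term survival data (the paper's exact parameterization may differ), a fraction p of the population never fails, and with right-censored observations (δ_i = 1 for an observed failure) the log-likelihood maximized for the parameters has the standard censored-data form

```latex
S_{\mathrm{pop}}(t) = p + (1-p)\,S(t),
\qquad
\ell(\boldsymbol{\vartheta})
= \sum_{i=1}^{n}\Bigl[\delta_{i}\log f_{\mathrm{pop}}(t_{i})
+ (1-\delta_{i})\log S_{\mathrm{pop}}(t_{i})\Bigr],
```

where f_pop(t) = −dS_pop(t)/dt and, for the Weibull-Poisson case, S(t) is the survival function of the minimum of the latent Weibull risks.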