• Title/Summary/Keywords: CSAM

Search results: 554 articles (processing time: 0.017 seconds)

Classical and Bayesian methods of estimation for power Lindley distribution with application to waiting time data

  • Sharma, Vikas Kumar; Singh, Sanjay Kumar; Singh, Umesh
    • Communications for Statistical Applications and Methods, Vol. 24, No. 3, pp. 193-209, 2017
  • The power Lindley distribution and some of its properties are considered in this article. Maximum likelihood, least squares, maximum product of spacings, and Bayes estimators are proposed to estimate all the unknown parameters of the power Lindley distribution. Lindley's approximation and Markov chain Monte Carlo techniques are utilized for the Bayesian calculations since the posterior distribution cannot be reduced to a standard distribution. The performance of the proposed estimators is compared on simulated samples. The waiting times of research articles to be accepted in statistical journals are fitted to the power Lindley distribution along with other competing distributions. The chi-square statistic, Kolmogorov-Smirnov statistic, Akaike information criterion, and Bayesian information criterion are used to assess goodness of fit. The power Lindley distribution was found to give a better fit for the data than the other distributions.
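To make the maximum likelihood step above concrete, here is a minimal sketch, not the authors' implementation: it codes the power Lindley log-density f(x) = αβ²/(β+1) · x^(α−1)(1+x^α)e^(−βx^α) and minimizes the negative log-likelihood with SciPy. The gamma sample is a synthetic stand-in for real waiting-time data.

```python
import numpy as np
from scipy.optimize import minimize

def power_lindley_nll(params, x):
    """Negative log-likelihood of the power Lindley distribution
    f(x) = a*b^2/(b+1) * x^(a-1) * (1 + x^a) * exp(-b*x^a)."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf  # keep the optimizer inside the parameter space
    ll = (np.log(a) + 2*np.log(b) - np.log(b + 1)
          + (a - 1)*np.log(x) + np.log1p(x**a) - b*x**a)
    return -ll.sum()

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=0.5, size=200)  # stand-in "waiting time" data

res = minimize(power_lindley_nll, x0=[1.0, 1.0], args=(x,),
               method="Nelder-Mead")
a_hat, b_hat = res.x
```

The least squares, product-of-spacings, and Bayes estimators in the abstract would replace the objective function while keeping the same optimization scaffold.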

A cautionary note on the use of Cook's distance

  • Kim, Myung Geun
    • Communications for Statistical Applications and Methods, Vol. 24, No. 3, pp. 317-324, 2017
  • An influence measure known as Cook's distance has been used to judge the influence of each observation on the least squares estimate of the parameter vector. The distance does not reflect the distributional property of the change in the least squares estimator of the regression coefficients due to case deletions: the change has a covariance matrix of rank one, and thus its support is a line in multidimensional Euclidean space. As a result, Cook's distance may fail to correctly identify influential observations, and we study some reasons for this failure. Three illustrative examples are provided in which Cook's distance either fails to give the right information about influential observations or correctly identifies the most influential observation, and we examine why it fails or succeeds in each case.
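For orientation, Cook's distance can be computed directly from the hat matrix. The block below is a toy sketch (simulated data, not from the paper) with one planted outlier:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
y[0] += 8.0  # plant a gross outlier at observation 0

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat (projection) matrix
e = y - H @ y                          # least squares residuals
h = np.diag(H)                         # leverages
s2 = e @ e / (n - p)                   # residual mean square
cooks_d = e**2 / (p * s2) * h / (1 - h)**2
```

Note that Cook's distance condenses each case deletion into a single scalar; as the abstract points out, this summary does not reflect the rank-one covariance structure of the deletion effect.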

Bayesian analysis of random partition models with Laplace distribution

  • Kyung, Minjung
    • Communications for Statistical Applications and Methods, Vol. 24, No. 5, pp. 457-480, 2017
  • We develop a random partition procedure based on a Dirichlet process prior with a Laplace distribution. Gibbs sampling of a Laplace mixture of linear mixed regressions with a Dirichlet process is implemented as a random partition model when the number of clusters is unknown. Unlike its counterparts, our approach provides simultaneous partitioning and parameter estimation along with the computation of classification probabilities. A full Gibbs-sampling algorithm is developed for efficient Markov chain Monte Carlo posterior computation. The proposed method is illustrated with simulated data and a real dataset on energy efficiency from Tsanas and Xifara (Energy and Buildings, 49, 560-567, 2012).
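The paper's sampler is a Laplace mixture of linear mixed regressions; as a rough, publicly available stand-in, scikit-learn's variational truncated Dirichlet-process mixture (Gaussian components, not Laplace, and variational inference rather than Gibbs sampling) illustrates clustering with an unknown number of components:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
# two well-separated synthetic clusters
X = np.concatenate([rng.normal(0, 1, size=(100, 1)),
                    rng.normal(10, 1, size=(100, 1))])

# truncated Dirichlet-process mixture: the truncation level bounds
# the number of clusters, but unused components get near-zero weight
dpm = BayesianGaussianMixture(
    n_components=10,  # truncation level, not the true cluster count
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
labels = dpm.predict(X)
```

`predict_proba` gives per-observation classification probabilities, analogous to the quantities the paper computes within its Gibbs sampler.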

Convergence rate of a test statistics observed by the longitudinal data with long memory

  • Kim, Yoon Tae; Park, Hyun Suk
    • Communications for Statistical Applications and Methods, Vol. 24, No. 5, pp. 481-492, 2017
  • This paper investigates the convergence rate of a test statistic given by the two-scale sampling method of Aït-Sahalia and Jacod (Annals of Statistics, 37, 184-222, 2009). The statistic tests whether longitudinal data exhibit long-memory dependence driven by fractional Brownian motion with Hurst parameter H ∈ (1/2, 1). We obtain an upper bound in the Kolmogorov distance for the normal approximation of this test statistic. As the main tool for our work, we use the recent results of Nourdin and Peccati (Probability Theory and Related Fields, 145, 75-118, 2009; Annals of Probability, 37, 2231-2261, 2009), which are obtained by combining Malliavin calculus with Stein's method for normal approximation.
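A loose numerical companion to the setting above: simulate exact fractional Gaussian noise and estimate the Hurst parameter. This uses a simple variance-time regression, not the paper's two-scale test statistic; `fgn_sample` and `hurst_variance_time` are illustrative helpers invented here.

```python
import numpy as np

def fgn_sample(n, H, rng):
    """Exact fractional Gaussian noise via Cholesky of its covariance."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1)**(2*H) - 2*np.abs(k)**(2*H)
                   + np.abs(k - 1)**(2*H))           # autocovariance at lag k
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def hurst_variance_time(x, scales=(1, 2, 4, 8, 16, 32)):
    """Variance-time estimate: Var(block mean of size m) ~ m^(2H-2)."""
    log_m, log_v = [], []
    for m in scales:
        nblocks = len(x) // m
        means = x[:nblocks*m].reshape(nblocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(3)
x = fgn_sample(512, H=0.8, rng=rng)   # long memory: H > 1/2
H_hat = hurst_variance_time(x)
```

An estimate well above 1/2, as here, is the kind of evidence of long-memory dependence the formal test statistic quantifies with a proper null distribution.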

Bayesian analysis of financial volatilities addressing long-memory, conditional heteroscedasticity and skewed error distribution

  • Oh, Rosy; Shin, Dong Wan; Oh, Man-Suk
    • Communications for Statistical Applications and Methods, Vol. 24, No. 5, pp. 507-518, 2017
  • Volatility plays a crucial role in the theory and applications of asset pricing, optimal portfolio allocation, and risk management. This paper proposes a combined model of autoregressive fractionally integrated moving average (ARFIMA), generalized autoregressive conditional heteroscedasticity (GARCH), and a skewed-t error distribution to accommodate important features of volatility data: long memory, heteroscedasticity, and an asymmetric error distribution. A fully Bayesian approach is proposed to estimate the parameters of the model simultaneously, which yields parameter estimates satisfying the necessary constraints in the model. The approach can be easily implemented using JAGS, a free and user-friendly software package, to generate Markov chain Monte Carlo samples from the joint posterior distribution of the parameters. The method is illustrated using a daily volatility index from the Chicago Board Options Exchange (CBOE). JAGS code for the model specification is provided in the Appendix.
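As a simplified sketch of the GARCH component only — not the full Bayesian ARFIMA-GARCH skewed-t model or its JAGS implementation — the block below fits a Gaussian GARCH(1,1) by maximum likelihood to a simulated path:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Gaussian GARCH(1,1) negative log-likelihood with recursion
    sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf  # enforce positivity and stationarity by hand
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1]
    return 0.5*np.sum(np.log(2*np.pi*sigma2) + r**2/sigma2)

# simulate a GARCH(1,1) path with known parameters
rng = np.random.default_rng(4)
n, omega, alpha, beta = 2000, 0.1, 0.1, 0.8
r = np.empty(n)
s2 = omega / (1 - alpha - beta)        # unconditional variance
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha*r[t]**2 + beta*s2

res = minimize(garch11_nll, x0=[0.05, 0.05, 0.9], args=(r,),
               method="Nelder-Mead")
```

The constraints enforced here by rejection (positivity, α + β < 1) are the sort of restrictions the fully Bayesian approach in the paper satisfies automatically through its priors.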

Fused sliced inverse regression in survival analysis

  • Yoo, Jae Keun
    • Communications for Statistical Applications and Methods, Vol. 24, No. 5, pp. 533-541, 2017
  • Sufficient dimension reduction (SDR) replaces the original p-dimensional predictors with a lower-dimensional, linearly transformed predictor. Sliced inverse regression (SIR) has the longest and most popular history among SDR methodologies. The critical weakness of SIR is its known sensitivity to the number of slices. Recently, fused sliced inverse regression was developed to overcome this deficit; it combines SIR kernel matrices constructed from various choices of the number of slices. In this paper, fused sliced inverse regression and SIR are compared to show that the former has a practical advantage over the latter in survival regression. Numerical studies confirm this, and a real data example is presented.
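A bare-bones version of plain SIR (the comparison baseline here, not the fused estimator) can be written in a few lines: slice the response, average the standardized predictors within slices, and eigen-decompose the between-slice covariance. The toy data are invented for illustration.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, d=1):
    """Sliced inverse regression: top eigenvectors of the between-slice
    covariance of standardized predictors estimate the SDR subspace."""
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals**-0.5) @ vecs.T   # cov^{-1/2}
    Z = (X - mu) @ inv_sqrt                          # standardized predictors
    order = np.argsort(y)                            # slice by sorted response
    M = np.zeros((p, p))
    for sl in np.array_split(order, n_slices):
        m = Z[sl].mean(axis=0)                       # within-slice mean
        M += len(sl)/n * np.outer(m, m)              # SIR kernel matrix
    _, v = np.linalg.eigh(M)
    eta = v[:, ::-1][:, :d]        # top-d eigenvectors in Z-scale
    return inv_sqrt @ eta          # back-transform to the X-scale

rng = np.random.default_rng(5)
n, p = 500, 4
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.2 * rng.normal(size=n)   # true direction is e1
beta = sir_directions(X, y)
```

Rerunning with different `n_slices` values illustrates the slice sensitivity that the fused estimator is designed to remove by averaging kernel matrices across choices.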

Estimation of structural vector autoregressive models

  • Lutkepohl, Helmut
    • Communications for Statistical Applications and Methods, Vol. 24, No. 5, pp. 421-441, 2017
  • In this survey, estimation methods for structural vector autoregressive models are presented in a systematic way. Both frequentist and Bayesian methods are considered. Depending on the model setup and the type of restrictions, least squares estimation, instrumental variables estimation, method-of-moments estimation, and generalized method-of-moments estimation are considered. The methods are presented in a unified framework that enables a practitioner to find the most suitable estimation method for a given model setup and set of restrictions. It is emphasized that specifying the identifying restrictions as linear restrictions on the structural parameters is helpful. Examples are provided to illustrate alternative model setups, types of restrictions, and the most suitable corresponding estimation methods.
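For the simplest case in this taxonomy — recursive identification, where the restrictions are zero (hence linear) restrictions making the impact matrix lower-triangular — estimation reduces to reduced-form OLS plus a Cholesky factorization. A toy sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(6)
T, K = 500, 2
A = np.array([[0.5, 0.1], [0.0, 0.4]])   # true VAR(1) coefficient matrix
B = np.array([[1.0, 0.0], [0.5, 1.0]])   # true lower-triangular impact matrix

y = np.zeros((T, K))
for t in range(1, T):
    y[t] = A @ y[t-1] + B @ rng.standard_normal(K)

# reduced-form OLS: regress y_t on y_{t-1}
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
U = Y - X @ A_hat.T                        # reduced-form residuals
Sigma_u = U.T @ U / (len(U) - K)           # residual covariance
B_hat = np.linalg.cholesky(Sigma_u)        # recursive identification
```

With non-recursive restrictions, the survey's point is that the same reduced-form step is combined instead with instrumental variables, method-of-moments, or GMM machinery.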

Comparison of parameter estimation methods for normal inverse Gaussian distribution

  • Yoon, Jeongyoen; Kim, Jiyeon; Song, Seongjoo
    • Communications for Statistical Applications and Methods, Vol. 27, No. 1, pp. 97-108, 2020
  • This paper compares several methods for estimating the parameters of the normal inverse Gaussian distribution. Ordinary maximum likelihood estimation and the method of moments often do not work properly due to restrictions on the parameters. We examine the performance of adjusted estimation methods along with ordinary maximum likelihood and method of moments estimation through simulation and a real data application, and we also examine the effect of the initial value on the estimation methods. The simulation results show that the ordinary maximum likelihood estimator is significantly affected by the initial value; in addition, the adjusted estimators have smaller root mean square error than the ordinary estimators and are less affected by the initial value. With real datasets, we obtain results similar to those of the simulation studies. Based on these results, we suggest using adjusted maximum likelihood estimates, with adjusted method of moments estimates as initial values, to estimate the parameters of the normal inverse Gaussian distribution.
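As a baseline for the comparison above, SciPy's built-in `norminvgauss` distribution provides generic maximum likelihood fitting. This is the ordinary MLE, not the adjusted estimators the paper recommends; in SciPy's parameterization the shape parameters must satisfy a > |b|, which is the kind of restriction that can trip up unadjusted estimation.

```python
import numpy as np
from scipy.stats import norminvgauss

rng = np.random.default_rng(7)
# draw from a NIG distribution with known shapes (a=2, b=1, so a > |b|)
sample = norminvgauss.rvs(a=2.0, b=1.0, size=1000, random_state=rng)

# generic MLE: SciPy optimizes the log-likelihood over (a, b, loc, scale)
a_hat, b_hat, loc_hat, scale_hat = norminvgauss.fit(sample)
```

Supplying moment-based starting values to `fit` (its optional `loc`/`scale`/shape guesses) mirrors the paper's suggestion of seeding maximum likelihood with method of moments estimates.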

Unified methods for variable selection and outlier detection in a linear regression

  • Seo, Han Son
    • Communications for Statistical Applications and Methods, Vol. 26, No. 6, pp. 575-582, 2019
  • The problem of selecting variables in the presence of outliers is considered. Variable selection and outlier detection are not separable problems, because each observation affects the fitted regression equation differently and has a different influence on each variable. We suggest a simultaneous method for variable selection and outlier detection in a linear regression model. The suggested procedure uses a sequential method to detect outliers and all possible subset regressions for model selection. A simplified version of the procedure is also proposed to reduce the computational burden. The procedures are compared to other variable selection methods using real data sets known to contain outliers. Examples show that the proposed procedures are effective and superior to robust algorithms in selecting the best model.
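The all-possible-subsets component can be sketched as follows; this is a generic BIC-scored best-subset search on toy data, not the authors' combined outlier-detection procedure:

```python
import numpy as np
from itertools import combinations

def best_subset_bic(X, y):
    """Exhaustive all-possible-subsets regression scored by BIC."""
    n, p = X.shape
    best = (np.inf, ())
    for k in range(1, p + 1):
        for idx in combinations(range(p), k):
            Xs = np.column_stack([np.ones(n), X[:, idx]])
            coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = ((y - Xs @ coef)**2).sum()
            bic = n*np.log(rss/n) + (k + 1)*np.log(n)  # +1 for the intercept
            if bic < best[0]:
                best = (bic, idx)
    return best

rng = np.random.default_rng(8)
n = 100
X = rng.normal(size=(n, 5))
y = 2*X[:, 0] - 1.5*X[:, 2] + rng.normal(size=n)  # only columns 0 and 2 matter

bic, chosen = best_subset_bic(X, y)
```

In the paper's procedure this search is interleaved with sequential outlier detection, since an undetected outlier can change which subset scores best; the simplified version exists because the exhaustive loop above grows as 2^p.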

Analysis of cause-of-death mortality and actuarial implications

  • Kwon, Hyuk-Sung; Nguyen, Vu Hai
    • Communications for Statistical Applications and Methods, Vol. 26, No. 6, pp. 557-573, 2019
  • Mortality study is an essential component of actuarial risk management for life insurance policies, annuities, and pension plans. Life expectancy has drastically increased over the last several decades; consequently, longevity risk associated with annuity products and pension systems has emerged as a crucial issue. Among the various aspects of mortality study, consideration of cause-of-death mortality can provide a more comprehensive understanding of the nature of mortality/longevity risk. In this case study, cause-of-death mortality data in Korea and the US were analyzed, and a multinomial logistic regression model was constructed to quantify the impact on actuarial values of a mortality reduction in a specific cause. The results of the analyses imply that mortality improvement due to a specific cause should be carefully monitored and reflected in mortality/longevity risk management. It was also confirmed that the multinomial logistic regression model is a useful tool for analyzing cause-of-death mortality for actuarial applications.
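A minimal sketch of the modeling tool named above — a multinomial logistic regression of cause of death on age, fitted with scikit-learn. The data-generating mechanism is invented for illustration and is not the Korean/US data analyzed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
# toy deaths-by-cause data: age shifts the cause-of-death distribution
n = 600
age = rng.uniform(30, 90, size=n)
logits = np.column_stack([0.02*age, 0.05*age - 2.0, np.zeros(n)])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
cause = np.array([rng.choice(3, p=pr) for pr in probs])  # 0, 1, 2 = causes

# multinomial logistic regression of cause on age
model = LogisticRegression(max_iter=1000).fit(age.reshape(-1, 1), cause)

# estimated cause-of-death distribution at two ages
pred = model.predict_proba(np.array([[40.0], [85.0]]))
```

Fitted cause probabilities like `pred` are what feed the actuarial step: removing or reducing one cause's share and recomputing annuity or insurance values quantifies the impact of cause-specific mortality improvement.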